* [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
@ 2021-07-09 17:29 Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (6 more replies)
0 siblings, 7 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-09 17:29 UTC
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, David Hunt, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon
Cc: Ferruh Yigit, dev
There is confusion about how to set the maximum Rx packet length;
this patch aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the maximum Rx packet size
via the 'uint32_t max_rx_pkt_len' field of the config struct
'struct rte_eth_conf'.
The 'rte_eth_dev_set_mtu()' API can also be used to set the MTU, and
the result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way: they store
the configured values in different variables, which makes it hard to
figure out which one to use, and having two different methods for a
related setting is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
  frame, while 'max_rx_pkt_len' is the size of the whole Ethernet
  frame. The difference is the Ethernet frame overhead, which may vary
  from device to device based on what the device supports, like VLAN
  and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs
  already ignore this documented behavior.
* When jumbo frames are enabled, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as parameter and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For
this, 'max_rx_pkt_len' is replaced by 'mtu', which is always valid,
independent of jumbo frame support.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function and
the result should be stored in '(struct rte_eth_dev)->data->mtu'.
After that point both the application and the PMD use the MTU from
this variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
As additional clarification, the MTU is used to configure the device
for the physical Rx/Tx size limitation. Another related issue is the
size of the buffer used to store Rx packets: many PMDs use the mbuf
data buffer size as the Rx buffer size and compare the MTU against the
Rx buffer size to decide whether to enable scattered Rx, if the PMD
supports it. If scattered Rx is not supported by the device, an MTU
bigger than the Rx buffer size should fail.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
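As a side note, below is a minimal sketch of how an application could
derive the per-device L2 overhead and request a specific Rx frame size
through the new 'rxmode.mtu' field; the helper and the single-queue
port setup are illustrative only and not part of this patch:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Illustrative helper: device L2 overhead, as described above. */
static uint32_t
eth_dev_l2_overhead(const struct rte_eth_dev_info *dev_info)
{
    if (dev_info->max_mtu != UINT16_MAX &&
        dev_info->max_rx_pktlen > dev_info->max_mtu)
        return dev_info->max_rx_pktlen - dev_info->max_mtu;
    return RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
}

/* Configure 'port_id' to accept frames up to 'frame_size' bytes. */
static int
configure_port_for_frame_size(uint16_t port_id, uint32_t frame_size)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf conf = { 0 };
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    /* MTU is the frame size minus the device specific overhead. */
    conf.rxmode.mtu = frame_size - eth_dev_l2_overhead(&dev_info);

    return rte_eth_dev_configure(port_id, 1, 1, &conf);
}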
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 45 ++++-----
app/test-pmd/config.c | 18 ++--
app/test-pmd/parameters.c | 4 +-
app/test-pmd/testpmd.c | 94 ++++++++++--------
app/test-pmd/testpmd.h | 2 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 -----
doc/guides/sample_app_ug/flow_classify.rst | 8 +-
doc/guides/sample_app_ug/ioat.rst | 1 -
doc/guides/sample_app_ug/ip_reassembly.rst | 2 +-
doc/guides/sample_app_ug/skeleton.rst | 8 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 ++--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 ++--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +--
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 ++++------
drivers/net/dpaa2/dpaa2_ethdev.c | 31 +++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +---
drivers/net/e1000/igb_rxtx.c | 16 ++-
drivers/net/ena/ena_ethdev.c | 27 ++---
drivers/net/enetc/enetc_ethdev.c | 24 ++---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 ++++----
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++--
drivers/net/hns3/hns3_ethdev.c | 28 ++----
drivers/net/hns3/hns3_ethdev_vf.c | 38 +++----
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_ethdev_vf.c | 14 +--
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +--
drivers/net/ice/ice_rxtx.c | 12 +--
drivers/net/igc/igc_ethdev.c | 51 +++-------
drivers/net/igc/igc_ethdev.h | 7 ++
drivers/net/igc/igc_txrx.c | 22 ++---
drivers/net/ionic/ionic_ethdev.c | 12 +--
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 +++----
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 ++-
drivers/net/liquidio/lio_ethdev.c | 20 +---
drivers/net/mlx4/mlx4_rxq.c | 17 ++--
drivers/net/mlx5/mlx5_rxq.c | 25 ++---
drivers/net/mvneta/mvneta_ethdev.c | 7 --
drivers/net/mvneta/mvneta_rxtx.c | 13 ++-
drivers/net/mvpp2/mrvl_ethdev.c | 34 +++----
drivers/net/nfp/nfp_net.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +--
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +--
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +--
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 ++--
drivers/net/virtio/virtio_ethdev.c | 4 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 10 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 11 +--
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 11 ++-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 8 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 11 +--
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 11 ++-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 7 +-
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 10 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +--
lib/ethdev/rte_ethdev.c | 98 +++++++++++--------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
118 files changed, 531 insertions(+), 848 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8468018cf35d..8bdc042f6e8e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len")) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
printf("Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
-
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- printf("max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- printf("rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ printf("max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- printf("Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ printf("rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 04ae0feb5852..a87265d7638b 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag) {
printf("Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (mtu > RTE_ETHER_MTU) {
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
} else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 5e69d2aa8cfe..8e8556d74a4a 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ rx_mode.mtu = (uint32_t) n -
+ (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 1cdd3cdd12b6..2c79cae05664 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1417,6 +1411,20 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config(void)
{
@@ -1465,7 +1473,7 @@ init_config(void)
rte_exit(EXIT_FAILURE,
"rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
printf("Updating jumbo frame offload failed for port %u\n",
pid);
@@ -1512,14 +1520,19 @@ init_config(void)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
- if ((data_size + RTE_PKTMBUF_HEADROOM) >
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+
+ if ((data_size + RTE_PKTMBUF_HEADROOM) >
mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size +
- RTE_PKTMBUF_HEADROOM;
- warning = 1;
+ mbuf_data_size[0] = data_size +
+ RTE_PKTMBUF_HEADROOM;
+ warning = 1;
+ }
}
}
}
@@ -3352,43 +3365,44 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * If 'max_rx_pktlen' is zero, it is set to the current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update the flags but not the MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
+
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
printf("Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3409,18 +3423,16 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = rte_eth_dev_set_mtu(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- printf("Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) {
+ printf("Failed to set MTU to %u for port %u\n", new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d61a055bdd1b..42143f85924f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..3e9254fe896d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 917482dbe2a5..b8d43aa90098 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 6470f1c05ac8..ce16e1047df2 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -551,7 +551,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 403c2b03a386..c98242f3b72f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 83299646ddb1..338734826a7a 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -584,9 +584,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 9584d6bfd723..86da47d8f9c6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -56,31 +56,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 01915971ae83..2cc36a688af3 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -325,13 +325,7 @@ Forwarding application is shown below:
}
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. code-block:: c
-
- static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
- };
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 7eb557f91c7a..c5c06261e395 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -162,7 +162,6 @@ multiple CBDMA channels per port:
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index e72c8492e972..2090b23fdd1c 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -175,7 +175,7 @@ each RX queue uses its own mempool.
.. code-block:: c
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index 263d8debc81b..a88cb8f14a4b 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -157,13 +157,7 @@ Forwarding application is shown below:
}
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. code-block:: c
-
- static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
- };
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..2554f5fdf59a 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if ((max_rx_pktlen > avp->guest_mbuf_size) ||
+ (max_rx_pktlen > avp->host_mbuf_size)) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not have sent
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..76aeec077f2b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c9536f79267d..335505a106d5 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1128,13 +1128,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1172,6 +1167,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1186,7 +1182,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -2992,6 +2988,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3005,8 +3002,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3033,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3055,9 +3051,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
PMD_DRV_LOG(INFO, "New MTU is %d\n", new_mtu);
return rc;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index b01ef003e65c..b2a1833e3f91 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 7adab4605819..da6c5e8f242f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..8cf61f12a8d6 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..56703e3a39e8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8b803b8542dc..6213bcbf3a43 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..35b517891d67 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..de12997b4bdd 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index dfe68279fa7b..e9b718786a39 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
"max mtu: %d, min mtu: %d",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. new_mtu: %d "
"max mtu: %d min mtu: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ETH_RSS_UDP;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..cdb9783b5372 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..6a81ceb62ba7 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly support the configured MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
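For reference, with the duplicated bookkeeping removed, the enic scatter sizing above depends only on the stored MTU. A short sketch of the ceiling calculation it performs, assuming 'dev' and 'mp' are the device and the Rx mempool in scope:

	/* Sketch only: buffers needed per packet when scatter Rx is enabled. */
	uint32_t max_rx_pktlen = enic_mtu_to_max_rx_pktlen(dev->data->mtu);
	uint16_t mbuf_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
	unsigned int mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;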
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..5e4b361ca6c0 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 946465779f2e..c737ef8d06d8 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index e51512560e15..8bccdeddb2f7 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
{
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
+ uint32_t max_rx_pktlen;
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
+ max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
+ if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
+ max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
hns3_err(hw, "maximum Rx packet length must be greater than %u "
"and no more than %u when jumbo frame enabled.",
(uint16_t)HNS3_DEFAULT_FRAME_LEN,
@@ -2400,13 +2391,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
return -EINVAL;
}
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
+ return hns3_dev_mtu_set(dev, conf->rxmode.mtu);
}
static int
@@ -2622,7 +2607,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2643,7 +2628,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index e582503f529b..ca839fa55fa0 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
+ uint32_t max_rx_pktlen;
bool gro_en;
int ret;
@@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
+ max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
+ if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
+ max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
+ hns3_err(hw, "maximum Rx packet length must be greater "
+ "than %u and less than %u when jumbo frame enabled.",
+ (uint16_t)HNS3_DEFAULT_FRAME_LEN,
+ (uint16_t)HNS3_MAX_FRAME_LEN);
+ ret = -EINVAL;
+ goto cfg_err;
}
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret)
+ goto cfg_err;
+
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
goto cfg_err;
@@ -935,7 +926,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index cb9eccf9faae..6b81688a7225 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
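For reference, the hns3 Rx path above now derives the scattered Rx decision from the stored MTU instead of rxmode. A generic sketch of that check, assuming plain Ethernet overhead (device specific overheads such as HNS3_ETH_OVERHEAD differ):

	#include <stdbool.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Sketch only: scatter is needed when one mbuf cannot hold a full
	 * frame, or when the application requested it explicitly. */
	static inline bool
	need_scattered_rx(struct rte_eth_dev *dev, struct rte_mempool *mp)
	{
		uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
				    RTE_PKTMBUF_HEADROOM;
		uint32_t frame_size = dev->data->mtu + RTE_ETHER_HDR_LEN +
				      RTE_ETHER_CRC_LEN;

		return frame_size > buf_size ||
		       (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER);
	}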
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..1161f301b9ae 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..086a167ca672 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << I40E_RXQ_CTX_DBUFF_SHIFT));
len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS;
- rxq->max_pkt_len = RTE_MIN(len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD);
/**
* Check if the jumbo frame and maximum packet length are set correctly
@@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
hw->adapter_stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
@@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8d65f287f455..aa43796ef1af 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2904,8 +2904,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 41382c6d669b..13c2329d85a7 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len, len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
- max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(len, frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 69fe6e63d1d3..34b6c9b2a7ed 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 63f735d1ff72..bdda6fee3f8e 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3426,8 +3426,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3806,14 +3806,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 3f6e7359844b..a3de4172e2bc 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -262,15 +262,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -361,11 +362,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..b26723064b07 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering QinQ, so two VLAN tags need to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
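As a quick check of the relocated macro: with QinQ counted the igc overhead is RTE_ETHER_HDR_LEN (14) + RTE_ETHER_CRC_LEN (4) + 2 * VLAN_TAG_SIZE (4) = 26 bytes, so:

	/* Sketch only: a 1500 byte MTU maps to a 1526 byte frame limit. */
	uint32_t frame_size = RTE_ETHER_MTU + IGC_ETH_OVERHEAD; /* 1500 + 26 = 1526 */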
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..d80808a002f5 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..97447a10e46a 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..3f5fc66abf71 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..3634c0c8c5f0 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..b9048ade3c35 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5172,7 +5172,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5186,9 +5185,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5197,23 +5196,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6267,12 +6261,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6556,8 +6548,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6565,7 +6556,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6582,8 +6573,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..9bcbc445f2d0 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index d69f36e97770..5e32a6ce6940 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5051,6 +5051,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5086,7 +5087,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5160,8 +5161,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5641,6 +5641,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5677,10 +5678,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5739,8 +5739,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..f0c165c89ba7 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..4a5cfd22aa71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
};
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index bb9a9080871d..bd16dde6de13 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1336,10 +1336,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1378,7 +1379,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1416,7 +1417,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1441,7 +1442,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1460,7 +1461,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1484,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1493,9 +1494,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1517,13 +1518,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 63d348e27936..9d578b4ffa5d 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -589,9 +584,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1984,7 +1976,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -1999,17 +1991,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index b18edd8c7bac..ff531fdb2354 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -644,7 +644,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1551,16 +1551,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 40af99a26a17..9f162475523c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..ba282762b749 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,14 +58,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -74,7 +71,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -82,10 +78,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..53b2c0ca10e3 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..62a126999a5c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b993..2afb13b77892 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1016,15 +1016,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index ac117f9c4814..ca9538fb8f2f 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -364,14 +364,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..1d1360faff66 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e62675520a15..d773a81665d7 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,8 +3482,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 6f577f4c80df..3362ca097ca7 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1143,8 +1143,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 05683056676c..9491cc2669f7 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2009,8 +2009,6 @@ virtio_dev_configure(struct rte_eth_dev *dev)
const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
struct virtio_hw *hw = dev->data->dev_private;
- uint32_t ether_hdr_len = RTE_ETHER_HDR_LEN + VLAN_TAG_LEN +
- hw->vtnet_hdr_size;
uint64_t rx_offloads = rxmode->offloads;
uint64_t tx_offloads = txmode->offloads;
uint64_t req_features;
@@ -2039,7 +2037,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len)
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..98e47e0812d5 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..70c37a7d2ba7 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -117,7 +117,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..0b973d392dc8 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..f5c28268d9f8 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..9d9f150522dd 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 94c155364842..3e1daa228316 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,12 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
struct flow_classifier {
struct rte_flow_classifier *cls;
};
@@ -191,7 +185,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -202,6 +196,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 2e377e2d4bb6..5dbf60f7ef54 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -806,7 +806,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 77a6a18d1914..f97287ce2243 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,7 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -914,9 +914,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -959,8 +959,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..8628db22f56b 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000, /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index ce8882a45883..f868e5d906c7 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -875,7 +875,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
*/
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1046,9 +1047,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..f8a1f544c21d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2161,7 +2160,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2209,10 +2207,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index fd6207a18b79..989d70ae257a 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -107,7 +107,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -694,9 +694,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..c10814c6a94f 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 8e7eb3248589..cef4187467f0 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 4f5161649234..b36c6123c652 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,7 +215,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index ab341e55b299..0d0857bf8041 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..913037d5f835 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -1833,12 +1832,12 @@ parse_args(int argc, char **argv)
print_usage(prgname);
return -1;
}
- port_conf.rxmode.max_rx_pkt_len = ret;
+ port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
}
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ printf("set jumbo frame max packet length to %u\n",
+ (unsigned int)port_conf.rxmode.mtu +
+ RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
break;
}
case OPT_RULE_IPV4_NUM:
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 75c2e0ef3f3f..ddcb2fbc995d 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -510,7 +509,8 @@ parse_args(int argc, char **argv)
print_usage(prgname);
return -1;
}
- port_conf.rxmode.max_rx_pkt_len = ret;
+ port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN
+ + RTE_ETHER_CRC_LEN);
}
break;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f8dfed163423..02221a79fabf 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,7 +250,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -1972,11 +1971,13 @@ parse_args(int argc, char **argv)
print_usage(prgname);
return -1;
}
- port_conf.rxmode.max_rx_pkt_len = ret;
+ port_conf.rxmode.mtu = ret -
+ (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
}
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ printf("set jumbo frame max packet length to %u\n",
+ (unsigned int)port_conf.rxmode.mtu +
+ RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
}
if (!strncmp(lgopts[option_index].name,
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4cb800aa158d..80b5b93d5f0d 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -719,7 +718,8 @@ parse_args(int argc, char **argv)
print_usage(prgname);
return -1;
}
- port_conf.rxmode.max_rx_pkt_len = ret;
+ port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
}
break;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..1960f00ad28d 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -3004,10 +3003,12 @@ parse_args(int argc, char **argv)
print_usage(prgname);
return -1;
}
- port_conf.rxmode.max_rx_pkt_len = ret;
+ port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
}
printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ (unsigned int)port_conf.rxmode.mtu +
+ RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
break;
}
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..52f2a139d2c6 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000, /* Jumbo frame max MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 173451eedcbe..54148631f09e 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 6e724f37835a..2e9ed3cf7ef7 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -54,7 +54,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 192521c3c6b0..ea86c69b07ad 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -118,7 +112,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -131,6 +125,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index 43b9d17a3c91..26c63ffed742 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,12 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +26,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -44,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2179eadb979..e27712727f6a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -639,8 +639,8 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu =
+ JUMBO_FRAME_MAX_SIZE;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..309d1a3a8444 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c607eabb5b0c..3451125639f9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
static inline int
eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
- uint32_t max_rx_pkt_len, uint32_t dev_info_size)
+ uint32_t max_rx_pktlen, uint32_t dev_info_size)
{
int ret = 0;
if (dev_info_size == 0) {
- if (config_size != max_rx_pkt_len) {
+ if (config_size != max_rx_pktlen) {
RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
" %u != %u is not allowed\n",
- port_id, config_size, max_rx_pkt_len);
+ port_id, config_size, max_rx_pktlen);
ret = -EINVAL;
}
} else if (config_size > dev_info_size) {
@@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
+ uint32_t max_rx_pktlen;
uint16_t overhead_len;
int diag;
int ret;
@@ -1375,11 +1389,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2142,13 +2149,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index faf3bd901d75..9f288f98329c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
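
For context, a minimal application-side sketch of the reworked
configuration path (names taken from the diff above; the port id, queue
counts and the driver's MTU limits are assumed to be handled elsewhere —
this is illustrative only, not part of the patch):

	#include <string.h>
	#include <rte_ethdev.h>

	static int
	configure_port_mtu(uint16_t port_id, uint16_t mtu)
	{
		struct rte_eth_conf conf;
		int ret;

		memset(&conf, 0, sizeof(conf));
		/* Request the MTU directly; the Ethernet overhead is added
		 * by the library/driver when programming the max frame size.
		 */
		conf.rxmode.mtu = mtu;

		ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
		if (ret < 0)
			return ret;

		/* The MTU can also be changed later at runtime. */
		return rte_eth_dev_set_mtu(port_id, mtu);
	}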
* [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
@ 2021-07-09 17:29 ` Ferruh Yigit
2021-07-13 13:48 ` Andrew Rybchenko
` (2 more replies)
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
` (5 subsequent siblings)
6 siblings, 3 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-09 17:29 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Andrew Rybchenko, Maciej Czekaj, Jiawen Wu, Jian Wang,
Thomas Monjalon
Cc: Ferruh Yigit, dev
Setting an MTU larger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload to make use
of it.
When the jumbo frame offload is not enabled by the application but an
MTU larger than RTE_ETHER_MTU is requested, there are two options:
either fail or enable the jumbo frame offload implicitly.
Many drivers choose to enable the jumbo frame offload implicitly, since
setting a large MTU value already implies it, and this improves
usability.
This patch moves that logic from the drivers to the library, both to
reduce duplicated code in the drivers and to make the behaviour more
visible.
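As a condensed sketch only (not the verbatim implementation; see the
lib/ethdev/rte_ethdev.c hunk at the end of this patch), the library-side
behaviour becomes roughly:

	/* Reject a jumbo MTU if the device cannot accept jumbo frames. */
	if (mtu > RTE_ETHER_MTU &&
	    (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
		return -EINVAL;

	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
	if (ret == 0) {
		dev->data->mtu = mtu;
		/* Reflect the implicit decision back to the offload flag. */
		if (mtu > RTE_ETHER_MTU)
			dev->data->dev_conf.rxmode.offloads |=
				DEV_RX_OFFLOAD_JUMBO_FRAME;
		else
			dev->data->dev_conf.rxmode.offloads &=
				~DEV_RX_OFFLOAD_JUMBO_FRAME;
	}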
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/i40e/i40e_ethdev_vf.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_net.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
28 files changed, 29 insertions(+), 171 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 335505a106d5..4344a012f06e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3018,15 +3018,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 56703e3a39e8..a444f749bb96 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 6213bcbf3a43..be2858b3adac 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 35b517891d67..f15774eae20d 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4401,15 +4401,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c737ef8d06d8..c1cde811a252 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1556,13 +1556,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 8bccdeddb2f7..868d381a4772 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2597,7 +2597,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2607,7 +2606,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2622,12 +2620,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index ca839fa55fa0..ff28cad53a03 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -920,12 +920,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1161f301b9ae..c5058f26dff2 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11772,11 +11772,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 086a167ca672..2015a86ba5ca 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2884,11 +2884,6 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 13c2329d85a7..ba5be45e8c5e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1446,13 +1446,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bdda6fee3f8e..502e410b5641 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3806,11 +3806,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b9048ade3c35..c4696f34a7a1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5196,13 +5196,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index f0c165c89ba7..5c40f16bfa24 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index ff531fdb2354..5cea035e1465 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1550,12 +1550,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index ba282762b749..0c97ef7584a0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,11 +58,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 53b2c0ca10e3..71065f8072ac 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2afb13b77892..85209b5befbd 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1014,15 +1014,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1d1360faff66..0639889b2144 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index d773a81665d7..b1a3f9fbb84d 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,12 +3482,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 3451125639f9..d649a5dd69a9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3625,6 +3625,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (!ret) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-07-09 17:29 ` Ferruh Yigit
2021-07-13 13:56 ` Andrew Rybchenko
2021-07-18 7:52 ` Xu, Rosen
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
` (4 subsequent siblings)
6 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-09 17:29 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Igor Russkikh, Maciej Czekaj,
Jiawen Wu, Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev
Move the requested MTU value check into the API to avoid duplicating
the same range check in each driver.
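As an illustrative sketch (assuming the dev_info path shown earlier in
this series; not the verbatim implementation), the single check that
remains in rte_eth_dev_set_mtu() is essentially:

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Reject values outside the range the driver advertises. */
	if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
		return -EINVAL;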
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/i40e/i40e_ethdev_vf.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_net.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
23 files changed, 29 insertions(+), 169 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2960834b4539..c36cd7b1d2f0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 4344a012f06e..1e7da8ba61a6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -2991,7 +2991,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 0c9cc2f5bb3f..70b879fed100 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a444f749bb96..60dd4f67fc26 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index be2858b3adac..6b44b0557e6a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index f15774eae20d..fb69210ba9f4 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4368,9 +4368,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4379,15 +4377,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index fbcbbb6c0533..a7372c1787c7 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c1cde811a252..ce0b52c718ab 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1539,17 +1539,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c5058f26dff2..dad151eac5f1 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11754,25 +11754,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 2015a86ba5ca..f7f9d44ef181 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2866,25 +2866,16 @@ i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = vf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index ba5be45e8c5e..049671ef3da9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1432,21 +1432,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 502e410b5641..c1a96d3de183 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3788,21 +3788,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcbc26b8186e..e279ae1fff1d 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index e8a33f04bd69..377b96c0236a 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 5c40f16bfa24..0fd8b247aabf 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 5cea035e1465..8efeacc03943 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1539,10 +1539,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index fb65be2c2dc3..b2355fa695bc 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 0c97ef7584a0..cba03b4bb9b8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -18,11 +18,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
int rc;
frame_size += NIX_TIMESYNC_RX_OFFSET * otx2_ethdev_is_ptp_en(dev);
-
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 71065f8072ac..098e56e9822f 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 0639889b2144..ac8477cbd7f4 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b1a3f9fbb84d..41b0e63cd79e 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3459,18 +3459,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index d649a5dd69a9..41c9e630e4d4 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3638,6 +3638,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3645,6 +3648,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-07-09 17:29 ` Ferruh Yigit
2021-07-13 14:07 ` Andrew Rybchenko
2021-07-18 7:53 ` Xu, Rosen
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
` (3 subsequent siblings)
6 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-09 17:29 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Pavel Belous, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can
deduce it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application explicitly setting this flag to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
against 'RTE_ETHER_MTU'.
This additional configuration is removed for simplification.
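As an application-side sketch of the deduction (the helper below is
illustrative, not part of the ethdev API):
#include <rte_ethdev.h>
/* Illustrative helper: a port whose maximum supported MTU exceeds the
 * standard Ethernet MTU can receive jumbo frames, so no dedicated
 * capability flag is needed.
 */
static int
port_supports_jumbo(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;
	return dev_info.max_mtu > RTE_ETHER_MTU;
}
On the driver side the equivalent deduction is a comparison of
'dev->data->mtu' against 'RTE_ETHER_MTU', as the hunks below show.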
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 24 +---------
app/test-pmd/testpmd.c | 46 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 5 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 2 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_ethdev_vf.c | 3 +-
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_net.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 2 -
examples/l3fwd-graph/main.c | 1 -
examples/l3fwd-power/main.c | 2 -
examples/l3fwd/main.c | 1 -
.../performance-thread/l3fwd-thread/main.c | 2 -
examples/vhost/main.c | 2 -
lib/ethdev/rte_ethdev.c | 26 +----------
lib/ethdev/rte_ethdev.h | 1 -
76 files changed, 42 insertions(+), 250 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8bdc042f6e8e..c0b6132d64e8 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1921,7 +1921,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index a87265d7638b..23a48557b676 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1136,39 +1136,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- printf("Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag) {
printf("Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU) {
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2c79cae05664..92feadefab59 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1473,11 +1473,6 @@ init_config(void)
rte_exit(EXIT_FAILURE,
"rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- printf("Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa &
DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
@@ -3364,24 +3359,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3390,39 +3379,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- printf("Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 42143f85924f..b94bf668dc4d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index feb0c6a7657a..e6f1628402fc 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -886,7 +886,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index c98242f3b72f..a077c30644d2 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index c36cd7b1d2f0..0bc9e5eeeb10 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index e93a7eb933b4..9ad7821b4736 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -591,7 +591,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1e7da8ba61a6..c4fd27bd92de 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -728,15 +728,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1221,7 +1216,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index b2a1833e3f91..844ac1581a61 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1731,14 +1731,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 4eead0390532..aa147eee45c9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -75,9 +75,8 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 70b879fed100..1374f32b6826 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if ((&rxq->fl) != NULL)
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 60dd4f67fc26..9cc808b767ea 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 6b44b0557e6a..53508972a4c2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..1ae78fe71f02 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..e061f80a906a 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index de12997b4bdd..9998d4ea4179 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e9b718786a39..4322dce260f5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2042,8 +2042,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Inform framework about available features */
dev_info->rx_offload_capa = rx_feat;
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index a7372c1787c7..6457677d300a 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..6a4758ea8e8a 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..47c5efe9ea77 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 5e4b361ca6c0..093021246286 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index ce0b52c718ab..b1563350ec0e 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -747,7 +747,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 868d381a4772..0c58c55844b0 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2717,7 +2717,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index ff28cad53a03..c488e03f23a4 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -956,7 +956,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index dad151eac5f1..ad7802f63031 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3758,7 +3758,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index f7f9d44ef181..1c314e2ffdd0 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1932,7 +1932,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
/**
* Check if the jumbo frame and maximum packet length are set correctly
*/
- if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -2378,7 +2378,6 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index aa43796ef1af..a421acf8f6b6 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2906,7 +2906,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 049671ef3da9..f156add80e0d 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -574,7 +574,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -939,7 +939,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 34b6c9b2a7ed..72fdcc29c28a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -65,7 +65,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -664,7 +664,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..07843c6dbc92 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -141,7 +141,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c1a96d3de183..a17c11e95e0b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3491,7 +3491,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index a3de4172e2bc..a7b0915dabfc 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -259,7 +259,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
@@ -273,7 +272,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index d80808a002f5..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c4696f34a7a1..8c180f77a04e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6229,7 +6229,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6251,14 +6250,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 5e32a6ce6940..1e3944127148 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3021,7 +3021,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5083,7 +5082,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index bd16dde6de13..b7828ef4ebb5 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 9d578b4ffa5d..7782b56d24d2 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 8efeacc03943..4e860edad12c 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -643,8 +643,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1307,9 +1306,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e95d933a866d..25f6cbe42512 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -147,7 +147,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 098e56e9822f..abd4b998bd3a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc516812..3174b9150340 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -915,8 +915,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9491cc2669f7..efb76ccf63e6 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2442,7 +2442,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 5bffbb8a0e03..60f83aaaedb8 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -56,7 +56,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f97287ce2243..7b5632fba63a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -149,8 +149,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index f868e5d906c7..a1e5e6db6115 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -164,8 +164,7 @@ static struct rte_eth_conf port_conf = {
.mq_mode = ETH_MQ_RX_RSS,
.mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f8a1f544c21d..bcddd30c486a 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2207,8 +2207,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 989d70ae257a..c38e310b5691 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -109,7 +109,6 @@ static struct rte_eth_conf port_conf = {
.rxmode = {
.mtu = JUMBO_FRAME_MAX_SIZE,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c10814c6a94f..0fd945e7e0b2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 913037d5f835..81d1066c473b 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1813,8 +1813,6 @@ parse_args(int argc, char **argv)
};
printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index ddcb2fbc995d..7b197f49d992 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -493,7 +493,6 @@ parse_args(int argc, char **argv)
const struct option lenopts = {"max-pkt-len",
required_argument, 0, 0};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
/*
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 02221a79fabf..a95cb9966dc8 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -1952,8 +1952,6 @@ parse_args(int argc, char **argv)
0, 0};
printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 80b5b93d5f0d..21c9fc73d9b8 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -702,7 +702,6 @@ parse_args(int argc, char **argv)
"max-pkt-len", required_argument, 0, 0
};
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
/*
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 1960f00ad28d..1d9a2d5cccbe 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -2986,8 +2986,6 @@ parse_args(int argc, char **argv)
required_argument, 0, 0};
printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index e27712727f6a..a2bdc8928fcb 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -637,8 +637,6 @@ us_vhost_parse_args(int argc, char **argv)
}
mergeable = !!ret;
if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
vmdq_conf_default.rxmode.mtu =
JUMBO_FRAME_MAX_SIZE;
}
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 41c9e630e4d4..a0f20a71aefe 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1479,13 +1478,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3625,7 +3617,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3653,27 +3644,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret) {
+ if (!ret)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9f288f98329c..b31e660de23e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1359,7 +1359,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
` (2 preceding siblings ...)
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-07-13 12:47 ` Andrew Rybchenko
2021-07-21 16:46 ` Ferruh Yigit
2021-07-18 7:45 ` Xu, Rosen
` (2 subsequent siblings)
6 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 12:47 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
David Hunt, Harry van Haaren, Cristian Dumitrescu, Radu Nicolau,
Akhil Goyal, Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh, Thomas Monjalon
Cc: dev
On 7/9/21 8:29 PM, Ferruh Yigit wrote:
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also two different related method is confusing for
> the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, but this may be different from device to
> device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> As additional clarification, MTU is used to configure the device for
> physical Rx/Tx limitation. Other related issue is size of the buffer to
> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
> And compares MTU against Rx buffer size to decide enabling scattered Rx
> or not, if PMD supports it. If scattered Rx is not supported by device,
> MTU bigger than Rx buffer size should fail.
>
Do I understand correctly that the target is 21.11?
Really huge work. Many thanks.
See my notes below.
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
[snip]
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
The subtraction requires an overflow check. Can max_pkt_sz be 0, or just
smaller than RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN?
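Something along these lines would avoid the underflow (an untested
sketch; evt_err() is assumed to be the error helper already used
elsewhere in test-eventdev):

        uint32_t overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        /* opt->max_pkt_sz comes from the command line, so guard the
         * subtraction against values smaller than the Ethernet overhead.
         */
        if (opt->max_pkt_sz < overhead + RTE_ETHER_MIN_MTU) {
                evt_err("max_pkt_sz %u is too small", opt->max_pkt_sz);
                return -EINVAL;
        }
        port_conf.rxmode.mtu = opt->max_pkt_sz - overhead;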
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 8468018cf35d..8bdc042f6e8e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len")) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> printf("Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> -
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - printf("max-pkt-len can not be less than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port->dev_info);
> - if (ret != 0) {
> - printf("rte_eth_dev_info_get() failed for port %u\n",
> - pid);
> - return;
> - }
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + printf("max-pkt-len can not be less than %d\n",
fprintf() to stderr, please.
Here and in a number of places below.
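For example:

        fprintf(stderr, "max-pkt-len can not be less than %d\n",
                RTE_ETHER_MIN_LEN);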
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
> - } else {
> - printf("Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + printf("rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 04ae0feb5852..a87265d7638b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
[snip]
> @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag) {
> printf("Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (mtu > RTE_ETHER_MTU) {
> rte_port->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> } else
I guess curly brackets should be removed now.
> rte_port->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
[snip]
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1cdd3cdd12b6..2c79cae05664 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
[snip]
> @@ -1465,7 +1473,7 @@ init_config(void)
> rte_exit(EXIT_FAILURE,
> "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> printf("Updating jumbo frame offload failed for port %u\n",
> pid);
> @@ -1512,14 +1520,19 @@ init_config(void)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
> + uint16_t mtu;
>
> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + data_size = mtu + eth_overhead /
> + port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> +
> + if ((data_size + RTE_PKTMBUF_HEADROOM) >
Unnecessary parentheses.
> mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size +
> - RTE_PKTMBUF_HEADROOM;
> - warning = 1;
> + mbuf_data_size[0] = data_size +
> + RTE_PKTMBUF_HEADROOM;
> + warning = 1;
> + }
> }
> }
> }
[snip]
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index c515de3bf71d..0a8d29277aeb 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct pmd_internals *pmd = dev->data->dev_private;
> struct ifreq ifr = { .ifr_mtu = mtu };
> - int err = 0;
>
> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> - if (!err)
> - dev->data->mtu = mtu;
> -
> - return err;
> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
The cleanup could be done separately, before this patch, since it just
makes a long patch even longer and is in fact unrelated: the assignment
after the callback is already done.
> }
>
> static int
[snip]
> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> index 77a6a18d1914..f97287ce2243 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -146,7 +146,7 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE,
Before the patch JUMBO_FRAME_MAX_SIZE included the overhead, but
after the patch it is used as if it does not include the overhead.
There are a number of similar cases in other apps.
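If the macro keeps its current value (a full frame length), the
initializer would need to subtract the overhead explicitly, e.g.
(sketch):

        /* JUMBO_FRAME_MAX_SIZE is a frame length, convert it to an MTU */
        .mtu = JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN),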
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_SCATTER |
[snip]
> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
> index 16bcffe356bc..8628db22f56b 100644
> --- a/examples/ip_pipeline/link.c
> +++ b/examples/ip_pipeline/link.c
> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000, /* Jumbo frame MTU */
Strictly speaking, 9000 included the overhead before the patch and
does not include the overhead after the patch.
There are a number of similar cases in other apps.
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
[snip]
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index a1f457b564b6..913037d5f835 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
[snip]
> @@ -1833,12 +1832,12 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> }
> - printf("set jumbo frame max packet length "
> - "to %u\n",
> - (unsigned int)
> - port_conf.rxmode.max_rx_pkt_len);
> + printf("set jumbo frame max packet length to %u\n",
> + (unsigned int)port_conf.rxmode.mtu +
> + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
I think that the overhead should be obtained from dev_info, with a
fallback to the value used above.
There are many similar cases in other apps.
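A minimal app-side helper could mirror what the series already adds
in testpmd and ethdev (sketch; app_eth_overhead() is a name made up
for illustration and assumes dev_info has been queried for the port):

        static uint32_t
        app_eth_overhead(const struct rte_eth_dev_info *dev_info)
        {
                /* Prefer the device-reported overhead, fall back to the
                 * plain Ethernet header plus CRC when it is not available.
                 */
                if (dev_info->max_mtu != UINT16_MAX &&
                    dev_info->max_rx_pktlen > dev_info->max_mtu)
                        return dev_info->max_rx_pktlen - dev_info->max_mtu;

                return RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
        }

used when converting the command-line frame length to an MTU:

        port_conf.rxmode.mtu = ret - app_eth_overhead(&dev_info);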
[snip]
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c607eabb5b0c..3451125639f9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
>
> static inline int
> eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> - uint32_t max_rx_pkt_len, uint32_t dev_info_size)
> + uint32_t max_rx_pktlen, uint32_t dev_info_size)
> {
> int ret = 0;
>
> if (dev_info_size == 0) {
> - if (config_size != max_rx_pkt_len) {
> + if (config_size != max_rx_pktlen) {
> RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
> " %u != %u is not allowed\n",
> - port_id, config_size, max_rx_pkt_len);
> + port_id, config_size, max_rx_pktlen);
This change looks a bit unrelated and makes the long patch
even longer. Maybe it is better to do the cleanup
first (before the patch).
> ret = -EINVAL;
> }
> } else if (config_size > dev_info_size) {
> @@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1375,11 +1389,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
I don't understand it. It would be good to add comments to
explain the logic above.
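For reference, my reading of the intended logic, with the kind of
comment that could be added (the comment wording is mine, not from
the patch):

        if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
                /*
                 * Jumbo frames were not requested by the application, so an
                 * MTU outside the standard Ethernet range is replaced by the
                 * default RTE_ETHER_MTU instead of failing the configure.
                 */
                if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
                    dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
                        dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
        }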
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
[snip]
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index faf3bd901d75..9f288f98329c 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
Maximum Transmission Unit looks a bit confusing in an Rx mode
structure.
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
[snip]
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-07-13 13:48 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-18 7:49 ` Xu, Rosen
2021-07-19 14:38 ` Ajit Khaparde
2 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 13:48 UTC (permalink / raw)
To: Ferruh Yigit, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: dev
On 7/9/21 8:29 PM, Ferruh Yigit wrote:
> Setting MTU bigger than RTE_ETHER_MTU requires the jumbo frame support,
> and application should enable the jumbo frame offload support for it.
>
> When jumbo frame offload is not enabled by application, but MTU bigger
> than RTE_ETHER_MTU is requested there are two options, either fail or
> enable jumbo frame offload implicitly.
>
> Enabling jumbo frame offload implicitly is selected by many drivers
> since setting a big MTU value already implies it, and this increases
> usability.
>
> This patch moves this logic from drivers to the library, both to reduce
> the duplicated code in the drivers and to make behaviour more visible.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Very good cleanup, many thanks.
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
[snip]
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 3451125639f9..d649a5dd69a9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3625,6 +3625,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> int ret;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_dev *dev;
> + int is_jumbo_frame_capable = 0;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + is_jumbo_frame_capable = 1;
> }
>
> + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> + return -EINVAL;
> +
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> - if (!ret)
> + if (!ret) {
Since the line is updated anyway, may I ask you to use an explicit
comparison against 0, as the coding style says?
> dev->data->mtu = mtu;
>
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |=
> + DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &=
> + ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> return eth_err(port_id, ret);
> }
>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-07-13 13:56 ` Andrew Rybchenko
2021-07-18 7:52 ` Xu, Rosen
1 sibling, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 13:56 UTC (permalink / raw)
To: Ferruh Yigit, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena,
Haiyue Wang, Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
Rosen Xu, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Nithin Dabilpuram,
Kiran Kumar K, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: dev
On 7/9/21 8:29 PM, Ferruh Yigit wrote:
> Move requested MTU value check to the API to prevent the duplicated
> code.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-07-13 14:07 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-21 12:39 ` Ferruh Yigit
2021-07-18 7:53 ` Xu, Rosen
1 sibling, 2 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 14:07 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Pavel Belous,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, Thomas Monjalon
Cc: dev
On 7/9/21 8:29 PM, Ferruh Yigit wrote:
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announce this capability, application can deduct the
> capability by checking reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of application explicitly set this flag to enable jumbo
> frames, this can be deducted by driver by comparing requested 'mtu' to
> 'RTE_ETHER_MTU'.
I can imagine a case where the application wants to enable a jumbo
MTU at run-time, but enabling it requires knowing this in advance in
order to configure the HW correctly (i.e. the offload is needed).
I think it may be ignored. The driver should either reject an MTU
change in the started state or do the restart automatically on request.
However, driver maintainers should keep this in mind when reviewing
the patch.
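A driver that needs this information up front could, for example,
refuse the change while the port is running (hypothetical mtu_set
callback; the xxx_ names and XXX_ETH_OVERHEAD are placeholders):

        static int
        xxx_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
        {
                struct xxx_hw *hw = dev->data->dev_private;

                /* Switching between normal and jumbo mode requires queue
                 * reconfiguration, so only allow it on a stopped port.
                 */
                if (dev->data->dev_started)
                        return -EBUSY;

                hw->frame_size = mtu + XXX_ETH_OVERHEAD;

                return 0;
        }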
>
> Removing this additional configuration for simplification.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
ethdev part:
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
[snip]
> diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
> index 3b4d9c3ee6f4..1ae78fe71f02 100644
> --- a/drivers/net/e1000/e1000_ethdev.h
> +++ b/drivers/net/e1000/e1000_ethdev.h
> @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
> void em_dev_clear_queues(struct rte_eth_dev *dev);
> void em_dev_free_queues(struct rte_eth_dev *dev);
>
> -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
> -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
> +uint64_t em_get_rx_port_offloads_capa(void);
> +uint64_t em_get_rx_queue_offloads_capa(void);
I'm not sure that it is a step in the right direction.
Maybe it is better to keep dev unused.
The net/e1000 maintainers should decide.
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
` (3 preceding siblings ...)
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
@ 2021-07-18 7:45 ` Xu, Rosen
2021-07-19 3:35 ` Huisong Li
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
6 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-07-18 7:45 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra,
Maciej Czekaj, Ray Kinsella, Neil Horman, Iremonger, Bernard,
Richardson, Bruce, Ananyev, Konstantin, Mcnamara, John,
Igor Russkikh, Pavel Belous, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Wang, Haiyue,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Daley, John, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Xing, Beilei, Wu, Jingjing, Yang, Qiming, Andrew Boyer,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Wiles, Keith, Jiawen Wu, Jian Wang,
Maxime Coquelin, Xia, Chenbo, Chautru, Nicolas, Hunt, David,
Van Haaren, Harry, Dumitrescu, Cristian, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Doherty, Declan, Pavan Nikhilesh,
Rybalchenko, Kirill, Singh, Jasvinder, Thomas Monjalon
Cc: dev
Hi,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Saturday, July 10, 2021 1:29
> Subject: [PATCH 1/4] ethdev: fix max Rx packet length
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also two different related method is confusing for
> the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, but this may be different from device to
> device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> As additional clarification, MTU is used to configure the device for
> physical Rx/Tx limitation. Other related issue is size of the buffer to
> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
> And compares MTU against Rx buffer size to decide enabling scattered Rx
> or not, if PMD supports it. If scattered Rx is not supported by device,
> MTU bigger than Rx buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> app/test-eventdev/test_perf_common.c | 1 -
> app/test-eventdev/test_pipeline_common.c | 5 +-
> app/test-pmd/cmdline.c | 45 ++++-----
> app/test-pmd/config.c | 18 ++--
> app/test-pmd/parameters.c | 4 +-
> app/test-pmd/testpmd.c | 94 ++++++++++--------
> app/test-pmd/testpmd.h | 2 +-
> app/test/test_link_bonding.c | 1 -
> app/test/test_link_bonding_mode4.c | 1 -
> app/test/test_link_bonding_rssconf.c | 2 -
> app/test/test_pmd_perf.c | 1 -
> doc/guides/nics/dpaa.rst | 2 +-
> doc/guides/nics/dpaa2.rst | 2 +-
> doc/guides/nics/features.rst | 2 +-
> doc/guides/nics/fm10k.rst | 2 +-
> doc/guides/nics/mlx5.rst | 4 +-
> doc/guides/nics/octeontx.rst | 2 +-
> doc/guides/nics/thunderx.rst | 2 +-
> doc/guides/rel_notes/deprecation.rst | 25 -----
> doc/guides/sample_app_ug/flow_classify.rst | 8 +-
> doc/guides/sample_app_ug/ioat.rst | 1 -
> doc/guides/sample_app_ug/ip_reassembly.rst | 2 +-
> doc/guides/sample_app_ug/skeleton.rst | 8 +-
> drivers/net/atlantic/atl_ethdev.c | 3 -
> drivers/net/avp/avp_ethdev.c | 17 ++--
> drivers/net/axgbe/axgbe_ethdev.c | 7 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 21 ++--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
> drivers/net/cnxk/cnxk_ethdev.c | 9 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 12 +--
> drivers/net/cxgbe/cxgbe_main.c | 3 +-
> drivers/net/cxgbe/sge.c | 3 +-
> drivers/net/dpaa/dpaa_ethdev.c | 52 ++++------
> drivers/net/dpaa2/dpaa2_ethdev.c | 31 +++---
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 18 +---
> drivers/net/e1000/igb_rxtx.c | 16 ++-
> drivers/net/ena/ena_ethdev.c | 27 ++---
> drivers/net/enetc/enetc_ethdev.c | 24 ++---
> drivers/net/enic/enic_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 42 ++++----
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++--
> drivers/net/hns3/hns3_ethdev.c | 28 ++----
> drivers/net/hns3/hns3_ethdev_vf.c | 38 +++----
> drivers/net/hns3/hns3_rxtx.c | 10 +-
> drivers/net/i40e/i40e_ethdev.c | 10 +-
> drivers/net/i40e/i40e_ethdev_vf.c | 14 +--
> drivers/net/i40e/i40e_rxtx.c | 4 +-
> drivers/net/iavf/iavf_ethdev.c | 9 +-
> drivers/net/ice/ice_dcf_ethdev.c | 5 +-
> drivers/net/ice/ice_ethdev.c | 14 +--
> drivers/net/ice/ice_rxtx.c | 12 +--
> drivers/net/igc/igc_ethdev.c | 51 +++-------
> drivers/net/igc/igc_ethdev.h | 7 ++
> drivers/net/igc/igc_txrx.c | 22 ++---
> drivers/net/ionic/ionic_ethdev.c | 12 +--
> drivers/net/ionic/ionic_rxtx.c | 6 +-
> drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 35 +++----
> drivers/net/ixgbe/ixgbe_pf.c | 6 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 15 ++-
> drivers/net/liquidio/lio_ethdev.c | 20 +---
> drivers/net/mlx4/mlx4_rxq.c | 17 ++--
> drivers/net/mlx5/mlx5_rxq.c | 25 ++---
> drivers/net/mvneta/mvneta_ethdev.c | 7 --
> drivers/net/mvneta/mvneta_rxtx.c | 13 ++-
> drivers/net/mvpp2/mrvl_ethdev.c | 34 +++----
> drivers/net/nfp/nfp_net.c | 9 +-
> drivers/net/octeontx/octeontx_ethdev.c | 12 +--
> drivers/net/octeontx2/otx2_ethdev.c | 2 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +--
> drivers/net/pfe/pfe_ethdev.c | 7 +-
> drivers/net/qede/qede_ethdev.c | 16 +--
> drivers/net/qede/qede_rxtx.c | 8 +-
> drivers/net/sfc/sfc_ethdev.c | 4 +-
> drivers/net/sfc/sfc_port.c | 6 +-
> drivers/net/tap/rte_eth_tap.c | 7 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +--
> drivers/net/txgbe/txgbe_ethdev.c | 7 +-
> drivers/net/txgbe/txgbe_ethdev.h | 4 +
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
> drivers/net/txgbe/txgbe_rxtx.c | 19 ++--
> drivers/net/virtio/virtio_ethdev.c | 4 +-
> examples/bbdev_app/main.c | 1 -
> examples/bond/main.c | 1 -
> examples/distributor/main.c | 1 -
> .../pipeline_worker_generic.c | 1 -
> .../eventdev_pipeline/pipeline_worker_tx.c | 1 -
> examples/flow_classify/flow_classify.c | 10 +-
> examples/ioat/ioatfwd.c | 1 -
> examples/ip_fragmentation/main.c | 11 +--
> examples/ip_pipeline/link.c | 2 +-
> examples/ip_reassembly/main.c | 11 ++-
> examples/ipsec-secgw/ipsec-secgw.c | 7 +-
> examples/ipv4_multicast/main.c | 8 +-
> examples/kni/main.c | 6 +-
> examples/l2fwd-cat/l2fwd-cat.c | 8 +-
> examples/l2fwd-crypto/main.c | 1 -
> examples/l2fwd-event/l2fwd_common.c | 1 -
> examples/l3fwd-acl/main.c | 11 +--
> examples/l3fwd-graph/main.c | 4 +-
> examples/l3fwd-power/main.c | 11 ++-
> examples/l3fwd/main.c | 4 +-
> .../performance-thread/l3fwd-thread/main.c | 7 +-
> examples/pipeline/obj.c | 2 +-
> examples/ptpclient/ptpclient.c | 10 +-
> examples/qos_meter/main.c | 1 -
> examples/qos_sched/init.c | 1 -
> examples/rxtx_callbacks/main.c | 10 +-
> examples/skeleton/basicfwd.c | 10 +-
> examples/vhost/main.c | 4 +-
> examples/vm_power_manager/main.c | 11 +--
> lib/ethdev/rte_ethdev.c | 98 +++++++++++--------
> lib/ethdev/rte_ethdev.h | 2 +-
> lib/ethdev/rte_ethdev_trace.h | 2 +-
> 118 files changed, 531 insertions(+), 848 deletions(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-
> eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-
> eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 8468018cf35d..8bdc042f6e8e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void
> *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len")) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> printf("Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> -
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - printf("max-pkt-len can not be less
> than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port-
> >dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port-
> >dev_info);
> - if (ret != 0) {
> - printf("rte_eth_dev_info_get() failed for
> port %u\n",
> - pid);
> - return;
> - }
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - max_rx_pkt_len_backup = port-
> >dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + printf("max-pkt-len can not be less than %d\n",
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res-
> >value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len =
> max_rx_pkt_len_backup;
> - } else {
> - printf("Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + printf("rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 04ae0feb5852..a87265d7638b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> int diag;
> struct rte_port *rte_port = &ports[port_id];
> struct rte_eth_dev_info dev_info;
> - uint16_t eth_overhead;
> int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag) {
> printf("Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when
> the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen -
> dev_info.max_mtu;
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
> {
> if (mtu > RTE_ETHER_MTU) {
> rte_port->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> } else
> rte_port->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
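
For reference, the interactive path that ends up here is the existing
"port config mtu <port_id> <value>" command; with this change a value above
RTE_ETHER_MTU simply turns DEV_RX_OFFLOAD_JUMBO_FRAME on and anything else
turns it off, without recomputing a max_rx_pkt_len.
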
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 5e69d2aa8cfe..8e8556d74a4a 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv)
> if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
> n = atoi(optarg);
> if (n >= RTE_ETHER_MIN_LEN)
> - rx_mode.max_rx_pkt_len =
> (uint32_t) n;
> + rx_mode.mtu = (uint32_t) n -
> + (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> else
> rte_exit(EXIT_FAILURE,
> "Invalid max-pkt-len=%d -
> should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1cdd3cdd12b6..2c79cae05664 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1;
> /*
> * Ethernet device configuration.
> */
> -struct rte_eth_rxmode rx_mode = {
> - /* Default maximum frame length.
> - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> - * in init_config().
> - */
> - .max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
>
> struct rte_eth_txmode tx_mode = {
> .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1417,6 +1411,20 @@ check_nb_hairpinq(queueid_t hairpinq)
> return 0;
> }
>
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t eth_overhead;
> +
> + if (dev_info->max_mtu != UINT16_MAX &&
> + dev_info->max_rx_pktlen > dev_info->max_mtu)
> + eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
> + else
> + eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return eth_overhead;
> +}
> +
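
The helper matches the calculation the removed deprecation notice described:
overhead = max_rx_pktlen - max_mtu when the PMD reports both limits,
otherwise plain RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN. With illustrative
numbers, a device reporting max_rx_pktlen = 9618 and max_mtu = 9600 yields an
18 byte overhead, while one that also counts two VLAN tags would report 9626
and yield 26.
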
> static void
> init_config(void)
> {
> @@ -1465,7 +1473,7 @@ init_config(void)
> rte_exit(EXIT_FAILURE,
> "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> printf("Updating jumbo frame offload failed for
> port %u\n",
> pid);
> @@ -1512,14 +1520,19 @@ init_config(void)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max !=
> UINT16_MAX &&
> port-
> >dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port-
> >dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
> + uint16_t mtu;
>
> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + data_size = (mtu + eth_overhead) /
> + port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> +
> + if ((data_size + RTE_PKTMBUF_HEADROOM) >
> mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size +
> - RTE_PKTMBUF_HEADROOM;
> - warning = 1;
> + mbuf_data_size[0] = data_size +
> + RTE_PKTMBUF_HEADROOM;
> + warning = 1;
> + }
> }
> }
> }
> @@ -3352,43 +3365,44 @@ rxtx_port_config(struct rte_port *port)
>
> /*
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> + * MTU is also aligned.
> *
> * port->dev_info should be set before calling this function.
> *
> + * If 'max_rx_pktlen' is zero, it is set to the current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update the flags without changing the MTU.
> + *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> uint64_t rx_offloads;
> - int ret;
> + uint16_t mtu, new_mtu;
> bool on;
>
> - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> - if (port->dev_info.max_mtu != UINT16_MAX &&
> - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> - eth_overhead = port->dev_info.max_rx_pktlen -
> - port->dev_info.max_mtu;
> - else
> - eth_overhead = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> + eth_overhead = get_eth_overhead(&port->dev_info);
>
> - rx_offloads = port->dev_conf.rxmode.offloads;
> + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> + printf("Failed to get MTU for port %u\n", portid);
> + return -1;
> + }
> +
> + if (max_rx_pktlen == 0)
> + max_rx_pktlen = mtu + eth_overhead;
>
> - /* Default config value is 0 to use PMD specific overhead */
> - if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU
> + eth_overhead;
> + rx_offloads = port->dev_conf.rxmode.offloads;
> + new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU +
> eth_overhead) {
> + if (new_mtu <= RTE_ETHER_MTU) {
> rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> on = false;
> } else {
> if ((port->dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> printf("Frame size (%u) is not supported by
> port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len,
> - portid);
> + max_rx_pktlen, portid);
> return -1;
> }
> rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3409,18 +3423,16 @@ update_jumbo_frame_offload(portid_t portid)
> }
> }
>
> - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> - * if unset do it here
> - */
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - ret = rte_eth_dev_set_mtu(portid,
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead);
> - if (ret)
> - printf("Failed to set MTU to %u for port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead,
> - portid);
> + if (mtu == new_mtu)
> + return 0;
> +
> + if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) {
> + printf("Failed to set MTU to %u for port %u\n", new_mtu, portid);
> + return -1;
> }
>
> + port->dev_conf.rxmode.mtu = new_mtu;
> +
> return 0;
> }
>
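
With the new signature the two call patterns look roughly like this (sketch,
port id and length are made up):

    /* only re-evaluate the JUMBO_FRAME flag against the current device MTU */
    update_jumbo_frame_offload(port_id, 0);

    /* request a 9018 byte max frame: the helper derives MTU = 9018 - overhead,
     * sets DEV_RX_OFFLOAD_JUMBO_FRAME if that exceeds RTE_ETHER_MTU and only
     * calls rte_eth_dev_set_mtu() if the MTU actually changed
     */
    update_jumbo_frame_offload(port_id, 9018);
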
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index d61a055bdd1b..42143f85924f 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id,
> __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> .split_hdr_size = 0,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c
> b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c
> b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
> static struct rte_eth_conf rss_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr
> ports_eth_addr[RTE_MAX_ETHPORTS];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 917482dbe2a5..b8d43aa90098 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index 6470f1c05ac8..ce16e1047df2 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -551,7 +551,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 403c2b03a386..c98242f3b72f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
> Supports Rx jumbo frames.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.max_rx_pkt_len``.
> + ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
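
From the application side the jumbo request now reads roughly as below
(sketch, port and queue counts are arbitrary):

    struct rte_eth_conf conf = {0};

    conf.rxmode.mtu = 9000;                              /* desired MTU */
    conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;  /* MTU > RTE_ETHER_MTU */
    rte_eth_dev_configure(port_id, 1, 1, &conf);
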
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The FM10000 family of NICS support a maximum of a 15K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
> up to 15364 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 83299646ddb1..338734826a7a 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -584,9 +584,9 @@ Driver options
> and each stride receives one packet. MPRQ can improve throughput for
> small-packet traffic.
>
> - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> + When MPRQ is enabled, MTU can be larger than the size of
> user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
> -  configure large stride size enough to accommodate max_rx_pkt_len as long as
> +  configure large stride size enough to accommodate MTU as long as
> device allows. Note that this can waste system memory compared to enabling Rx
> scatter and multi-segment packet.
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame.
> The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
> up to 32k bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
> up to 9200 bytes can still reach the host interface.
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd723..86da47d8f9c6 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -56,31 +56,6 @@ Deprecation Notices
> In 19.11 PMDs will still update the field even when the offload is not
> enabled.
>
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``,
> will be
> - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> - The new ``mtu`` field will be used to configure the initial device MTU via
> - ``rte_eth_dev_configure()`` API.
> - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> - The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to
> store
> - the configured ``mtu`` value,
> - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> - be used to store the user configuration request.
> - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME``
> enabled,
> - ``mtu`` field will be always valid.
> - When ``mtu`` config is not provided by the application, default
> ``RTE_ETHER_MTU``
> - value will be used.
> - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set
> successfully,
> - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> - An application may need to configure device for a specific Rx packet size,
> like for
> - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received
> packet size
> - can't be bigger than Rx buffer size.
> - To cover these cases an application needs to know the device packet
> overhead to be
> - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> - the device packet overhead can be calculated as:
> - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct
> rte_eth_dev_info).max_mtu``
> -
> * ethdev: ``rx_descriptor_done`` dev_ops and
> ``rte_eth_rx_descriptor_done``
> will be removed in 21.11.
> Existing ``rte_eth_rx_descriptor_status`` and
> ``rte_eth_tx_descriptor_status``
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst
> b/doc/guides/sample_app_ug/flow_classify.rst
> index 01915971ae83..2cc36a688af3 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -325,13 +325,7 @@ Forwarding application is shown below:
> }
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. code-block:: c
> -
> - static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> - };
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
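
With the struct gone a zero-initialised rte_eth_conf is enough for the sample
apps, something like (sketch, names are illustrative):

    struct rte_eth_conf port_conf = {0};

    /* rxmode.mtu left at 0: ethdev falls back to RTE_ETHER_MTU (1500) */
    rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
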
> diff --git a/doc/guides/sample_app_ug/ioat.rst
> b/doc/guides/sample_app_ug/ioat.rst
> index 7eb557f91c7a..c5c06261e395 100644
> --- a/doc/guides/sample_app_ug/ioat.rst
> +++ b/doc/guides/sample_app_ug/ioat.rst
> @@ -162,7 +162,6 @@ multiple CBDMA channels per port:
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst
> b/doc/guides/sample_app_ug/ip_reassembly.rst
> index e72c8492e972..2090b23fdd1c 100644
> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
> @@ -175,7 +175,7 @@ each RX queue uses its own mempool.
> .. code-block:: c
>
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) *
> RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) /
> BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += RTE_TEST_RX_DESC_DEFAULT +
> RTE_TEST_TX_DESC_DEFAULT;
> nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
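
Sanity-checking the new formula with illustrative values: for mtu = 9000 and
BUF_SIZE = 2048 the per-packet factor is (9000 + 14 + 4 + 2048 - 1) / 2048 = 5,
i.e. five buffers per reassembled frame, the same result the old
max_rx_pkt_len based formula gave for a 9018 byte frame.
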
> diff --git a/doc/guides/sample_app_ug/skeleton.rst
> b/doc/guides/sample_app_ug/skeleton.rst
> index 263d8debc81b..a88cb8f14a4b 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -157,13 +157,7 @@ Forwarding application is shown below:
> }
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. code-block:: c
> -
> - static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> - };
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return 0;
> }
>
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..2554f5fdf59a 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
> avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
> struct avp_dev *avp)
> {
> - unsigned int max_rx_pkt_len;
> + unsigned int max_rx_pktlen;
>
> - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
>
> - if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> - (max_rx_pkt_len > avp->host_mbuf_size)) {
> + if ((max_rx_pktlen > avp->guest_mbuf_size) ||
> + (max_rx_pktlen > avp->host_mbuf_size)) {
> /*
> * If the guest MTU is greater than either the host or guest
> * buffers then chained mbufs have to be enabled in the TX
> * direction. It is assumed that the application will not need
> - * to send packets larger than their max_rx_pkt_len (MRU).
> + * to send packets larger than their MTU.
> */
> return 1;
> }
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
>
> PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u)
> mbuf_size=(%u,%u)\n",
> avp->max_rx_pkt_len,
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN,
> avp->host_mbuf_size,
> avp->guest_mbuf_size);
>
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts, uint16_t nb_pkts)
> * function; send it truncated to avoid the
> performance
> * hit of having to manage returning the already
> * allocated buffer to the free list. This should not
> - * happen since the application should have set the
> - * max_rx_pkt_len based on its MTU and it should be
> + * happen since the application should not have sent
> + * packets larger than its MTU and it should be
> * policing its own packet sizes.
> */
> txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> struct axgbe_port *pdata = dev->data->dev_private;
> int ret;
> struct rte_eth_dev_data *dev_data = dev->data;
> - uint16_t max_pkt_len = dev_data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + uint16_t max_pkt_len;
>
> dev->dev_ops = &axgbe_eth_dev_ops;
>
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>
> rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
> rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((dev_data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) ||
> max_pkt_len > pdata->rx_buf_size)
> dev_data->scattered_rx = 1;
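
This is the scattered Rx decision the cover letter describes: compare the
frame length implied by the MTU against the Rx buffer size. With illustrative
numbers, mtu = 9000 implies a 9000 + 14 + 4 = 9018 byte frame, so a 2048 byte
rx_buf_size forces scattered_rx = 1 even when the application did not ask for
DEV_RX_OFFLOAD_SCATTER.
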
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (frame_size > AXGBE_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> val = 0;
> }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
> bnx2x_dev_configure(struct rte_eth_dev *dev)
> {
> struct bnx2x_softc *sc = dev->data->dev_private;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
>
> PMD_INIT_FUNC_TRACE(sc);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - dev->data->mtu = sc->mtu;
> - }
> + sc->mtu = dev->data->dev_conf.rxmode.mtu;
>
> if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
> PMD_DRV_LOG(ERR, sc, "The number of TX queues is
> greater than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c
> index c9536f79267d..335505a106d5 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1128,13 +1128,8 @@ static int bnxt_dev_configure_op(struct
> rte_eth_dev *eth_dev)
> rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len
> -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE *
> - BNXT_NUM_VLANS;
> - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> - }
> + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
> return 0;
>
> resource_error:
> @@ -1172,6 +1167,7 @@ void bnxt_print_link_info(struct rte_eth_dev
> *eth_dev)
> */
> static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> uint16_t buf_size;
> int i;
>
> @@ -1186,7 +1182,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev
> *eth_dev)
>
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq-
> >mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len >
> buf_size)
> + if (eth_dev->data->mtu + overhead > buf_size)
> return 1;
> }
> return 0;
> @@ -2992,6 +2988,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev,
> __rte_unused uint16_t queue_id,
>
> int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> uint32_t rc = 0;
> @@ -3005,8 +3002,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> if (!eth_dev->data->nb_rx_queues)
> return rc;
>
> - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN +
> - VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> + new_pkt_size = new_mtu + overhead;
>
> /*
> * Disallow any MTU change that would require scattered receive
> support
> @@ -3033,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
>
> /* Is there a change in mtu setting? */
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len ==
> new_pkt_size)
> + if (eth_dev->data->mtu == new_mtu)
> return rc;
>
> for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3055,9 +3051,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
> }
>
> - if (!rc)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_pkt_size;
> -
> PMD_DRV_LOG(INFO, "New MTU is %d\n", new_mtu);
>
> return rc;
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b01ef003e65c..b2a1833e3f91 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_VLAN_FILTER;
>
> - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - bonded_eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + slave_eth_dev->data->dev_conf.rxmode.mtu =
> + bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c
> b/drivers/net/cnxk/cnxk_ethdev.c
> index 7adab4605819..da6c5e8f242f 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp
> *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> }
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct cnxk_eth_rxq_sp *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
> /* Setup scatter mode if needed by jumbo */
> nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len -
> CNXK_NIX_L2_OVERHEAD +
> - CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> - rc = cnxk_nix_mtu_set(eth_dev, mtu);
> + rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> plt_err("Failed to set default MTU size, rc=%d", rc);
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> goto exit;
> }
>
> - frame_size += RTE_ETHER_CRC_LEN;
> -
> - if (frame_size > RTE_ETHER_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> return err;
>
> /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> + if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> /* set to jumbo mode if needed */
> - if (new_mtu > CXGBE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
>
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1,
> -1,
> -1, -1, true);
> - if (!err)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_mtu;
> -
> return err;
> }
>
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> const struct rte_eth_rxconf *rx_conf
> __rte_unused,
> struct rte_mempool *mp)
> {
> - unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> rxq->fl.size = temp_nb_desc;
>
> /* Set to jumbo mode if necessary */
> - if (pkt_len > CXGBE_ETH_MAX_LEN)
> + if (eth_dev->data->mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c
> b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
> unsigned int mtu;
> int ret;
>
> - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> + mtu = pi->eth_dev->data->mtu;
>
> conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> u32 wr_mid;
> u64 cntrl, *end;
> bool v6;
> - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> + u32 max_pkt_len;
>
> /* Reject xmit if queue is stopped */
> if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> return 0;
> }
>
> + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
> (unlikely(m->pkt_len > max_pkt_len)))
> goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 27d670f843d2..56703e3a39e8 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > DPAA_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> struct fman_if *fif = dev->process_private;
> struct __fman_if *__fif;
> struct rte_intr_handle *intr_handle;
> + uint32_t max_rx_pktlen;
> int speed, duplex;
> int ret;
>
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - DPAA_PMD_DEBUG("enabling jumbo");
> -
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - DPAA_MAX_RX_PKT_LEN)
> - max_len = dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> - else {
> - DPAA_PMD_INFO("enabling jumbo override conf
> max len=%d "
> - "supported is %d",
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - DPAA_MAX_RX_PKT_LEN);
> - max_len = DPAA_MAX_RX_PKT_LEN;
> - }
> -
> - fman_if_set_maxfrm(dev->process_private, max_len);
> - dev->data->mtu = max_len
> - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> + DPAA_PMD_INFO("enabling jumbo override conf max
> len=%d "
> + "supported is %d",
> + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
> }
>
> + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
> if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
> DPAA_PMD_DEBUG("enabling scatter mode");
> fman_if_set_sg(dev->process_private, 1);
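
Note the dpaa overhead also counts a VLAN tag, so the same MTU maps to a
slightly larger frame than on drivers that only add header and CRC: with
mtu = 1500 the frame passed to fman_if_set_maxfrm() becomes
1500 + 14 + 4 + 4 = 1522 bytes (illustrative, VLAN_TAG_SIZE taken as 4).
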
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> u32 flags = 0;
> int ret;
> u32 buffsz = rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> return -EINVAL;
> }
>
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> + VLAN_TAG_SIZE;
> /* Max packet can fit in single buffer */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + if (max_rx_pktlen <= buffsz) {
> ;
> } else if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - buffsz * DPAA_SGT_MAX_ENTRIES) {
> - DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> + DPAA_PMD_ERR("Maximum Rx packet size %d too
> big to fit "
> "MaxSGlist %d",
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - buffsz * DPAA_SGT_MAX_ENTRIES);
> + max_rx_pktlen, buffsz *
> DPAA_SGT_MAX_ENTRIES);
> rte_errno = EOVERFLOW;
> return -rte_errno;
> }
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> DPAA_PMD_WARN("The requested maximum Rx packet size
> (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz - RTE_PKTMBUF_HEADROOM);
> + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
> }
>
> dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>
> dpaa_intf->valid = 1;
> DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf-
> >name,
> - fman_if_get_sg_enable(fif),
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + fman_if_get_sg_enable(fif), max_rx_pktlen);
> /* checking if push mode only, no error check for now */
> if (!rxq->is_static &&
> dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 8b803b8542dc..6213bcbf3a43 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> int tx_l3_csum_offload = false;
> int tx_l4_csum_offload = false;
> int ret, tc_index;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev
> *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (eth_conf->rxmode.max_rx_pkt_len <=
> DPAA2_MAX_RX_PKT_LEN) {
> - ret = dpni_set_max_frame_length(dpni,
> CMD_PRI_LOW,
> - priv->token, eth_conf-
> >rxmode.max_rx_pkt_len
> - - RTE_ETHER_CRC_LEN);
> - if (ret) {
> - DPAA2_PMD_ERR(
> - "Unable to set mtu. check config");
> - return ret;
> - }
> - dev->data->mtu =
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
> -
> - VLAN_TAG_SIZE;
> - } else {
> - return -1;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> + if (ret) {
> + DPAA2_PMD_ERR("Unable to set mtu. check config");
> + return ret;
> }
> + } else {
> + return -1;
> }
>
> if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (frame_size > DPAA2_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/e1000/igb_ethdev.c
> b/drivers/net/e1000/igb_ethdev.c
> index 10ee0f33415a..35b517891d67 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
> }
>
> static void
> @@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE);
> + E1000_WRITE_REG(hw, E1000_RLPML,
> + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
> }
>
> static int
> @@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
>
> return 0;
> }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..de12997b4bdd 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len = dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> -
> rctl |= E1000_RCTL_LPE;
>
> /*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> /* setup MTU */
> - e1000_rlpml_set_vf(hw,
> - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE));
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index dfe68279fa7b..e9b718786a39 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev
> *dev,
> return rc;
> }
>
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> - uint32_t max_frame_len = adapter->max_mtu;
> -
> - if (adapter->edev_data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - max_frame_len =
> - adapter->edev_data-
> >dev_conf.rxmode.max_rx_pkt_len;
> -
> - return max_frame_len;
> -}
> -
> static int ena_check_valid_conf(struct ena_adapter *adapter)
> {
> - uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> + uint32_t mtu = adapter->edev_data->mtu;
>
> - if (max_frame_len > adapter->max_mtu || max_frame_len <
> ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
> "max mtu: %d, min mtu: %d",
> - max_frame_len, adapter->max_mtu,
> ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return ENA_COM_UNSUPPORTED;
> }
>
> @@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> ena_dev = &adapter->ena_dev;
> ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
>
> - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_DRV_LOG(ERR,
> "Invalid MTU setting. new_mtu: %d "
> "max mtu: %d min mtu: %d\n",
> - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return -EINVAL;
> }
>
> @@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> ETH_RSS_UDP;
>
> dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> - dev_info->max_rx_pktlen = adapter->max_mtu;
> + dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + dev_info->min_mtu = ENA_MIN_MTU;
> + dev_info->max_mtu = adapter->max_mtu;
> dev_info->max_mac_addrs = 1;
>
> dev_info->max_rx_queues = adapter->max_num_io_queues;
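
With min_mtu/max_mtu filled in, an application can still do the overhead
calculation the removed deprecation notice described, roughly (sketch,
port_id and rx_buf_size are assumptions):

    struct rte_eth_dev_info dev_info;
    uint32_t overhead, mtu_for_buf;

    rte_eth_dev_info_get(port_id, &dev_info);
    overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
    /* largest MTU that still fits one Rx buffer of rx_buf_size bytes */
    mtu_for_buf = rx_buf_size - overhead;
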
> diff --git a/drivers/net/enetc/enetc_ethdev.c
> b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > ENETC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads &=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 *
> ENETC_MAC_MAXFRM_SIZE);
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /*setting the MTU*/
> enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> ENETC_SET_MAXFRM(frame_size) |
> ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
> struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
> uint64_t rx_offloads = eth_conf->rxmode.offloads;
> uint32_t checksum = L3_CKSUM | L4_CKSUM;
> + uint32_t max_len;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> - ENETC_SET_MAXFRM(max_len));
> - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> - ENETC_MAC_MAXFRM_SIZE);
> - enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> - 2 * ENETC_MAC_MAXFRM_SIZE);
> - dev->data->mtu = RTE_ETHER_MAX_LEN -
> RTE_ETHER_HDR_LEN -
> - RTE_ETHER_CRC_LEN;
> - }
> + max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> + enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> + enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
> int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev
> *eth_dev,
> * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
> * a hint to the driver to size receive buffers accordingly so that
> * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> - * the user decide the buffer size via rxmode.max_rx_pkt_len,
> basically
> + * the user decide the buffer size via rxmode.mtu, basically
> * ignoring vNIC mtu.
> */
> device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic-
> >max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct
> vnic_rq *rq)
> struct rq_enet_desc *rqd = rq->ring.descs;
> unsigned i;
> dma_addr_t dma_addr;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint16_t rq_buf_len;
>
> if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic,
> struct vnic_rq *rq)
>
> /*
> * If *not* using scatter and the mbuf size is greater than the
> - * requested max packet size (max_rx_pkt_len), then reduce the
> - * posted buffer size to max_rx_pkt_len. HW still receives packets
> - * larger than max_rx_pkt_len, but they will be truncated, which we
> + * requested max packet size (mtu + eth overhead), then reduce the
> + * posted buffer size to max packet size. HW still receives packets
> + * larger than max packet size, but they will be truncated, which we
> * drop in the rx handler. Not ideal, but better than returning
> * large packets when the user is not expecting them.
> */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) -
> RTE_PKTMBUF_HEADROOM;
> - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> - rq_buf_len = max_rx_pkt_len;
> + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> + rq_buf_len = max_rx_pktlen;
> for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
> mb = rte_mbuf_raw_alloc(rq->mp);
> if (mb == NULL) {
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> unsigned int mbuf_size, mbufs_per_pkt;
> unsigned int nb_sop_desc, nb_data_desc;
> uint16_t min_sop, max_sop, min_data, max_data;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
>
> /*
> * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
>
> mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM);
> - /* max_rx_pkt_len includes the ethernet header and CRC. */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + /* max_rx_pktlen includes the ethernet header and CRC. */
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>
> if (enic->rte_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> dev_info(enic, "Rq %u Scatter rx mode enabled\n",
> queue_idx);
> /* ceil((max pkt len)/mbuf_size) */
> - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) /
> mbuf_size;
> + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) /
> mbuf_size;
> } else {
> dev_info(enic, "Scatter rx mode disabled\n");
> mbufs_per_pkt = 1;
> - if (max_rx_pkt_len > mbuf_size) {
> + if (max_rx_pktlen > mbuf_size) {
> dev_warning(enic, "The maximum Rx packet size (%u)
> is"
> " larger than the mbuf size (%u), and"
> " scatter is disabled. Larger packets will"
> " be truncated.\n",
> - max_rx_pkt_len, mbuf_size);
> + max_rx_pktlen, mbuf_size);
> }
> }
>
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> rq_sop->data_queue_enable = 1;
> rq_data->in_use = 1;
> /*
> - * HW does not directly support rxmode.max_rx_pkt_len.
> HW always
> + * HW does not directly support MTU. HW always
> * receives packet sizes up to the "max" MTU.
> * If not using scatter, we can achieve the effect of dropping
> * larger packets by reducing the size of posted buffers.
> * See enic_alloc_rx_queue_mbufs().
> */
> - if (max_rx_pkt_len <
> - enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> - dev_warning(enic, "rxmode.max_rx_pkt_len is
> ignored"
> - " when scatter rx mode is in use.\n");
> + if (enic->rte_dev->data->mtu < enic->max_mtu) {
> + dev_warning(enic,
> + "mtu is ignored when scatter rx mode is in use.\n");
> }
> } else {
> dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> if (mbufs_per_pkt > 1) {
> dev_info(enic, "For max packet size %u and mbuf size %u
> valid"
> " rx descriptor range is %u to %u\n",
> - max_rx_pkt_len, mbuf_size, min_sop + min_data,
> + max_rx_pktlen, mbuf_size, min_sop + min_data,
> max_sop + max_data);
> }
> dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t
> new_mtu)
> "MTU (%u) is greater than value configured in NIC
> (%u)\n",
> new_mtu, config_mtu);
>
> - /* Update the MTU and maximum packet length */
> - eth_dev->data->mtu = new_mtu;
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - enic_mtu_to_max_rx_pktlen(new_mtu);
> -
> /*
> * If the device has not started (enic_enable), nothing to do.
> * Later, enic_enable() will set up RQs reflecting the new maximum
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c
> b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
> FM10K_SRRCTL_LOOPBACK_SUPPRESS);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> + if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> 2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
> rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
> uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 946465779f2e..c737ef8d06d8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev
> *dev)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_RSS_HASH;
>
> /* mtu size is 256~9600 */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> HINIC_MIN_FRAME_SIZE ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - HINIC_MAX_JUMBO_FRAME_SIZE) {
> + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + HINIC_MIN_FRAME_SIZE ||
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + HINIC_MAX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR,
> - "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
> + "Packet length out of range, get packet length:%d, "
> "expect between %d and %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
> HINIC_MIN_FRAME_SIZE,
> HINIC_MAX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
>
> - nic_dev->mtu_size =
> - HINIC_PKTLEN_TO_MTU(dev->data-
> >dev_conf.rxmode.max_rx_pkt_len);
> + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
>
> /* rss template */
> err = hinic_config_mq_mode(dev, TRUE);
> @@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct
> rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev =
> HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - uint32_t frame_size;
> int ret = 0;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d,
> max_pkt_len: %d",
> @@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu)
> return ret;
> }
>
> - /* update max frame size */
> - frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> - if (frame_size > HINIC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c
> index e51512560e15..8bccdeddb2f7 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev,
> struct rte_eth_conf *conf)
> {
> struct hns3_adapter *hns = dev->data->dev_private;
> struct hns3_hw *hw = &hns->hw;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> - int ret;
> -
> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> - return 0;
> + uint32_t max_rx_pktlen;
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
> hns3_err(hw, "maximum Rx packet length must be greater
> than %u "
> "and no more than %u when jumbo frame enabled.",
> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> @@ -2400,13 +2391,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev,
> struct rte_eth_conf *conf)
> return -EINVAL;
> }
>
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3_dev_mtu_set(dev, mtu);
> - if (ret)
> - return ret;
> - dev->data->mtu = mtu;
> -
> - return 0;
> + return hns3_dev_mtu_set(dev, conf->rxmode.mtu);
> }
>
> static int
> @@ -2622,7 +2607,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true :
> false;
> + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2643,7 +2628,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index e582503f529b..ca839fa55fa0 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> uint16_t nb_rx_q = dev->data->nb_rx_queues;
> uint16_t nb_tx_q = dev->data->nb_tx_queues;
> struct rte_eth_rss_conf rss_conf;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> + uint32_t max_rx_pktlen;
> bool gro_en;
> int ret;
>
> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be
> greater "
> - "than %u and less than %u when jumbo
> frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - ret = -EINVAL;
> - goto cfg_err;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3vf_dev_mtu_set(dev, mtu);
> - if (ret)
> - goto cfg_err;
> - dev->data->mtu = mtu;
> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
> + hns3_err(hw, "maximum Rx packet length must be greater "
> + "than %u and less than %u when jumbo frame
> enabled.",
> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> + (uint16_t)HNS3_MAX_FRAME_LEN);
> + ret = -EINVAL;
> + goto cfg_err;
> }
>
> + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret)
> + goto cfg_err;
> +
> ret = hns3vf_dev_configure_vlan(dev);
> if (ret)
> goto cfg_err;
> @@ -935,7 +926,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index cb9eccf9faae..6b81688a7225 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw
> *hw, uint16_t buf_size,
> uint16_t nb_desc)
> {
> struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
> eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
> uint16_t min_vec_bds;
>
> /*
> * HNS3 hardware network engine set scattered as default. If the
> driver
> * is not work in scattered mode and the pkts greater than buf_size
> - * but smaller than max_rx_pkt_len will be distributed to multiple
> BDs.
> + * but smaller than frame size will be distributed to multiple BDs.
> * Driver cannot handle this situation.
> */
> - if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len >
> buf_size) {
> - hns3_err(hw, "max_rx_pkt_len is not allowed to be set
> greater "
> + if (!hw->data->scattered_rx && frame_size > buf_size) {
> + hns3_err(hw, "frame size is not allowed to be set greater "
> "than rx_buf_len if scattered is off.");
> return -EINVAL;
> }
> @@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
> }
>
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
> dev->data->scattered_rx = true;
> }
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 7b230e2ed17a..1161f301b9ae 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 0cfe13b7b227..086a167ca672 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct
> i40e_rx_queue *rxq)
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> I40E_RXQ_CTX_DBUFF_SHIFT));
> len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS;
> - rxq->max_pkt_len = RTE_MIN(len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD);
>
> /**
> * Check if the jumbo frame and maximum packet length are set
> correctly
> @@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
>
> hw->adapter_stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
>
> @@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 8d65f287f455..aa43796ef1af 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2904,8 +2904,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> }
>
> rxq->max_pkt_len =
> - RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> - rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> + RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> + data->mtu + I40E_ETH_OVERHEAD);
> if (data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 41382c6d669b..13c2329d85a7 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct
> iavf_rx_queue *rxq)
> struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> struct rte_eth_dev_data *dev_data = dev->data;
> uint16_t buf_size, max_pkt_len, len;
> + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
>
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
>
> /* Calculate the maximum packet length allowed */
> len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
> - max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(len, frame_size);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> @@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
>
> adapter->stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
> num_queue_pairs = vf->num_queue_pairs;
> @@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IAVF_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> b/drivers/net/ice/ice_dcf_ethdev.c
> index 69fe6e63d1d3..34b6c9b2a7ed 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct
> ice_rx_queue *rxq)
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> - max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + dev->data->mtu + ICE_ETH_OVERHEAD);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 63f735d1ff72..bdda6fee3f8e 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3426,8 +3426,8 @@ ice_dev_start(struct rte_eth_dev *dev)
> pf->adapter_stopped = false;
>
> /* Set the max frame size to default value*/
> - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> + max_frame_size = pf->dev_data->mtu ?
> + pf->dev_data->mtu + ICE_ETH_OVERHEAD :
> ICE_FRAME_SIZE_MAX;
>
> /* Set the max frame size to HW*/
> @@ -3806,14 +3806,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EBUSY;
> }
>
> - if (frame_size > ICE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return 0;
> }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 3f6e7359844b..a3de4172e2bc 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -262,15 +262,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
>
> /* Set buffer size as the head split is disabled. */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM);
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> - rxq->max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len =
> + RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + frame_size);
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -361,11 +362,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> return -EINVAL;
> }
>
> - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> - RTE_PKTMBUF_HEADROOM);
> -
> /* Check if scattered RX needs to be used. */
> - if (rxq->max_pkt_len > buf_size)
> + if (frame_size > buf_size)
> dev_data->scattered_rx = 1;
>
> rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
>
> #define IGC_INTEL_VENDOR_ID 0x8086
>
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> - RTE_ETHER_CRC_LEN +
> VLAN_TAG_SIZE)
> -
> #define IGC_FC_PAUSE_TIME 0x0680
> #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
> #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> /* switch to jumbo mode if needed */
> if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= IGC_RCTL_LPE;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl &= ~IGC_RCTL_LPE;
> }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> return 0;
> }
> @@ -2486,6 +2473,7 @@ static int
> igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, min
> is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> + frame_size, VLAN_TAG_SIZE +
> RTE_ETHER_MIN_MTU);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext &
> ~IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> @@ -2519,6 +2498,7 @@ static int
> igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error,
> max is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> + frame_size, MAX_RX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext |
> IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
> #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
> #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE *
> IGC_HKEY_MAX_INDEX)
>
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> + RTE_ETHER_CRC_LEN +
> VLAN_TAG_SIZE * 2)
> +
> /*
> * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN
> should be
> * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..d80808a002f5 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> struct igc_rx_queue *rxq;
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint32_t rctl;
> uint32_t rxcsum;
> uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> rctl |= IGC_RCTL_LPE;
> -
> - /*
> - * Set maximum packet length by default, and might be
> updated
> - * together with enabling/disabling dual VLAN.
> - */
> - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> - } else {
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> +
> + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> + /*
> + * Set maximum packet length by default, and might be updated
> + * together with enabling/disabling dual VLAN.
> + */
> + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> + if (max_rx_pktlen > buf_size)
> dev->data->scattered_rx = 1;
> } else {
> /*
> diff --git a/drivers/net/ionic/ionic_ethdev.c
> b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
> ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> - uint32_t max_frame_size;
> int err;
>
> IONIC_PRINT_CALL();
>
> /*
> * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> - * is done by the the API.
> + * is done by the API.
> */
>
> - /*
> - * Max frame size is MTU + Ethernet header + VLAN + QinQ
> - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> - */
> - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len <
> max_frame_size)
> - return -EINVAL;
> -
> err = ionic_lif_change_mtu(lif, mtu);
> if (err)
> return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
> struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
> struct rte_mbuf *rxm, *rxm_seg;
> uint32_t max_frame_size =
> - rxq->qcq.lif->eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint64_t pkt_flags = 0;
> uint32_t pkt_type;
> struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
> int __rte_cold
> ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t
> rx_queue_id)
> {
> - uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
> struct ionic_rx_qcq *rxq;
> int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts,
> {
> struct ionic_rx_qcq *rxq = rx_queue;
> uint32_t frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> struct ionic_rx_service service_cb_arg;
>
> service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev
> *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IPN3KE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
* Re: [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-13 13:48 ` Andrew Rybchenko
@ 2021-07-18 7:49 ` Xu, Rosen
2021-07-19 14:38 ` Ajit Khaparde
2 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-07-18 7:49 UTC (permalink / raw)
To: Yigit, Ferruh, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Wang, Haiyue, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Zhang, Qi Z, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Rasesh Mody,
Devendra Singh Rawat, Igor Russkikh, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: dev
Hi,
>
> Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
> and the application should enable the jumbo frame offload for it.
>
> When jumbo frame offload is not enabled by the application but an MTU
> bigger than RTE_ETHER_MTU is requested, there are two options: either
> fail or enable the jumbo frame offload implicitly.
>
> Many drivers choose to enable the jumbo frame offload implicitly, since
> setting a big MTU value already implies it, and this increases usability.
>
> This patch moves this logic from the drivers to the library, both to
> reduce the duplicated code in the drivers and to make the behaviour more
> visible.
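As a rough illustration of the logic that moves into the library, the sketch
below shows the idea in isolation. The helper name, parameters and error codes
are illustrative; this is not the exact code the patch adds to
'rte_eth_dev_set_mtu()'.

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: enable the jumbo frame offload on behalf of the application
 * when the requested MTU needs it, so each PMD no longer has to. */
static int
set_mtu_sketch(uint16_t port_id, uint16_t mtu, struct rte_eth_conf *dev_conf)
{
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        if (mtu > RTE_ETHER_MTU) {
                /* The device must be able to receive jumbo frames at all. */
                if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
                        return -ENOTSUP;
                /* Enable the offload implicitly instead of failing. */
                dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
        } else {
                dev_conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
        }
        return 0;
}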
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
> drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
> drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
> drivers/net/dpaa/dpaa_ethdev.c | 7 -------
> drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
> drivers/net/e1000/em_ethdev.c | 9 ++-------
> drivers/net/e1000/igb_ethdev.c | 9 ++-------
> drivers/net/enetc/enetc_ethdev.c | 7 -------
> drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
> drivers/net/hns3/hns3_ethdev.c | 8 --------
> drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
> drivers/net/i40e/i40e_ethdev.c | 5 -----
> drivers/net/i40e/i40e_ethdev_vf.c | 5 -----
> drivers/net/iavf/iavf_ethdev.c | 7 -------
> drivers/net/ice/ice_ethdev.c | 5 -----
> drivers/net/igc/igc_ethdev.c | 9 ++-------
> drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
> drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
> drivers/net/liquidio/lio_ethdev.c | 7 -------
> drivers/net/nfp/nfp_net.c | 6 ------
> drivers/net/octeontx/octeontx_ethdev.c | 5 -----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 4 ----
> drivers/net/sfc/sfc_ethdev.c | 9 ---------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 6 ------
> lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
> 28 files changed, 29 insertions(+), 171 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 76aeec077f2b..2960834b4539 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> val = 1;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> val = 0;
> - }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> return 0;
> }
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c index 335505a106d5..4344a012f06e
> 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3018,15 +3018,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev
> *eth_dev, uint16_t new_mtu)
> return -EINVAL;
> }
>
> - if (new_mtu > RTE_ETHER_MTU) {
> + if (new_mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
> /* Is there a change in mtu setting? */
> if (eth_dev->data->mtu == new_mtu)
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 695d0d6fd3e2..349896f6a1bf 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> plt_err("Failed to max Rx frame length, rc=%d", rc);
> goto exit;
> }
> -
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 8cf61f12a8d6..0c9cc2f5bb3f 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* set to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1,
> -1,
> -1, -1, true);
> return err;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c index 56703e3a39e8..a444f749bb96
> 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 6213bcbf3a43..be2858b3adac 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c index 6f418a36aa04..1b41dd04df5a
> 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> return 0;
> diff --git a/drivers/net/e1000/igb_ethdev.c
> b/drivers/net/e1000/igb_ethdev.c index 35b517891d67..f15774eae20d
> 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4401,15 +4401,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index cdb9783b5372..fbcbbb6c0533 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads &=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 *
> ENETC_MAC_MAXFRM_SIZE);
>
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index c737ef8d06d8..c1cde811a252 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1556,13 +1556,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu)
> return ret;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c index 8bccdeddb2f7..868d381a4772
> 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2597,7 +2597,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct hns3_adapter *hns = dev->data->dev_private;
> uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
> struct hns3_hw *hw = &hns->hw;
> - bool is_jumbo_frame;
> int ret;
>
> if (dev->data->dev_started) {
> @@ -2607,7 +2606,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2622,12 +2620,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return ret;
> }
>
> - if (is_jumbo_frame)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index ca839fa55fa0..ff28cad53a03 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -920,12 +920,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rte_spinlock_unlock(&hw->lock);
> return ret;
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 1161f301b9ae..c5058f26dff2 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11772,11 +11772,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 086a167ca672..2015a86ba5ca 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -2884,11 +2884,6 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 13c2329d85a7..ba5be45e8c5e 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1446,13 +1446,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index bdda6fee3f8e..502e410b5641 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3806,11 +3806,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index b26723064b07..dcbc26b8186e 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rctl = IGC_READ_REG(hw, IGC_RCTL);
> -
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 3634c0c8c5f0..e8a33f04bd69 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev
> *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> mtu);
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
* Re: [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-13 13:56 ` Andrew Rybchenko
@ 2021-07-18 7:52 ` Xu, Rosen
1 sibling, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-07-18 7:52 UTC (permalink / raw)
To: Yigit, Ferruh, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena,
Wang, Haiyue, Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Xing, Beilei, Wu, Jingjing, Yang, Qiming, Zhang,
Qi Z, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Nithin Dabilpuram,
Kiran Kumar K, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon,
Andrew Rybchenko
Cc: dev
Hi,
>
> Move the requested MTU value check to the API to prevent duplicated code.
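A minimal sketch of such a common check, assuming the limits the driver
already reports through 'rte_eth_dev_info_get()'; the helper name is
hypothetical and not part of the ethdev API:

#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: validate a requested MTU against the limits the driver reports,
 * so individual PMDs do not have to repeat the same range check. */
static int
mtu_range_check_sketch(uint16_t port_id, uint16_t mtu)
{
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
                return -EINVAL;

        return 0;
}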
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
> drivers/net/bnxt/bnxt_ethdev.c | 2 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
> drivers/net/dpaa/dpaa_ethdev.c | 2 --
> drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
> drivers/net/e1000/em_ethdev.c | 10 ----------
> drivers/net/e1000/igb_ethdev.c | 11 -----------
> drivers/net/enetc/enetc_ethdev.c | 4 ----
> drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
> drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
> drivers/net/i40e/i40e_ethdev_vf.c | 17 ++++-------------
> drivers/net/iavf/iavf_ethdev.c | 10 ++--------
> drivers/net/ice/ice_ethdev.c | 14 +++-----------
> drivers/net/igc/igc_ethdev.c | 5 -----
> drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
> drivers/net/liquidio/lio_ethdev.c | 10 ----------
> drivers/net/nfp/nfp_net.c | 4 ----
> drivers/net/octeontx/octeontx_ethdev.c | 4 ----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 12 ------------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
> lib/ethdev/rte_ethdev.c | 9 +++++++++
> 23 files changed, 29 insertions(+), 169 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 2960834b4539..c36cd7b1d2f0 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct
> rte_eth_dev *dev)
>
> static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) {
> - struct rte_eth_dev_info dev_info;
> struct axgbe_port *pdata = dev->data->dev_private;
> - uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> - unsigned int val = 0;
> - axgbe_dev_info_get(dev, &dev_info);
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> - return -EINVAL;
> + unsigned int val;
> +
> /* mtu setting is forbidden if port is start */
> if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before
> configuration",
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU)
> - val = 1;
> - else
> - val = 0;
> + val = mtu > RTE_ETHER_MTU ? 1 : 0;
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> +
> return 0;
> }
>
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c index 4344a012f06e..1e7da8ba61a6
> 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -2991,7 +2991,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> - uint32_t rc = 0;
> + uint32_t rc;
> uint32_t i;
>
> rc = is_bnxt_in_error(bp);
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 0c9cc2f5bb3f..70b879fed100 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu) {
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> - struct rte_eth_dev_info dev_info;
> - int err;
> uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
>
> - err = cxgbe_dev_info_get(eth_dev, &dev_info);
> - if (err != 0)
> - return err;
> -
> - /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> - err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> + return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> - return err;
> }
>
> /*
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c index a444f749bb96..60dd4f67fc26
> 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
>
> PMD_INIT_FUNC_TRACE();
>
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA_MAX_RX_PKT_LEN)
> - return -EINVAL;
> /*
> * Refuse mtu that requires the support of scattered packets
> * when this feature has not been enabled before.
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index be2858b3adac..6b44b0557e6a 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EINVAL;
> }
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> - return -EINVAL;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c index 1b41dd04df5a..6ebef55588bc
> 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct
> rte_eth_dev *dev, static int eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu) {
> - struct rte_eth_dev_info dev_info;
> struct e1000_hw *hw;
> uint32_t frame_size;
> uint32_t rctl;
> - int ret;
> -
> - ret = eth_em_infos_get(dev, &dev_info);
> - if (ret != 0)
> - return ret;
>
> frame_size = mtu + E1000_ETH_OVERHEAD;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/e1000/igb_ethdev.c
> b/drivers/net/e1000/igb_ethdev.c index f15774eae20d..fb69210ba9f4
> 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4368,9 +4368,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu) {
> uint32_t rctl;
> struct e1000_hw *hw;
> - struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
> - int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> @@ -4379,15 +4377,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (hw->mac.type == e1000_82571)
> return -ENOTSUP;
> #endif
> - ret = eth_igb_infos_get(dev, &dev_info);
> - if (ret != 0)
> - return ret;
> -
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/enetc/enetc_ethdev.c
> b/drivers/net/enetc/enetc_ethdev.c
> index fbcbbb6c0533..a7372c1787c7 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> struct enetc_hw *enetc_hw = &hw->hw;
> uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size >
> ENETC_MAC_MAXFRM_SIZE)
> - return -EINVAL;
> -
> /*
> * Refuse mtu that requires the support of scattered packets
> * when this feature has not been enabled before.
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index c1cde811a252..ce0b52c718ab 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1539,17 +1539,11 @@ static void hinic_deinit_mac_addr(struct
> rte_eth_dev *eth_dev) static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu) {
> struct hinic_nic_dev *nic_dev =
> HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - int ret = 0;
> + int ret;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d,
> max_pkt_len: %d",
> dev->data->port_id, mtu,
> HINIC_MTU_TO_PKTLEN(mtu));
>
> - if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
> - PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d
> and %d",
> - mtu, HINIC_MIN_MTU_SIZE,
> HINIC_MAX_MTU_SIZE);
> - return -EINVAL;
> - }
> -
> ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
> if (ret) {
> PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret); diff
> --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index c5058f26dff2..dad151eac5f1 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11754,25 +11754,16 @@ static int i40e_set_default_mac_addr(struct
> rte_eth_dev *dev, }
>
> static int
> -i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct rte_eth_dev_data *dev_data = pf->dev_data;
> - uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
> - int ret = 0;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> I40E_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> - if (dev_data->dev_started) {
> + if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before
> configuration",
> - dev_data->port_id);
> + dev->data->port_id);
> return -EBUSY;
> }
>
> - return ret;
> + return 0;
> }
>
> /* Restore ethertype filter */
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index 2015a86ba5ca..f7f9d44ef181 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -2866,25 +2866,16 @@ i40evf_dev_rss_hash_conf_get(struct
> rte_eth_dev *dev, }
>
> static int
> -i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data-
> >dev_private);
> - struct rte_eth_dev_data *dev_data = vf->dev_data;
> - uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
> - int ret = 0;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> I40E_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> - if (dev_data->dev_started) {
> + if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before
> configuration",
> - dev_data->port_id);
> + dev->data->port_id);
> return -EBUSY;
> }
>
> - return ret;
> + return 0;
> }
>
> static int
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index ba5be45e8c5e..049671ef3da9 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1432,21 +1432,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev
> *dev, }
>
> static int
> -iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
> - int ret = 0;
> -
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> IAVF_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port must be stopped before
> configuration");
> return -EBUSY;
> }
>
> - return ret;
> + return 0;
> }
>
> static int
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 502e410b5641..c1a96d3de183 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3788,21 +3788,13 @@ ice_dev_set_link_down(struct rte_eth_dev
> *dev) }
>
> static int
> -ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct rte_eth_dev_data *dev_data = pf->dev_data;
> - uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> ICE_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> - if (dev_data->dev_started) {
> + if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR,
> "port %d must be stopped before configuration",
> - dev_data->port_id);
> + dev->data->port_id);
> return -EBUSY;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index dcbc26b8186e..e279ae1fff1d 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
> frame_size += VLAN_TAG_SIZE;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > MAX_RX_JUMBO_FRAME_SIZE)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index e8a33f04bd69..377b96c0236a 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev
> *ethdev, uint16_t mtu)
> int ret = 0;
> struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
> struct rte_eth_dev_data *dev_data = ethdev->data;
> - uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
> - return -EINVAL;
>
> /* mtu setting is forbidden if port is start */
> /* make sure NIC port is stopped */
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
* Re: [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-13 14:07 ` Andrew Rybchenko
@ 2021-07-18 7:53 ` Xu, Rosen
1 sibling, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-07-18 7:53 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Pavel Belous,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Daley, John, Hyong Youb Kim,
Gaetan Rivet, Zhang, Qi Z, Wang, Xiao W, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Xing, Beilei,
Wu, Jingjing, Yang, Qiming, Andrew Boyer, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Harman Kalra, Nalla Pradeep,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Devendra Singh Rawat, Andrew Rybchenko, Maciej Czekaj, Jiawen Wu,
Jian Wang, Maxime Coquelin, Xia, Chenbo, Yong Wang, Ananyev,
Konstantin, Nicolau, Radu, Akhil Goyal, Hunt, David, Mcnamara,
John, Thomas Monjalon
Cc: dev
Hi,
>
> Removing the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announcing this capability, the application can
> deduce it by checking the reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of the application explicitly setting this flag to enable
> jumbo frames, the driver can deduce it by comparing the requested 'mtu'
> to 'RTE_ETHER_MTU'.
>
> Removing this additional configuration step simplifies the API.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
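To illustrate the flow described above, here is a minimal application-side sketch of configuring jumbo frames once the flag is gone. It assumes the 'rxmode.mtu' field introduced by this series; 'configure_jumbo' and 'want_mtu' are hypothetical names and error handling is trimmed:

    #include <rte_ethdev.h>

    /* Sketch: capability comes from dev_info, the request is just an MTU. */
    static int
    configure_jumbo(uint16_t port_id, uint16_t want_mtu)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = { 0 };

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return -1;

        /* Jumbo support is deduced from the reported maximum MTU. */
        if (want_mtu > dev_info.max_mtu)
            return -1;

        conf.rxmode.mtu = want_mtu;  /* field added by this series */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }

Queue setup and start are omitted; the point is only that no offload flag is involved any more.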
> ---
> app/test-eventdev/test_pipeline_common.c | 2 -
> app/test-pmd/cmdline.c | 2 +-
> app/test-pmd/config.c | 24 +---------
> app/test-pmd/testpmd.c | 46 +------------------
> app/test-pmd/testpmd.h | 2 +-
> doc/guides/howto/debug_troubleshoot.rst | 2 -
> doc/guides/nics/bnxt.rst | 1 -
> doc/guides/nics/features.rst | 3 +-
> drivers/net/atlantic/atl_ethdev.c | 1 -
> drivers/net/axgbe/axgbe_ethdev.c | 1 -
> drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
> drivers/net/bnxt/bnxt.h | 1 -
> drivers/net/bnxt/bnxt_ethdev.c | 10 +---
> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
> drivers/net/cnxk/cnxk_ethdev.h | 5 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
> drivers/net/cxgbe/cxgbe.h | 1 -
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
> drivers/net/cxgbe/sge.c | 5 +-
> drivers/net/dpaa/dpaa_ethdev.c | 2 -
> drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
> drivers/net/e1000/e1000_ethdev.h | 4 +-
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/em_rxtx.c | 19 +++-----
> drivers/net/e1000/igb_rxtx.c | 3 +-
> drivers/net/ena/ena_ethdev.c | 2 -
> drivers/net/enetc/enetc_ethdev.c | 3 +-
> drivers/net/enic/enic_res.c | 1 -
> drivers/net/failsafe/failsafe_ops.c | 2 -
> drivers/net/fm10k/fm10k_ethdev.c | 1 -
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
> drivers/net/hns3/hns3_ethdev.c | 1 -
> drivers/net/hns3/hns3_ethdev_vf.c | 1 -
> drivers/net/i40e/i40e_ethdev.c | 1 -
> drivers/net/i40e/i40e_ethdev_vf.c | 3 +-
> drivers/net/i40e/i40e_rxtx.c | 2 +-
> drivers/net/iavf/iavf_ethdev.c | 3 +-
> drivers/net/ice/ice_dcf_ethdev.c | 3 +-
> drivers/net/ice/ice_dcf_vf_representor.c | 1 -
> drivers/net/ice/ice_ethdev.c | 1 -
> drivers/net/ice/ice_rxtx.c | 3 +-
> drivers/net/igc/igc_ethdev.h | 1 -
> drivers/net/igc/igc_txrx.c | 2 +-
> drivers/net/ionic/ionic_ethdev.c | 1 -
> drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
> drivers/net/ixgbe/ixgbe_pf.c | 9 +---
> drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
> drivers/net/mlx4/mlx4_rxq.c | 1 -
> drivers/net/mlx5/mlx5_rxq.c | 1 -
> drivers/net/mvneta/mvneta_ethdev.h | 3 +-
> drivers/net/mvpp2/mrvl_ethdev.c | 1 -
> drivers/net/nfp/nfp_net.c | 6 +--
> drivers/net/octeontx/octeontx_ethdev.h | 1 -
> drivers/net/octeontx2/otx2_ethdev.h | 1 -
> drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
> drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
> drivers/net/qede/qede_ethdev.c | 1 -
> drivers/net/sfc/sfc_rx.c | 2 -
> drivers/net/thunderx/nicvf_ethdev.h | 1 -
> drivers/net/txgbe/txgbe_rxtx.c | 1 -
> drivers/net/virtio/virtio_ethdev.c | 1 -
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
> examples/ip_fragmentation/main.c | 3 +-
> examples/ip_reassembly/main.c | 3 +-
> examples/ipsec-secgw/ipsec-secgw.c | 2 -
> examples/ipv4_multicast/main.c | 1 -
> examples/kni/main.c | 5 --
> examples/l3fwd-acl/main.c | 2 -
> examples/l3fwd-graph/main.c | 1 -
> examples/l3fwd-power/main.c | 2 -
> examples/l3fwd/main.c | 1 -
> .../performance-thread/l3fwd-thread/main.c | 2 -
> examples/vhost/main.c | 2 -
> lib/ethdev/rte_ethdev.c | 26 +----------
> lib/ethdev/rte_ethdev.h | 1 -
> 76 files changed, 42 insertions(+), 250 deletions(-)
>
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-
> eventdev/test_pipeline_common.c
> index 5fcea74b4d43..2775e72c580d 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
>
> port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN;
> - if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> - port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> RTE_ETH_FOREACH_DEV(i) {
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 8bdc042f6e8e..c0b6132d64e8 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1921,7 +1921,7 @@ cmd_config_max_pkt_len_parsed(void
> *parsed_result,
> return;
> }
>
> - update_jumbo_frame_offload(port_id, res->value);
> + update_mtu_from_frame_size(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index a87265d7638b..23a48557b676 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1136,39 +1136,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off,
> uint32_t reg_v)
> void
> port_mtu_set(portid_t port_id, uint16_t mtu)
> {
> + struct rte_port *port = &ports[port_id];
> int diag;
> - struct rte_port *rte_port = &ports[port_id];
> - struct rte_eth_dev_info dev_info;
> - int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> return;
>
> - ret = eth_dev_info_get_print_err(port_id, &dev_info);
> - if (ret != 0)
> - return;
> -
> - if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
> - printf("Set MTU failed. MTU:%u is not in valid range, min:%u
> - max:%u\n",
> - mtu, dev_info.min_mtu, dev_info.max_mtu);
> - return;
> - }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> if (diag) {
> printf("Set MTU failed. diag=%d\n", diag);
> return;
> }
>
> - rte_port->dev_conf.rxmode.mtu = mtu;
> -
> - if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
> {
> - if (mtu > RTE_ETHER_MTU) {
> - rte_port->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else
> - rte_port->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> + port->dev_conf.rxmode.mtu = mtu;
> }
>
> /* Generic flow management functions. */
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 2c79cae05664..92feadefab59 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -1473,11 +1473,6 @@ init_config(void)
> rte_exit(EXIT_FAILURE,
> "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid, 0);
> - if (ret != 0)
> - printf("Updating jumbo frame offload failed for
> port %u\n",
> - pid);
> -
> if (!(port->dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE))
> port->dev_conf.txmode.offloads &=
> @@ -3364,24 +3359,18 @@ rxtx_port_config(struct rte_port *port)
> }
>
> /*
> - * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME
> offload,
> - * MTU is also aligned.
> + * Helper function to set MTU from frame size
> *
> * port->dev_info should be set before calling this function.
> *
> - * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> - * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> - *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> +update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> - uint64_t rx_offloads;
> uint16_t mtu, new_mtu;
> - bool on;
>
> eth_overhead = get_eth_overhead(&port->dev_info);
>
> @@ -3390,39 +3379,8 @@ update_jumbo_frame_offload(portid_t portid,
> uint32_t max_rx_pktlen)
> return -1;
> }
>
> - if (max_rx_pktlen == 0)
> - max_rx_pktlen = mtu + eth_overhead;
> -
> - rx_offloads = port->dev_conf.rxmode.offloads;
> new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (new_mtu <= RTE_ETHER_MTU) {
> - rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - on = false;
> - } else {
> - if ((port->dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - printf("Frame size (%u) is not supported by
> port %u\n",
> - max_rx_pktlen, portid);
> - return -1;
> - }
> - rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - on = true;
> - }
> -
> - if (rx_offloads != port->dev_conf.rxmode.offloads) {
> - uint16_t qid;
> -
> - port->dev_conf.rxmode.offloads = rx_offloads;
> -
> - /* Apply JUMBO_FRAME offload configuration to Rx queue(s)
> */
> - for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
> - if (on)
> - port->rx_conf[qid].offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - port->rx_conf[qid].offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> - }
> -
> if (mtu == new_mtu)
> return 0;
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 42143f85924f..b94bf668dc4d 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id,
> __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
> +int update_mtu_from_frame_size(portid_t portid, uint32_t
> max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/doc/guides/howto/debug_troubleshoot.rst
> b/doc/guides/howto/debug_troubleshoot.rst
> index 457ac441429a..df69fa8bcc24 100644
> --- a/doc/guides/howto/debug_troubleshoot.rst
> +++ b/doc/guides/howto/debug_troubleshoot.rst
> @@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
> * Identify if port Speed and Duplex is matching to desired values with
> ``rte_eth_link_get``.
>
> - * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with
> ``rte_eth_dev_info_get``.
> -
> * Check promiscuous mode if the drops do not occur for unique MAC
> address
> with ``rte_eth_promiscuous_get``.
>
> diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> index feb0c6a7657a..e6f1628402fc 100644
> --- a/doc/guides/nics/bnxt.rst
> +++ b/doc/guides/nics/bnxt.rst
> @@ -886,7 +886,6 @@ processing. This improved performance is derived
> from a number of optimizations:
>
> DEV_RX_OFFLOAD_VLAN_STRIP
> DEV_RX_OFFLOAD_KEEP_CRC
> - DEV_RX_OFFLOAD_JUMBO_FRAME
> DEV_RX_OFFLOAD_IPV4_CKSUM
> DEV_RX_OFFLOAD_UDP_CKSUM
> DEV_RX_OFFLOAD_TCP_CKSUM
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index c98242f3b72f..a077c30644d2 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -165,8 +165,7 @@ Jumbo frame
>
> Supports Rx jumbo frames.
>
> -* **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.mtu``.
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
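As the updated feature description notes, 'rte_eth_dev_set_mtu()' remains the runtime path for the same setting. A small hedged sketch, where 'set_jumbo_mtu' is a hypothetical helper and the 9000-byte value is an arbitrary, device-dependent example:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Runtime counterpart to configure-time rxmode.mtu (sketch). */
    static void
    set_jumbo_mtu(uint16_t port_id)
    {
        uint16_t jumbo_mtu = 9000;  /* arbitrary example, device dependent */

        if (rte_eth_dev_set_mtu(port_id, jumbo_mtu) != 0)
            printf("Failed to set MTU %u on port %u\n", jumbo_mtu, port_id);
    }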
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 3f654c071566..5a198f53fce7 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
> | DEV_RX_OFFLOAD_IPV4_CKSUM \
> | DEV_RX_OFFLOAD_UDP_CKSUM \
> | DEV_RX_OFFLOAD_TCP_CKSUM \
> - | DEV_RX_OFFLOAD_JUMBO_FRAME \
> | DEV_RX_OFFLOAD_MACSEC_STRIP \
> | DEV_RX_OFFLOAD_VLAN_FILTER)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index c36cd7b1d2f0..0bc9e5eeeb10 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_KEEP_CRC;
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 009a94e9a8fa..50ff04bb2241 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
> dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
> dev_info->speed_capa = ETH_LINK_SPEED_10G |
> ETH_LINK_SPEED_20G;
> - dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
> dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
> diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
> index e93a7eb933b4..9ad7821b4736 100644
> --- a/drivers/net/bnxt/bnxt.h
> +++ b/drivers/net/bnxt/bnxt.h
> @@ -591,7 +591,6 @@ struct bnxt_rep_info {
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
> | \
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
> | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_KEEP_CRC | \
> DEV_RX_OFFLOAD_VLAN_EXTEND | \
> DEV_RX_OFFLOAD_TCP_LRO | \
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c
> index 1e7da8ba61a6..c4fd27bd92de 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -728,15 +728,10 @@ static int bnxt_start_nic(struct bnxt *bp)
> unsigned int i, j;
> int rc;
>
> - if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
> /* THOR does not support ring groups.
> * But we will use the array to save RSS context IDs.
> @@ -1221,7 +1216,6 @@ bnxt_receive_function(struct rte_eth_dev
> *eth_dev)
> if (eth_dev->data->dev_conf.rxmode.offloads &
> ~(DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b2a1833e3f91..844ac1581a61 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1731,14 +1731,6 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.mtu =
> bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> - if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - slave_eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - slave_eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
> nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev.h
> b/drivers/net/cnxk/cnxk_ethdev.h
> index 4eead0390532..aa147eee45c9 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.h
> +++ b/drivers/net/cnxk/cnxk_ethdev.h
> @@ -75,9 +75,8 @@
> #define CNXK_NIX_RX_OFFLOAD_CAPA \
> (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM
> | \
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> - DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |
> \
> - DEV_RX_OFFLOAD_VLAN_STRIP)
> + DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> DEV_RX_OFFLOAD_RSS_HASH | \
> + DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
>
> #define RSS_IPV4_ENABLE \
> (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_UDP | \
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 349896f6a1bf..d0924df76152 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev
> *eth_dev, uint16_t queue_id,
> {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
> {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
> {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> {DEV_RX_OFFLOAD_SECURITY, " Security,"},
> diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
> index 7c89a028bf16..37625c5bfb69 100644
> --- a/drivers/net/cxgbe/cxgbe.h
> +++ b/drivers/net/cxgbe/cxgbe.h
> @@ -51,7 +51,6 @@
> DEV_RX_OFFLOAD_IPV4_CKSUM | \
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_RSS_HASH)
>
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 70b879fed100..1374f32b6826 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> if ((&rxq->fl) != NULL)
> rxq->fl.size = temp_nb_desc;
>
> - /* Set to jumbo mode if necessary */
> - if (eth_dev->data->mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
> &rxq->fl, NULL,
> is_pf4(adapter) ?
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index 830f5192474d..21b8fe61c9a7 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct
> adapter *adap, struct sge_fl *q,
> struct rte_mbuf *buf_bulk[n];
> int ret, i;
> struct rte_pktmbuf_pool_private *mbp_priv;
> - u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> /* Use jumbo mtu buffers if mbuf data room size can fit jumbo data.
> */
> mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
> - if (jumbo_en &&
> - ((mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM) >= 9000))
> + if ((mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM) >= 9000)
> buf_size_idx = RX_LARGE_MTU_BUF;
>
> ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk,
> n);
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 60dd4f67fc26..9cc808b767ea 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -54,7 +54,6 @@
>
> /* Supported Rx offloads */
> static uint64_t dev_rx_offloads_sup =
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER;
>
> /* Rx offloads which cannot be disabled */
> @@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev
> *dev,
> uint64_t flags;
> const char *output;
> } rx_offload_map[] = {
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo
> frame,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
> {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 6b44b0557e6a..53508972a4c2 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_TIMESTAMP;
>
> /* Rx offloads which cannot be disabled */
> @@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev
> *dev,
> {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer
> UDP csum,"},
> {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
> {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo
> frame,"},
> {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
> diff --git a/drivers/net/e1000/e1000_ethdev.h
> b/drivers/net/e1000/e1000_ethdev.h
> index 3b4d9c3ee6f4..1ae78fe71f02 100644
> --- a/drivers/net/e1000/e1000_ethdev.h
> +++ b/drivers/net/e1000/e1000_ethdev.h
> @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
> void em_dev_clear_queues(struct rte_eth_dev *dev);
> void em_dev_free_queues(struct rte_eth_dev *dev);
>
> -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
> -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
> +uint64_t em_get_rx_port_offloads_capa(void);
> +uint64_t em_get_rx_queue_offloads_capa(void);
>
> int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t
> rx_queue_id,
> uint16_t nb_rx_desc, unsigned int socket_id,
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index 6ebef55588bc..8a752eef52cf 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> dev_info->max_rx_queues = 1;
> dev_info->max_tx_queues = 1;
>
> - dev_info->rx_queue_offload_capa =
> em_get_rx_queue_offloads_capa(dev);
> - dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
> + dev_info->rx_queue_offload_capa =
> em_get_rx_queue_offloads_capa();
> + dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
> dev_info->rx_queue_offload_capa;
> dev_info->tx_queue_offload_capa =
> em_get_tx_queue_offloads_capa(dev);
> dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
> diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
> index dfd8f2fd0074..e061f80a906a 100644
> --- a/drivers/net/e1000/em_rxtx.c
> +++ b/drivers/net/e1000/em_rxtx.c
> @@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
> }
>
> uint64_t
> -em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> +em_get_rx_port_offloads_capa(void)
> {
> uint64_t rx_offload_capa;
> - uint32_t max_rx_pktlen;
> -
> - max_rx_pktlen = em_get_max_pktlen(dev);
>
> rx_offload_capa =
> DEV_RX_OFFLOAD_VLAN_STRIP |
> @@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct
> rte_eth_dev *dev)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER;
> - if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
> - rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return rx_offload_capa;
> }
>
> uint64_t
> -em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
> +em_get_rx_queue_offloads_capa(void)
> {
> uint64_t rx_queue_offload_capa;
>
> @@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct
> rte_eth_dev *dev)
> * capability be same to per port queue offloading capability
> * for better convenience.
> */
> - rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
> + rx_queue_offload_capa = em_get_rx_port_offloads_capa();
>
> return rx_queue_offload_capa;
> }
> @@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> * to avoid splitting packets that don't fit into
> * one buffer.
> */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME
> ||
> + if (dev->data->mtu > RTE_ETHER_MTU ||
> rctl_bsize < RTE_ETHER_MAX_LEN) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter
> mode");
> @@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> if ((hw->mac.type == e1000_ich9lan ||
> hw->mac.type == e1000_pch2lan ||
> hw->mac.type == e1000_ich10lan) &&
> - rxmode->offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + dev->data->mtu > RTE_ETHER_MTU) {
> u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
> E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
> E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
> }
>
> if (hw->mac.type == e1000_pch2lan) {
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
> else
> e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
> @@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> else
> rctl &= ~E1000_RCTL_LPE;
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index de12997b4bdd..9998d4ea4179 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev
> *dev)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_RSS_HASH;
> @@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> * Configure support of jumbo frames, if any.
> */
> max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev->data->mtu > RTE_ETHER_MTU) {
> rctl |= E1000_RCTL_LPE;
>
> /*
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index e9b718786a39..4322dce260f5 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -2042,8 +2042,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM;
>
> - rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* Inform framework about available features */
> dev_info->rx_offload_capa = rx_feat;
> dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
> diff --git a/drivers/net/enetc/enetc_ethdev.c
> b/drivers/net/enetc/enetc_ethdev.c
> index a7372c1787c7..6457677d300a 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev
> __rte_unused,
> (DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME);
> + DEV_RX_OFFLOAD_KEEP_CRC);
>
> return 0;
> }
> diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
> index a8f5332a407f..6a4758ea8e8a 100644
> --- a/drivers/net/enic/enic_res.c
> +++ b/drivers/net/enic/enic_res.c
> @@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
> DEV_TX_OFFLOAD_TCP_TSO;
> enic->rx_offload_capa =
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> diff --git a/drivers/net/failsafe/failsafe_ops.c
> b/drivers/net/failsafe/failsafe_ops.c
> index 5ff33e03e034..47c5efe9ea77 100644
> --- a/drivers/net/failsafe/failsafe_ops.c
> +++ b/drivers/net/failsafe/failsafe_ops.c
> @@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_TIMESTAMP |
> DEV_RX_OFFLOAD_SECURITY |
> @@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_TIMESTAMP |
> DEV_RX_OFFLOAD_SECURITY |
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c
> b/drivers/net/fm10k/fm10k_ethdev.c
> index 5e4b361ca6c0..093021246286 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -1779,7 +1779,6 @@ static uint64_t
> fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_RSS_HASH);
> }
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index ce0b52c718ab..b1563350ec0e 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -747,7 +747,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_TCP_LRO |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c
> index 868d381a4772..0c58c55844b0 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2717,7 +2717,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev,
> struct rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH |
> DEV_RX_OFFLOAD_TCP_LRO);
> info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index ff28cad53a03..c488e03f23a4 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -956,7 +956,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev,
> struct rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH |
> DEV_RX_OFFLOAD_TCP_LRO);
> info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index dad151eac5f1..ad7802f63031 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -3758,7 +3758,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> dev_info->tx_queue_offload_capa =
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c
> b/drivers/net/i40e/i40e_ethdev_vf.c
> index f7f9d44ef181..1c314e2ffdd0 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1932,7 +1932,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct
> i40e_rx_queue *rxq)
> /**
> * Check if the jumbo frame and maximum packet length are set
> correctly
> */
> - if (dev_data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev_data->mtu > RTE_ETHER_MTU) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> be "
> @@ -2378,7 +2378,6 @@ i40evf_dev_info_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER;
>
> dev_info->tx_queue_offload_capa = 0;
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index aa43796ef1af..a421acf8f6b6 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2906,7 +2906,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> rxq->max_pkt_len =
> RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> data->mtu + I40E_ETH_OVERHEAD);
> - if (data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (data->mtu > RTE_ETHER_MTU) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> "
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 049671ef3da9..f156add80e0d 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -574,7 +574,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct
> iavf_rx_queue *rxq)
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev->data->mtu > RTE_ETHER_MTU) {
> if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
> max_pkt_len > IAVF_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> be "
> @@ -939,7 +939,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> b/drivers/net/ice/ice_dcf_ethdev.c
> index 34b6c9b2a7ed..72fdcc29c28a 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -65,7 +65,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct
> ice_rx_queue *rxq)
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev_data->mtu > RTE_ETHER_MTU) {
> if (max_pkt_len <= ICE_ETH_MAX_LEN ||
> max_pkt_len > ICE_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> be "
> @@ -664,7 +664,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_RSS_HASH;
> dev_info->tx_offload_capa =
> diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e90a..07843c6dbc92 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -141,7 +141,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev
> *dev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> DEV_RX_OFFLOAD_RSS_HASH;
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index c1a96d3de183..a17c11e95e0b 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3491,7 +3491,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
>
> dev_info->rx_offload_capa =
> DEV_RX_OFFLOAD_VLAN_STRIP |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_FILTER;
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index a3de4172e2bc..a7b0915dabfc 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -259,7 +259,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> struct ice_rlan_ctx rx_ctx;
> enum ice_status err;
> uint16_t buf_size;
> - struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
> @@ -273,7 +272,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> frame_size);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev_data->mtu > RTE_ETHER_MTU) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> "
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index b3473b5b1646..5e6c2ff30157 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -73,7 +73,6 @@ extern "C" {
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> DEV_RX_OFFLOAD_SCTP_CKSUM | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_KEEP_CRC | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_RSS_HASH)
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index d80808a002f5..30940857eac0 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> else
> rctl &= ~IGC_RCTL_LPE;
> diff --git a/drivers/net/ionic/ionic_ethdev.c
> b/drivers/net/ionic/ionic_ethdev.c
> index 97447a10e46a..795980cb1ca5 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_SCATTER |
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 377b96c0236a..4e5d234e8c7d 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev
> *ethdev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + DEV_RX_OFFLOAD_VLAN_FILTER;
>
> dev_info->tx_queue_offload_capa =
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> dev_info->tx_offload_capa =
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
` (4 preceding siblings ...)
2021-07-18 7:45 ` Xu, Rosen
@ 2021-07-19 3:35 ` Huisong Li
2021-07-21 15:29 ` Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
6 siblings, 1 reply; 112+ messages in thread
From: Huisong Li @ 2021-07-19 3:35 UTC (permalink / raw)
To: Yigit, Ferruh; +Cc: dev
Hi, Ferruh
On 2021/7/10 1:29, Ferruh Yigit wrote:
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also two different related method is confusing for
> the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, but this may be different from device to
> device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> As additional clarification, MTU is used to configure the device for
> physical Rx/Tx limitation. Other related issue is size of the buffer to
> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
> And compares MTU against Rx buffer size to decide enabling scattered Rx
> or not, if PMD supports it. If scattered Rx is not supported by device,
> MTU bigger than Rx buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> app/test-eventdev/test_perf_common.c | 1 -
> app/test-eventdev/test_pipeline_common.c | 5 +-
> app/test-pmd/cmdline.c | 45 ++++-----
> app/test-pmd/config.c | 18 ++--
> app/test-pmd/parameters.c | 4 +-
> app/test-pmd/testpmd.c | 94 ++++++++++--------
> app/test-pmd/testpmd.h | 2 +-
> app/test/test_link_bonding.c | 1 -
> app/test/test_link_bonding_mode4.c | 1 -
> app/test/test_link_bonding_rssconf.c | 2 -
> app/test/test_pmd_perf.c | 1 -
> doc/guides/nics/dpaa.rst | 2 +-
> doc/guides/nics/dpaa2.rst | 2 +-
> doc/guides/nics/features.rst | 2 +-
> doc/guides/nics/fm10k.rst | 2 +-
> doc/guides/nics/mlx5.rst | 4 +-
> doc/guides/nics/octeontx.rst | 2 +-
> doc/guides/nics/thunderx.rst | 2 +-
> doc/guides/rel_notes/deprecation.rst | 25 -----
> doc/guides/sample_app_ug/flow_classify.rst | 8 +-
> doc/guides/sample_app_ug/ioat.rst | 1 -
> doc/guides/sample_app_ug/ip_reassembly.rst | 2 +-
> doc/guides/sample_app_ug/skeleton.rst | 8 +-
> drivers/net/atlantic/atl_ethdev.c | 3 -
> drivers/net/avp/avp_ethdev.c | 17 ++--
> drivers/net/axgbe/axgbe_ethdev.c | 7 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 21 ++--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
> drivers/net/cnxk/cnxk_ethdev.c | 9 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 12 +--
> drivers/net/cxgbe/cxgbe_main.c | 3 +-
> drivers/net/cxgbe/sge.c | 3 +-
> drivers/net/dpaa/dpaa_ethdev.c | 52 ++++------
> drivers/net/dpaa2/dpaa2_ethdev.c | 31 +++---
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 18 +---
> drivers/net/e1000/igb_rxtx.c | 16 ++-
> drivers/net/ena/ena_ethdev.c | 27 ++---
> drivers/net/enetc/enetc_ethdev.c | 24 ++---
> drivers/net/enic/enic_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 42 ++++----
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++--
> drivers/net/hns3/hns3_ethdev.c | 28 ++----
> drivers/net/hns3/hns3_ethdev_vf.c | 38 +++----
> drivers/net/hns3/hns3_rxtx.c | 10 +-
> drivers/net/i40e/i40e_ethdev.c | 10 +-
> drivers/net/i40e/i40e_ethdev_vf.c | 14 +--
> drivers/net/i40e/i40e_rxtx.c | 4 +-
> drivers/net/iavf/iavf_ethdev.c | 9 +-
> drivers/net/ice/ice_dcf_ethdev.c | 5 +-
> drivers/net/ice/ice_ethdev.c | 14 +--
> drivers/net/ice/ice_rxtx.c | 12 +--
> drivers/net/igc/igc_ethdev.c | 51 +++-------
> drivers/net/igc/igc_ethdev.h | 7 ++
> drivers/net/igc/igc_txrx.c | 22 ++---
> drivers/net/ionic/ionic_ethdev.c | 12 +--
> drivers/net/ionic/ionic_rxtx.c | 6 +-
> drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 35 +++----
> drivers/net/ixgbe/ixgbe_pf.c | 6 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 15 ++-
> drivers/net/liquidio/lio_ethdev.c | 20 +---
> drivers/net/mlx4/mlx4_rxq.c | 17 ++--
> drivers/net/mlx5/mlx5_rxq.c | 25 ++---
> drivers/net/mvneta/mvneta_ethdev.c | 7 --
> drivers/net/mvneta/mvneta_rxtx.c | 13 ++-
> drivers/net/mvpp2/mrvl_ethdev.c | 34 +++----
> drivers/net/nfp/nfp_net.c | 9 +-
> drivers/net/octeontx/octeontx_ethdev.c | 12 +--
> drivers/net/octeontx2/otx2_ethdev.c | 2 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +--
> drivers/net/pfe/pfe_ethdev.c | 7 +-
> drivers/net/qede/qede_ethdev.c | 16 +--
> drivers/net/qede/qede_rxtx.c | 8 +-
> drivers/net/sfc/sfc_ethdev.c | 4 +-
> drivers/net/sfc/sfc_port.c | 6 +-
> drivers/net/tap/rte_eth_tap.c | 7 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +--
> drivers/net/txgbe/txgbe_ethdev.c | 7 +-
> drivers/net/txgbe/txgbe_ethdev.h | 4 +
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
> drivers/net/txgbe/txgbe_rxtx.c | 19 ++--
> drivers/net/virtio/virtio_ethdev.c | 4 +-
> examples/bbdev_app/main.c | 1 -
> examples/bond/main.c | 1 -
> examples/distributor/main.c | 1 -
> .../pipeline_worker_generic.c | 1 -
> .../eventdev_pipeline/pipeline_worker_tx.c | 1 -
> examples/flow_classify/flow_classify.c | 10 +-
> examples/ioat/ioatfwd.c | 1 -
> examples/ip_fragmentation/main.c | 11 +--
> examples/ip_pipeline/link.c | 2 +-
> examples/ip_reassembly/main.c | 11 ++-
> examples/ipsec-secgw/ipsec-secgw.c | 7 +-
> examples/ipv4_multicast/main.c | 8 +-
> examples/kni/main.c | 6 +-
> examples/l2fwd-cat/l2fwd-cat.c | 8 +-
> examples/l2fwd-crypto/main.c | 1 -
> examples/l2fwd-event/l2fwd_common.c | 1 -
> examples/l3fwd-acl/main.c | 11 +--
> examples/l3fwd-graph/main.c | 4 +-
> examples/l3fwd-power/main.c | 11 ++-
> examples/l3fwd/main.c | 4 +-
> .../performance-thread/l3fwd-thread/main.c | 7 +-
> examples/pipeline/obj.c | 2 +-
> examples/ptpclient/ptpclient.c | 10 +-
> examples/qos_meter/main.c | 1 -
> examples/qos_sched/init.c | 1 -
> examples/rxtx_callbacks/main.c | 10 +-
> examples/skeleton/basicfwd.c | 10 +-
> examples/vhost/main.c | 4 +-
> examples/vm_power_manager/main.c | 11 +--
> lib/ethdev/rte_ethdev.c | 98 +++++++++++--------
> lib/ethdev/rte_ethdev.h | 2 +-
> lib/ethdev/rte_ethdev_trace.h | 2 +-
> 118 files changed, 531 insertions(+), 848 deletions(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 8468018cf35d..8bdc042f6e8e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len")) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> printf("Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> -
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - printf("max-pkt-len can not be less than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port->dev_info);
> - if (ret != 0) {
> - printf("rte_eth_dev_info_get() failed for port %u\n",
> - pid);
> - return;
> - }
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + printf("max-pkt-len can not be less than %d\n",
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
> - } else {
> - printf("Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + printf("rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 04ae0feb5852..a87265d7638b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> int diag;
> struct rte_port *rte_port = &ports[port_id];
> struct rte_eth_dev_info dev_info;
> - uint16_t eth_overhead;
> int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag) {
> printf("Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (mtu > RTE_ETHER_MTU) {
> rte_port->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> } else
> rte_port->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 5e69d2aa8cfe..8e8556d74a4a 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv)
> if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
> n = atoi(optarg);
> if (n >= RTE_ETHER_MIN_LEN)
> - rx_mode.max_rx_pkt_len = (uint32_t) n;
> + rx_mode.mtu = (uint32_t) n -
> + (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> else
> rte_exit(EXIT_FAILURE,
> "Invalid max-pkt-len=%d - should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1cdd3cdd12b6..2c79cae05664 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1;
> /*
> * Ethernet device configuration.
> */
> -struct rte_eth_rxmode rx_mode = {
> - /* Default maximum frame length.
> - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> - * in init_config().
> - */
> - .max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
>
> struct rte_eth_txmode tx_mode = {
> .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1417,6 +1411,20 @@ check_nb_hairpinq(queueid_t hairpinq)
> return 0;
> }
>
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t eth_overhead;
> +
> + if (dev_info->max_mtu != UINT16_MAX &&
> + dev_info->max_rx_pktlen > dev_info->max_mtu)
> + eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
> + else
> + eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return eth_overhead;
> +}
> +
> static void
> init_config(void)
> {
> @@ -1465,7 +1473,7 @@ init_config(void)
> rte_exit(EXIT_FAILURE,
> "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> printf("Updating jumbo frame offload failed for port %u\n",
> pid);
> @@ -1512,14 +1520,19 @@ init_config(void)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
> + uint16_t mtu;
>
> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + data_size = (mtu + eth_overhead) /
> + port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> +
> + if ((data_size + RTE_PKTMBUF_HEADROOM) >
> mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size +
> - RTE_PKTMBUF_HEADROOM;
> - warning = 1;
> + mbuf_data_size[0] = data_size +
> + RTE_PKTMBUF_HEADROOM;
> + warning = 1;
> + }
> }
> }
> }
> @@ -3352,43 +3365,44 @@ rxtx_port_config(struct rte_port *port)
>
> /*
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> - * MTU is also aligned if JUMBO_FRAME offload is not set.
> + * MTU is also aligned.
> *
> * port->dev_info should be set before calling this function.
> *
> + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> + *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> uint64_t rx_offloads;
> - int ret;
> + uint16_t mtu, new_mtu;
> bool on;
>
> - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> - if (port->dev_info.max_mtu != UINT16_MAX &&
> - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> - eth_overhead = port->dev_info.max_rx_pktlen -
> - port->dev_info.max_mtu;
> - else
> - eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + eth_overhead = get_eth_overhead(&port->dev_info);
>
> - rx_offloads = port->dev_conf.rxmode.offloads;
> + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> + printf("Failed to get MTU for port %u\n", portid);
> + return -1;
> + }
> +
> + if (max_rx_pktlen == 0)
> + max_rx_pktlen = mtu + eth_overhead;
>
> - /* Default config value is 0 to use PMD specific overhead */
> - if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
> + rx_offloads = port->dev_conf.rxmode.offloads;
> + new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
> + if (new_mtu <= RTE_ETHER_MTU) {
> rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> on = false;
> } else {
> if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> printf("Frame size (%u) is not supported by port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len,
> - portid);
> + max_rx_pktlen, portid);
> return -1;
> }
> rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3409,18 +3423,16 @@ update_jumbo_frame_offload(portid_t portid)
> }
> }
>
> - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> - * if unset do it here
> - */
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - ret = rte_eth_dev_set_mtu(portid,
> - port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
> - if (ret)
> - printf("Failed to set MTU to %u for port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
> - portid);
> + if (mtu == new_mtu)
> + return 0;
> +
> + if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) {
> + printf("Failed to set MTU to %u for port %u\n", new_mtu, portid);
> + return -1;
> }
>
> + port->dev_conf.rxmode.mtu = new_mtu;
> +
> return 0;
> }
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index d61a055bdd1b..42143f85924f 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> .split_hdr_size = 0,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
> static struct rte_eth_conf rss_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 917482dbe2a5..b8d43aa90098 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index 6470f1c05ac8..ce16e1047df2 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -551,7 +551,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 403c2b03a386..c98242f3b72f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
> Supports Rx jumbo frames.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.max_rx_pkt_len``.
> + ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
> up to 15364 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 83299646ddb1..338734826a7a 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -584,9 +584,9 @@ Driver options
> and each stride receives one packet. MPRQ can improve throughput for
> small-packet traffic.
>
> - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> + When MPRQ is enabled, MTU can be larger than the size of
> user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
> - configure large stride size enough to accommodate max_rx_pkt_len as long as
> + configure large stride size enough to accommodate MTU as long as
> device allows. Note that this can waste system memory compared to enabling Rx
> scatter and multi-segment packet.
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
> up to 32k bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
> up to 9200 bytes can still reach the host interface.
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd723..86da47d8f9c6 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -56,31 +56,6 @@ Deprecation Notices
> In 19.11 PMDs will still update the field even when the offload is not
> enabled.
>
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
> - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> - The new ``mtu`` field will be used to configure the initial device MTU via
> - ``rte_eth_dev_configure()`` API.
> - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> - The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
> - the configured ``mtu`` value,
> - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> - be used to store the user configuration request.
> - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
> - ``mtu`` field will be always valid.
> - When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
> - value will be used.
> - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
> - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> - An application may need to configure device for a specific Rx packet size, like for
> - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
> - can't be bigger than Rx buffer size.
> - To cover these cases an application needs to know the device packet overhead to be
> - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> - the device packet overhead can be calculated as:
> - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
> -
> * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
> will be removed in 21.11.
> Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
> index 01915971ae83..2cc36a688af3 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -325,13 +325,7 @@ Forwarding application is shown below:
> }
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. code-block:: c
> -
> - static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> - };
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
> index 7eb557f91c7a..c5c06261e395 100644
> --- a/doc/guides/sample_app_ug/ioat.rst
> +++ b/doc/guides/sample_app_ug/ioat.rst
> @@ -162,7 +162,6 @@ multiple CBDMA channels per port:
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
> index e72c8492e972..2090b23fdd1c 100644
> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
> @@ -175,7 +175,7 @@ each RX queue uses its own mempool.
> .. code-block:: c
>
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
> nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
> diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
> index 263d8debc81b..a88cb8f14a4b 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -157,13 +157,7 @@ Forwarding application is shown below:
> }
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. code-block:: c
> -
> - static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> - };
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return 0;
> }
>
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..2554f5fdf59a 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
> avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
> struct avp_dev *avp)
> {
> - unsigned int max_rx_pkt_len;
> + unsigned int max_rx_pktlen;
>
> - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
>
> - if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> - (max_rx_pkt_len > avp->host_mbuf_size)) {
> + if ((max_rx_pktlen > avp->guest_mbuf_size) ||
> + (max_rx_pktlen > avp->host_mbuf_size)) {
> /*
> * If the guest MTU is greater than either the host or guest
> * buffers then chained mbufs have to be enabled in the TX
> * direction. It is assumed that the application will not need
> - * to send packets larger than their max_rx_pkt_len (MRU).
> + * to send packets larger than their MTU.
> */
> return 1;
> }
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
>
> PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
> avp->max_rx_pkt_len,
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
> avp->host_mbuf_size,
> avp->guest_mbuf_size);
>
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> * function; send it truncated to avoid the performance
> * hit of having to manage returning the already
> * allocated buffer to the free list. This should not
> - * happen since the application should have set the
> - * max_rx_pkt_len based on its MTU and it should be
> + * happen since the application should not send
> + * packets larger than its MTU and it should be
> * policing its own packet sizes.
> */
> txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> struct axgbe_port *pdata = dev->data->dev_private;
> int ret;
> struct rte_eth_dev_data *dev_data = dev->data;
> - uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
> + uint16_t max_pkt_len;
>
> dev->dev_ops = &axgbe_eth_dev_ops;
>
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>
> rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
> rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
> max_pkt_len > pdata->rx_buf_size)
> dev_data->scattered_rx = 1;
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (frame_size > AXGBE_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> val = 0;
> }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
> bnx2x_dev_configure(struct rte_eth_dev *dev)
> {
> struct bnx2x_softc *sc = dev->data->dev_private;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
>
> PMD_INIT_FUNC_TRACE(sc);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - dev->data->mtu = sc->mtu;
> - }
> + sc->mtu = dev->data->dev_conf.rxmode.mtu;
>
> if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
> PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index c9536f79267d..335505a106d5 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1128,13 +1128,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
> rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
> - BNXT_NUM_VLANS;
> - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> - }
> + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
> return 0;
>
> resource_error:
> @@ -1172,6 +1167,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
> */
> static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> uint16_t buf_size;
> int i;
>
> @@ -1186,7 +1182,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
>
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
> + if (eth_dev->data->mtu + overhead > buf_size)
> return 1;
> }
> return 0;
> @@ -2992,6 +2988,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
>
> int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> uint32_t rc = 0;
> @@ -3005,8 +3002,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> if (!eth_dev->data->nb_rx_queues)
> return rc;
>
> - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> - VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> + new_pkt_size = new_mtu + overhead;
>
> /*
> * Disallow any MTU change that would require scattered receive support
> @@ -3033,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> }
>
> /* Is there a change in mtu setting? */
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
> + if (eth_dev->data->mtu == new_mtu)
> return rc;
>
> for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3055,9 +3051,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> }
> }
>
> - if (!rc)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
> -
> PMD_DRV_LOG(INFO, "New MTU is %d\n", new_mtu);
>
> return rc;
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b01ef003e65c..b2a1833e3f91 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_VLAN_FILTER;
>
> - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + slave_eth_dev->data->dev_conf.rxmode.mtu =
> + bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
> index 7adab4605819..da6c5e8f242f 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> }
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct cnxk_eth_rxq_sp *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
> /* Setup scatter mode if needed by jumbo */
> nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
> - CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> - rc = cnxk_nix_mtu_set(eth_dev, mtu);
> + rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> plt_err("Failed to set default MTU size, rc=%d", rc);
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> goto exit;
> }
>
> - frame_size += RTE_ETHER_CRC_LEN;
> -
> - if (frame_size > RTE_ETHER_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return err;
>
> /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> + if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> /* set to jumbo mode if needed */
> - if (new_mtu > CXGBE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
>
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> - if (!err)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
> -
> return err;
> }
>
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> const struct rte_eth_rxconf *rx_conf __rte_unused,
> struct rte_mempool *mp)
> {
> - unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> rxq->fl.size = temp_nb_desc;
>
> /* Set to jumbo mode if necessary */
> - if (pkt_len > CXGBE_ETH_MAX_LEN)
> + if (eth_dev->data->mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
> unsigned int mtu;
> int ret;
>
> - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> + mtu = pi->eth_dev->data->mtu;
>
> conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
> u32 wr_mid;
> u64 cntrl, *end;
> bool v6;
> - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> + u32 max_pkt_len;
>
> /* Reject xmit if queue is stopped */
> if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
> return 0;
> }
>
> + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
> (unlikely(m->pkt_len > max_pkt_len)))
> goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 27d670f843d2..56703e3a39e8 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (frame_size > DPAA_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> struct fman_if *fif = dev->process_private;
> struct __fman_if *__fif;
> struct rte_intr_handle *intr_handle;
> + uint32_t max_rx_pktlen;
> int speed, duplex;
> int ret;
>
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - DPAA_PMD_DEBUG("enabling jumbo");
> -
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - DPAA_MAX_RX_PKT_LEN)
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - else {
> - DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> - "supported is %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - DPAA_MAX_RX_PKT_LEN);
> - max_len = DPAA_MAX_RX_PKT_LEN;
> - }
> -
> - fman_if_set_maxfrm(dev->process_private, max_len);
> - dev->data->mtu = max_len
> - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> + DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> + "supported is %d",
> + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
> }
>
> + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
> if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
> DPAA_PMD_DEBUG("enabling scatter mode");
> fman_if_set_sg(dev->process_private, 1);
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> u32 flags = 0;
> int ret;
> u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> return -EINVAL;
> }
>
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> + VLAN_TAG_SIZE;
> /* Max packet can fit in single buffer */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + if (max_rx_pktlen <= buffsz) {
> ;
> } else if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - buffsz * DPAA_SGT_MAX_ENTRIES) {
> - DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> + DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
> "MaxSGlist %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz * DPAA_SGT_MAX_ENTRIES);
> + max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
> rte_errno = EOVERFLOW;
> return -rte_errno;
> }
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz - RTE_PKTMBUF_HEADROOM);
> + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
> }
>
> dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>
> dpaa_intf->valid = 1;
> DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
> - fman_if_get_sg_enable(fif),
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + fman_if_get_sg_enable(fif), max_rx_pktlen);
> /* checking if push mode only, no error check for now */
> if (!rxq->is_static &&
> dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 8b803b8542dc..6213bcbf3a43 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> int tx_l3_csum_offload = false;
> int tx_l4_csum_offload = false;
> int ret, tc_index;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
> - ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> - priv->token, eth_conf->rxmode.max_rx_pkt_len
> - - RTE_ETHER_CRC_LEN);
> - if (ret) {
> - DPAA2_PMD_ERR(
> - "Unable to set mtu. check config");
> - return ret;
> - }
> - dev->data->mtu =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> - VLAN_TAG_SIZE;
> - } else {
> - return -1;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> + if (ret) {
> + DPAA2_PMD_ERR("Unable to set mtu. check config");
> + return ret;
> }
> + } else {
> + return -1;
> }
>
> if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (frame_size > DPAA2_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index 10ee0f33415a..35b517891d67 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
> }
>
> static void
> @@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE);
> + E1000_WRITE_REG(hw, E1000_RLPML,
> + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
> }
>
> static int
> @@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
>
> return 0;
> }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..de12997b4bdd 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> rctl |= E1000_RCTL_LPE;
>
> /*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) > buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> /* setup MTU */
> - e1000_rlpml_set_vf(hw,
> - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE));
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) > buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index dfe68279fa7b..e9b718786a39 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
> return rc;
> }
>
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> - uint32_t max_frame_len = adapter->max_mtu;
> -
> - if (adapter->edev_data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - max_frame_len =
> - adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - return max_frame_len;
> -}
> -
> static int ena_check_valid_conf(struct ena_adapter *adapter)
> {
> - uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> + uint32_t mtu = adapter->edev_data->mtu;
>
> - if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
> "max mtu: %d, min mtu: %d",
> - max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return ENA_COM_UNSUPPORTED;
> }
>
> @@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> ena_dev = &adapter->ena_dev;
> ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
>
> - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_DRV_LOG(ERR,
> "Invalid MTU setting. new_mtu: %d "
> "max mtu: %d min mtu: %d\n",
> - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return -EINVAL;
> }
>
> @@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> ETH_RSS_UDP;
>
> dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> - dev_info->max_rx_pktlen = adapter->max_mtu;
> + dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + dev_info->min_mtu = ENA_MIN_MTU;
> + dev_info->max_mtu = adapter->max_mtu;
> dev_info->max_mac_addrs = 1;
>
> dev_info->max_rx_queues = adapter->max_num_io_queues;
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (frame_size > ENETC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads &=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /*setting the MTU*/
> enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
> ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
> struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
> uint64_t rx_offloads = eth_conf->rxmode.offloads;
> uint32_t checksum = L3_CKSUM | L4_CKSUM;
> + uint32_t max_len;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> - ENETC_SET_MAXFRM(max_len));
> - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> - ENETC_MAC_MAXFRM_SIZE);
> - enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> - 2 * ENETC_MAC_MAXFRM_SIZE);
> - dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
> - RTE_ETHER_CRC_LEN;
> - }
> + max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> + enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> + enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
> int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
> * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
> * a hint to the driver to size receive buffers accordingly so that
> * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> - * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
> + * the user decide the buffer size via rxmode.mtu, basically
> * ignoring vNIC mtu.
> */
> device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
> struct rq_enet_desc *rqd = rq->ring.descs;
> unsigned i;
> dma_addr_t dma_addr;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint16_t rq_buf_len;
>
> if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
>
> /*
> * If *not* using scatter and the mbuf size is greater than the
> - * requested max packet size (max_rx_pkt_len), then reduce the
> - * posted buffer size to max_rx_pkt_len. HW still receives packets
> - * larger than max_rx_pkt_len, but they will be truncated, which we
> + * requested max packet size (mtu + eth overhead), then reduce the
> + * posted buffer size to max packet size. HW still receives packets
> + * larger than max packet size, but they will be truncated, which we
> * drop in the rx handler. Not ideal, but better than returning
> * large packets when the user is not expecting them.
> */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
> - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> - rq_buf_len = max_rx_pkt_len;
> + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> + rq_buf_len = max_rx_pktlen;
> for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
> mb = rte_mbuf_raw_alloc(rq->mp);
> if (mb == NULL) {
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> unsigned int mbuf_size, mbufs_per_pkt;
> unsigned int nb_sop_desc, nb_data_desc;
> uint16_t min_sop, max_sop, min_data, max_data;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
>
> /*
> * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
>
> mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM);
> - /* max_rx_pkt_len includes the ethernet header and CRC. */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + /* max_rx_pktlen includes the ethernet header and CRC. */
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>
> if (enic->rte_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
> /* ceil((max pkt len)/mbuf_size) */
> - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
> + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
> } else {
> dev_info(enic, "Scatter rx mode disabled\n");
> mbufs_per_pkt = 1;
> - if (max_rx_pkt_len > mbuf_size) {
> + if (max_rx_pktlen > mbuf_size) {
> dev_warning(enic, "The maximum Rx packet size (%u) is"
> " larger than the mbuf size (%u), and"
> " scatter is disabled. Larger packets will"
> " be truncated.\n",
> - max_rx_pkt_len, mbuf_size);
> + max_rx_pktlen, mbuf_size);
> }
> }
>
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> rq_sop->data_queue_enable = 1;
> rq_data->in_use = 1;
> /*
> - * HW does not directly support rxmode.max_rx_pkt_len. HW always
> + * HW does not directly support MTU. HW always
> * receives packet sizes up to the "max" MTU.
> * If not using scatter, we can achieve the effect of dropping
> * larger packets by reducing the size of posted buffers.
> * See enic_alloc_rx_queue_mbufs().
> */
> - if (max_rx_pkt_len <
> - enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> - dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
> - " when scatter rx mode is in use.\n");
> + if (enic->rte_dev->data->mtu < enic->max_mtu) {
> + dev_warning(enic,
> + "mtu is ignored when scatter rx mode is in use.\n");
> }
> } else {
> dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> if (mbufs_per_pkt > 1) {
> dev_info(enic, "For max packet size %u and mbuf size %u valid"
> " rx descriptor range is %u to %u\n",
> - max_rx_pkt_len, mbuf_size, min_sop + min_data,
> + max_rx_pktlen, mbuf_size, min_sop + min_data,
> max_sop + max_data);
> }
> dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
> "MTU (%u) is greater than value configured in NIC (%u)\n",
> new_mtu, config_mtu);
>
> - /* Update the MTU and maximum packet length */
> - eth_dev->data->mtu = new_mtu;
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - enic_mtu_to_max_rx_pktlen(new_mtu);
> -
> /*
> * If the device has not started (enic_enable), nothing to do.
> * Later, enic_enable() will set up RQs reflecting the new maximum
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
> FM10K_SRRCTL_LOOPBACK_SUPPRESS);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> + if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> 2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
> rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
> uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 946465779f2e..c737ef8d06d8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>
> /* mtu size is 256~9600 */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - HINIC_MAX_JUMBO_FRAME_SIZE) {
> + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + HINIC_MIN_FRAME_SIZE ||
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + HINIC_MAX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR,
> - "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
> + "Packet length out of range, get packet length:%d, "
> "expect between %d and %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
> HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
>
> - nic_dev->mtu_size =
> - HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
>
> /* rss template */
> err = hinic_config_mq_mode(dev, TRUE);
> @@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - uint32_t frame_size;
> int ret = 0;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
> @@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - /* update max frame size */
> - frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> - if (frame_size > HINIC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index e51512560e15..8bccdeddb2f7 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
> {
> struct hns3_adapter *hns = dev->data->dev_private;
> struct hns3_hw *hw = &hns->hw;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> - int ret;
> -
> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> - return 0;
> + uint32_t max_rx_pktlen;
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
> hns3_err(hw, "maximum Rx packet length must be greater than %u "
> "and no more than %u when jumbo frame enabled.",
> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
The preceding check on the maximum frame length was based on the
scenario where jumbo frames are enabled.
Since this patchset no longer ties the configuration to the jumbo
frame offload, the maximum frame length does not need to be checked
here; it is enough to ensure that conf->rxmode.mtu is valid, and that
should already be guaranteed by dev_configure() in the framework.
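
For illustration, below is a rough sketch (not the actual ethdev code,
and check_requested_mtu is a hypothetical helper name) of the kind of
range check dev_configure() in the framework is expected to perform,
assuming the driver reports min_mtu/max_mtu via rte_eth_dev_info:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Illustrative sketch only: validate the MTU requested through
     * rte_eth_dev_configure() against the limits the driver reports.
     */
    static int
    check_requested_mtu(uint16_t port_id, const struct rte_eth_conf *conf)
    {
            struct rte_eth_dev_info dev_info;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;

            if (conf->rxmode.mtu < dev_info.min_mtu ||
                conf->rxmode.mtu > dev_info.max_mtu)
                    return -EINVAL; /* outside the device-supported range */

            return 0;
    }

With such a check in the ethdev layer, PMD-level frame length checks
like the one above become redundant.
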
> @@ -2400,13 +2391,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
> return -EINVAL;
> }
>
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3_dev_mtu_set(dev, mtu);
> - if (ret)
> - return ret;
> - dev->data->mtu = mtu;
> -
> - return 0;
> + return hns3_dev_mtu_set(dev, conf->rxmode.mtu);
> }
>
> static int
> @@ -2622,7 +2607,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
> + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2643,7 +2628,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index e582503f529b..ca839fa55fa0 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> uint16_t nb_rx_q = dev->data->nb_rx_queues;
> uint16_t nb_tx_q = dev->data->nb_tx_queues;
> struct rte_eth_rss_conf rss_conf;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> + uint32_t max_rx_pktlen;
> bool gro_en;
> int ret;
>
> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be greater "
> - "than %u and less than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - ret = -EINVAL;
> - goto cfg_err;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3vf_dev_mtu_set(dev, mtu);
> - if (ret)
> - goto cfg_err;
> - dev->data->mtu = mtu;
> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
> + hns3_err(hw, "maximum Rx packet length must be greater "
> + "than %u and less than %u when jumbo frame enabled.",
> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> + (uint16_t)HNS3_MAX_FRAME_LEN);
> + ret = -EINVAL;
> + goto cfg_err;
> }
Please remove this check now, thanks!
>
> + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret)
> + goto cfg_err;
> +
> ret = hns3vf_dev_configure_vlan(dev);
> if (ret)
> goto cfg_err;
> @@ -935,7 +926,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index cb9eccf9faae..6b81688a7225 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
> uint16_t nb_desc)
> {
> struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
> eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
> uint16_t min_vec_bds;
>
> /*
> * HNS3 hardware network engine set scattered as default. If the driver
> * is not work in scattered mode and the pkts greater than buf_size
> - * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
> + * but smaller than frame size will be distributed to multiple BDs.
> * Driver cannot handle this situation.
> */
> - if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
> - hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
> + if (!hw->data->scattered_rx && frame_size > buf_size) {
> + hns3_err(hw, "frame size is not allowed to be set greater "
> "than rx_buf_len if scattered is off.");
> return -EINVAL;
> }
> @@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
> }
>
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
> dev->data->scattered_rx = true;
> }
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 7b230e2ed17a..1161f301b9ae 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> index 0cfe13b7b227..086a167ca672 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << I40E_RXQ_CTX_DBUFF_SHIFT));
> len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS;
> - rxq->max_pkt_len = RTE_MIN(len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD);
>
> /**
> * Check if the jumbo frame and maximum packet length are set correctly
> @@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
>
> hw->adapter_stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
>
> @@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 8d65f287f455..aa43796ef1af 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2904,8 +2904,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> }
>
> rxq->max_pkt_len =
> - RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> - rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> + RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> + data->mtu + I40E_ETH_OVERHEAD);
> if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 41382c6d669b..13c2329d85a7 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
> struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> struct rte_eth_dev_data *dev_data = dev->data;
> uint16_t buf_size, max_pkt_len, len;
> + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
>
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
>
> /* Calculate the maximum packet length allowed */
> len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
> - max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(len, frame_size);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> @@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
>
> adapter->stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
> num_queue_pairs = vf->num_queue_pairs;
> @@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IAVF_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 69fe6e63d1d3..34b6c9b2a7ed 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> - max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + dev->data->mtu + ICE_ETH_OVERHEAD);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 63f735d1ff72..bdda6fee3f8e 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3426,8 +3426,8 @@ ice_dev_start(struct rte_eth_dev *dev)
> pf->adapter_stopped = false;
>
> /* Set the max frame size to default value*/
> - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> + max_frame_size = pf->dev_data->mtu ?
> + pf->dev_data->mtu + ICE_ETH_OVERHEAD :
> ICE_FRAME_SIZE_MAX;
>
> /* Set the max frame size to HW*/
> @@ -3806,14 +3806,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > ICE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return 0;
> }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 3f6e7359844b..a3de4172e2bc 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -262,15 +262,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
>
> /* Set buffer size as the head split is disabled. */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM);
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> - rxq->max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len =
> + RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + frame_size);
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -361,11 +362,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> return -EINVAL;
> }
>
> - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> - RTE_PKTMBUF_HEADROOM);
> -
> /* Check if scattered RX needs to be used. */
> - if (rxq->max_pkt_len > buf_size)
> + if (frame_size > buf_size)
> dev_data->scattered_rx = 1;
>
> rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
>
> #define IGC_INTEL_VENDOR_ID 0x8086
>
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> - RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
> -
> #define IGC_FC_PAUSE_TIME 0x0680
> #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
> #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>
> /* switch to jumbo mode if needed */
> if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= IGC_RCTL_LPE;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl &= ~IGC_RCTL_LPE;
> }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> return 0;
> }
> @@ -2486,6 +2473,7 @@ static int
> igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> + frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> @@ -2519,6 +2498,7 @@ static int
> igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> + frame_size, MAX_RX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
> #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
> #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
>
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
> +
> /*
> * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
> * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..d80808a002f5 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> struct igc_rx_queue *rxq;
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint32_t rctl;
> uint32_t rxcsum;
> uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> rctl |= IGC_RCTL_LPE;
> -
> - /*
> - * Set maximum packet length by default, and might be updated
> - * together with enabling/disabling dual VLAN.
> - */
> - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> - } else {
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> +
> + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> + /*
> + * Set maximum packet length by default, and might be updated
> + * together with enabling/disabling dual VLAN.
> + */
> + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> + if (max_rx_pktlen > buf_size)
> dev->data->scattered_rx = 1;
> } else {
> /*
> diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
> ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> - uint32_t max_frame_size;
> int err;
>
> IONIC_PRINT_CALL();
>
> /*
> * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> - * is done by the the API.
> + * is done by the API.
> */
>
> - /*
> - * Max frame size is MTU + Ethernet header + VLAN + QinQ
> - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> - */
> - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
> - return -EINVAL;
> -
> err = ionic_lif_change_mtu(lif, mtu);
> if (err)
> return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
> struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
> struct rte_mbuf *rxm, *rxm_seg;
> uint32_t max_frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint64_t pkt_flags = 0;
> uint32_t pkt_type;
> struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
> int __rte_cold
> ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
> {
> - uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
> struct ionic_rx_qcq *rxq;
> int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> {
> struct ionic_rx_qcq *rxq = rx_queue;
> uint32_t frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> struct ionic_rx_service service_cb_arg;
>
> service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IPN3KE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index b5371568b54d..b9048ade3c35 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5172,7 +5172,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> struct ixgbe_hw *hw;
> struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
> - struct rte_eth_dev_data *dev_data = dev->data;
> int ret;
>
> ret = ixgbe_dev_info_get(dev, &dev_info);
> @@ -5186,9 +5185,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> /* If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> */
> - if (dev_data->dev_started && !dev_data->scattered_rx &&
> - (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + if (dev->data->dev_started && !dev->data->scattered_rx &&
> + frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -5197,23 +5196,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > IXGBE_ETH_MAX_LEN) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU) {
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
>
> return 0;
> @@ -6267,12 +6261,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
> * set as 0x4.
> */
> if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_JUMBO_FRAME);
> + (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
> else
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_DEFAULT);
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
>
> /* Set RTTBCNRC of queue X */
> IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
> @@ -6556,8 +6548,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>
> hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> + if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> return -EINVAL;
>
> /* If device is started, refuse mtu that requires the support of
> @@ -6565,7 +6556,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> */
> if (dev_data->dev_started && !dev_data->scattered_rx &&
> (max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -6582,8 +6573,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (ixgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> index fbf2b17d160f..9bcbc445f2d0 100644
> --- a/drivers/net/ixgbe/ixgbe_pf.c
> +++ b/drivers/net/ixgbe/ixgbe_pf.c
> @@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> * if PF has jumbo frames enabled which means legacy
> * VFs are disabled.
> */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> break;
> /* fall through */
> default:
> @@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> * legacy VFs.
> */
> if (max_frame > IXGBE_ETH_MAX_LEN ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + dev->data->mtu > RTE_ETHER_MTU)
> return -1;
> break;
> }
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index d69f36e97770..5e32a6ce6940 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -5051,6 +5051,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> uint16_t buf_size;
> uint16_t i;
> struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> int rc;
>
> PMD_INIT_FUNC_TRACE();
> @@ -5086,7 +5087,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (rx_conf->max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
> } else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> @@ -5160,8 +5161,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> IXGBE_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> + if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -5641,6 +5641,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> struct ixgbe_hw *hw;
> struct ixgbe_rx_queue *rxq;
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> uint64_t bus_addr;
> uint32_t srrctl, psrtype = 0;
> uint16_t buf_size;
> @@ -5677,10 +5678,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
> * VF packets received can work in all cases.
> */
> - if (ixgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + if (ixgbevf_rlpml_set_vf(hw, frame_size)) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
> return -EINVAL;
> }
>
> @@ -5739,8 +5739,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> + (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> dev->data->scattered_rx = 1;
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index b72060a4499b..f0c165c89ba7 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
> - uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> struct lio_dev_ctrl_cmd ctrl_cmd;
> struct lio_ctrl_pkt ctrl_pkt;
>
> @@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return -1;
> }
>
> - if (frame_len > LIO_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
> - eth_dev->data->mtu = mtu;
> -
> return 0;
> }
>
> @@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
> static int
> lio_dev_start(struct rte_eth_dev *eth_dev)
> {
> - uint16_t mtu;
> - uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> int ret = 0;
> @@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
> goto dev_mtu_set_error;
> }
>
> - mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
> - if (mtu < RTE_ETHER_MIN_MTU)
> - mtu = RTE_ETHER_MIN_MTU;
> -
> - if (eth_dev->data->mtu != mtu) {
> - ret = lio_dev_mtu_set(eth_dev, mtu);
> - if (ret)
> - goto dev_mtu_set_error;
> - }
> + ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> + if (ret)
> + goto dev_mtu_set_error;
>
> return 0;
>
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 978cbb8201ea..4a5cfd22aa71 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> int ret;
> uint32_t crc_present;
> uint64_t offloads;
> + uint32_t max_rx_pktlen;
>
> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> };
> /* Enable scattered packets support for this queue if necessary. */
> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
> ;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> - uint32_t size =
> - RTE_PKTMBUF_HEADROOM +
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
> uint32_t sges_n;
>
> /*
> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> /* Make sure sges_n did not overflow. */
> size = mb_len * (1 << rxq->sges_n);
> size -= RTE_PKTMBUF_HEADROOM;
> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
> + if (size < max_rx_pktlen) {
> rte_errno = EOVERFLOW;
> ERROR("%p: too many SGEs (%u) needed to handle"
> " requested maximum packet size %u",
> (void *)dev,
> - 1 << sges_n,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + 1 << sges_n, max_rx_pktlen);
> goto error;
> }
> } else {
> WARN("%p: the requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - (void *)dev,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + (void *)dev, max_rx_pktlen,
> mb_len - RTE_PKTMBUF_HEADROOM);
> }
> DEBUG("%p: maximum number of segments per packet: %u",
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index bb9a9080871d..bd16dde6de13 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1336,10 +1336,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> uint64_t offloads = conf->offloads |
> dev->data->dev_conf.rxmode.offloads;
> unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
> - unsigned int max_rx_pkt_len = lro_on_queue ?
> + unsigned int max_rx_pktlen = lro_on_queue ?
> dev->data->dev_conf.rxmode.max_lro_pkt_size :
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
> + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
> RTE_PKTMBUF_HEADROOM;
> unsigned int max_lro_size = 0;
> unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
> @@ -1378,7 +1379,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> * needed to handle max size packets, replace zero length
> * with the buffer length from the pool.
> */
> - tail_len = max_rx_pkt_len;
> + tail_len = max_rx_pktlen;
> do {
> struct mlx5_eth_rxseg *hw_seg =
> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
> @@ -1416,7 +1417,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
> + tmpl->rxq.rxseg_n, max_rx_pktlen,
> MLX5_MAX_RXQ_NSEG);
> rte_errno = ENOTSUP;
> goto error;
> @@ -1441,7 +1442,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
> " configured and no enough mbuf space(%u) to contain "
> "the maximum RX packet length(%u) with head-room(%u)",
> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
> RTE_PKTMBUF_HEADROOM);
> rte_errno = ENOSPC;
> goto error;
> @@ -1460,7 +1461,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> * following conditions are met:
> * - MPRQ is enabled.
> * - The number of descs is more than the number of strides.
> - * - max_rx_pkt_len plus overhead is less than the max size
> + * - max_rx_pktlen plus overhead is less than the max size
> * of a stride or mprq_stride_size is specified by a user.
> * Need to make sure that there are enough strides to encap
> * the maximum packet size in case mprq_stride_size is set.
> @@ -1484,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
> tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
> config->mprq.max_memcpy_len);
> - max_lro_size = RTE_MIN(max_rx_pkt_len,
> + max_lro_size = RTE_MIN(max_rx_pktlen,
> (1u << tmpl->rxq.strd_num_n) *
> (1u << tmpl->rxq.strd_sz_n));
> DRV_LOG(DEBUG,
> @@ -1493,9 +1494,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> dev->data->port_id, idx,
> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
> } else if (tmpl->rxq.rxseg_n == 1) {
> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
> tmpl->rxq.sges_n = 0;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> unsigned int sges_n;
>
> @@ -1517,13 +1518,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - 1 << sges_n, max_rx_pkt_len,
> + 1 << sges_n, max_rx_pktlen,
> 1u << MLX5_MAX_LOG_RQ_SEGS);
> rte_errno = ENOTSUP;
> goto error;
> }
> tmpl->rxq.sges_n = sges_n;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> }
> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
> DRV_LOG(WARNING,
> diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
> index a3ee15020466..520c6fdb1d31 100644
> --- a/drivers/net/mvneta/mvneta_ethdev.c
> +++ b/drivers/net/mvneta/mvneta_ethdev.c
> @@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_NETA_ETH_HDRS_LEN;
> -
> if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
> priv->multiseg = 1;
>
> @@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> /* It is OK. New MTU will be set later on mvneta_dev_start */
> return 0;
> diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
> index dfa7ecc09039..2cd4fb31348b 100644
> --- a/drivers/net/mvneta/mvneta_rxtx.c
> +++ b/drivers/net/mvneta/mvneta_rxtx.c
> @@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> struct mvneta_priv *priv = dev->data->dev_private;
> struct mvneta_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
>
> - if (frame_size < max_rx_pkt_len) {
> + if (frame_size < max_rx_pktlen) {
> MVNETA_LOG(ERR,
> "Mbuf size must be increased to %u bytes to hold up "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
> index 63d348e27936..9d578b4ffa5d 100644
> --- a/drivers/net/mvpp2/mrvl_ethdev.c
> +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> @@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_PP2_ETH_HDRS_LEN;
> - if (dev->data->mtu > priv->max_mtu) {
> - MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
> - dev->data->mtu,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - priv->max_mtu);
> - return -EINVAL;
> - }
> + if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
> + MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
> + dev->data->dev_conf.rxmode.mtu,
> + priv->max_mtu);
> + return -EINVAL;
> }
>
> if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
> @@ -589,9 +584,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> return 0;
>
> @@ -1984,7 +1976,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> struct mrvl_priv *priv = dev->data->dev_private;
> struct mrvl_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
> int ret, tc, inq;
> uint64_t offloads;
>
> @@ -1999,17 +1991,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> return -EFAULT;
> }
>
> - frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> - MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
> - if (frame_size < max_rx_pkt_len) {
> + frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
> + if (frame_size < max_rx_pktlen) {
> MRVL_LOG(WARNING,
> "Mbuf size must be increased to %u bytes to hold up "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MRVL_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> index b18edd8c7bac..ff531fdb2354 100644
> --- a/drivers/net/nfp/nfp_net.c
> +++ b/drivers/net/nfp/nfp_net.c
> @@ -644,7 +644,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> }
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->mtu = rxmode->max_rx_pkt_len;
> + hw->mtu = dev->data->mtu;
>
> if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> @@ -1551,16 +1551,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> /* switch to jumbo mode if needed */
> - if ((uint32_t)mtu > RTE_ETHER_MTU)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
> -
> /* writing to configuration space */
> - nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
> + nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> hw->mtu = mtu;
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index 9f4c0503b4d4..69c3bda12df8 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > OCCTX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
> frame_size);
>
> @@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> /* Setup scatter mode if needed by jumbo */
> - if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (data->mtu > buffsz) {
> nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
> nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
> @@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
> evdev_priv->rx_offload_flags = nic->rx_offload_flags;
> evdev_priv->tx_offload_flags = nic->tx_offload_flags;
>
> - /* Setup MTU based on max_rx_pkt_len */
> - nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
> + /* Setup MTU */
> + nic->mtu = data->mtu;
>
> return 0;
> }
> @@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
> octeontx_recheck_rx_offloads(rxq);
> }
>
> - /* Setting up the mtu based on max_rx_pkt_len */
> + /* Setting up the mtu */
> ret = octeontx_dev_mtu_set(dev, nic->mtu);
> if (ret) {
> octeontx_log_err("Failed to set default MTU size %d", ret);
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index 40af99a26a17..9f162475523c 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->pool);
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 5a4501208e9e..ba282762b749 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -58,14 +58,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > NIX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return rc;
> }
>
> @@ -74,7 +71,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct otx2_eth_rxq *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = data->rx_queues[0];
> @@ -82,10 +78,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> /* Setup scatter mode if needed by jumbo */
> otx2_nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
> -
> - rc = otx2_nix_mtu_set(eth_dev, mtu);
> + rc = otx2_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> otx2_err("Failed to set default MTU size %d", rc);
>
> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
> index feec4d10a26e..2619bd2f2a19 100644
> --- a/drivers/net/pfe/pfe_ethdev.c
> +++ b/drivers/net/pfe/pfe_ethdev.c
> @@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
> static int
> pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - int ret;
> struct pfe_eth_priv_s *priv = dev->data->dev_private;
> uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>
> /*TODO Support VLAN*/
> - ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> - if (!ret)
> - dev->data->mtu = mtu;
> -
> - return ret;
> + return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> }
>
> /* pfe_eth_enet_addr_byte_mac
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 323d46e6ebb2..53b2c0ca10e3 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
> return -ENOMEM;
> }
>
> - /* If jumbo enabled adjust MTU */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
> -
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
> eth_dev->data->scattered_rx = 1;
>
> @@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_dev_info dev_info = {0};
> struct qede_fastpath *fp;
> - uint32_t max_rx_pkt_len;
> uint32_t frame_size;
> uint16_t bufsz;
> bool restart = false;
> @@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> DP_ERR(edev, "Error during getting ethernet device info\n");
> return rc;
> }
> - max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
> - frame_size = max_rx_pkt_len;
> +
> + frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
> DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
> mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
> @@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (frame_size > QEDE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->dev_started = 1;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
> -
> return 0;
> }
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 298f4e3e4273..62a126999a5c 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> struct qede_rx_queue *rxq;
> - uint16_t max_rx_pkt_len;
> + uint16_t max_rx_pktlen;
> uint16_t bufsz;
> int rc;
>
> @@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
> dev->data->rx_queues[qid] = NULL;
> }
>
> - max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> /* Fix up RX buffer size */
> bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> /* cache align the mbuf size to simplfy rx_buf_size calculation */
> bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
> if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
> - (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
> + (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
> if (!dev->data->scattered_rx) {
> DP_INFO(edev, "Forcing scatter-gather mode\n");
> dev->data->scattered_rx = 1;
> }
> }
>
> - rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
> + rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
> if (rc < 0)
> return rc;
>
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index c50ecea0b993..2afb13b77892 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1016,15 +1016,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>
> /*
> * The driver does not use it, but other PMDs update jumbo frame
> - * flag and max_rx_pkt_len when MTU is set.
> + * flag when MTU is set.
> */
> if (mtu > RTE_ETHER_MTU) {
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
> index ac117f9c4814..ca9538fb8f2f 100644
> --- a/drivers/net/sfc/sfc_port.c
> +++ b/drivers/net/sfc/sfc_port.c
> @@ -364,14 +364,10 @@ sfc_port_configure(struct sfc_adapter *sa)
> {
> const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
> struct sfc_port *port = &sa->port;
> - const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
>
> sfc_log_init(sa, "entry");
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - port->pdu = rxmode->max_rx_pkt_len;
> - else
> - port->pdu = EFX_MAC_PDU(dev_data->mtu);
> + port->pdu = EFX_MAC_PDU(dev_data->mtu);
>
> return 0;
> }
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index c515de3bf71d..0a8d29277aeb 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct pmd_internals *pmd = dev->data->dev_private;
> struct ifreq ifr = { .ifr_mtu = mtu };
> - int err = 0;
>
> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> - if (!err)
> - dev->data->mtu = mtu;
> -
> - return err;
> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> }
>
> static int
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index fc1844ddfce1..1d1360faff66 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (frame_size > NIC_HW_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> - /* Update max_rx_pkt_len */
> - rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
> nic->mtu = mtu;
>
> for (i = 0; i < nic->sqs_count; i++)
> @@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
> }
>
> /* Setup scatter mode if needed by jumbo */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE > buffsz)
> + if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
> dev->data->scattered_rx = 1;
> if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
> dev->data->scattered_rx = 1;
>
> - /* Setup MTU based on max_rx_pkt_len or default */
> - mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
> - dev->data->dev_conf.rxmode.max_rx_pkt_len
> - - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
> + /* Setup MTU */
> + mtu = dev->data->mtu;
>
> if (nicvf_dev_set_mtu(dev, mtu)) {
> PMD_INIT_LOG(ERR, "Failed to set default mtu size");
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index e62675520a15..d773a81665d7 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3482,8 +3482,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
> index 3021933965c8..44cfcd76bca4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.h
> +++ b/drivers/net/txgbe/txgbe_ethdev.h
> @@ -55,6 +55,10 @@
> #define TXGBE_5TUPLE_MAX_PRI 7
> #define TXGBE_5TUPLE_MIN_PRI 1
>
> +
> +/* The overhead from MTU to max frame size. */
> +#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> +
> #define TXGBE_RSS_OFFLOAD_ALL ( \
> ETH_RSS_IPV4 | \
> ETH_RSS_NONFRAG_IPV4_TCP | \
> diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
> index 6f577f4c80df..3362ca097ca7 100644
> --- a/drivers/net/txgbe/txgbe_ethdev_vf.c
> +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
> @@ -1143,8 +1143,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (txgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index 1a261287d1bd..c6cd3803c434 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure jumbo frame support, if any.
> */
> - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
> - } else {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
> - }
> + wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> + TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
>
> /*
> * If loopback mode is configured, set LPBK bit.
> @@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> + if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> + 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * VF packets received can work in all cases.
> */
> if (txgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + dev->data->mtu + TXGBE_ETH_OVERHEAD);
> return -EINVAL;
> }
>
> @@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> + (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 05683056676c..9491cc2669f7 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -2009,8 +2009,6 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
> struct virtio_hw *hw = dev->data->dev_private;
> - uint32_t ether_hdr_len = RTE_ETHER_HDR_LEN + VLAN_TAG_LEN +
> - hw->vtnet_hdr_size;
> uint64_t rx_offloads = rxmode->offloads;
> uint64_t tx_offloads = txmode->offloads;
> uint64_t req_features;
> @@ -2039,7 +2037,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> - if (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len)
> + if (rxmode->mtu > hw->max_mtu)
> req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
>
> if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
> index 5251db0b1674..98e47e0812d5 100644
> --- a/examples/bbdev_app/main.c
> +++ b/examples/bbdev_app/main.c
> @@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/bond/main.c b/examples/bond/main.c
> index f48400e21156..70c37a7d2ba7 100644
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -117,7 +117,6 @@ static struct rte_mempool *mbuf_pool;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/examples/distributor/main.c b/examples/distributor/main.c
> index 1b1029660e77..0b973d392dc8 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index f70ab0cc9e38..f5c28268d9f8 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index ca6cd200caad..9d9f150522dd 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
> index 94c155364842..3e1daa228316 100644
> --- a/examples/flow_classify/flow_classify.c
> +++ b/examples/flow_classify/flow_classify.c
> @@ -59,12 +59,6 @@ static struct{
> } parm_config;
> const char cb_port_delim[] = ":";
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> struct flow_classifier {
> struct rte_flow_classifier *cls;
> };
> @@ -191,7 +185,7 @@ static struct rte_flow_attr attr;
> static inline int
> port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> struct rte_ether_addr addr;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> @@ -202,6 +196,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index 2e377e2d4bb6..5dbf60f7ef54 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -806,7 +806,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> index 77a6a18d1914..f97287ce2243 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -146,7 +146,7 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> @@ -914,9 +914,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> @@ -959,8 +959,7 @@ main(int argc, char **argv)
> }
>
> /* set the mtu to the maximum received packet size */
> - ret = rte_eth_dev_set_mtu(portid,
> - local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
> + ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
> if (ret < 0) {
> printf("\n");
> rte_exit(EXIT_FAILURE, "Set MTU failed: "
> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
> index 16bcffe356bc..8628db22f56b 100644
> --- a/examples/ip_pipeline/link.c
> +++ b/examples/ip_pipeline/link.c
> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000, /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index ce8882a45883..f868e5d906c7 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -162,7 +162,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE,
It feels like the replacement of max_rx_pkt_len with MTU is
inappropriate here, because "max_rx_pkt_len" is the sum of "mtu" and
"overhead_len".
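For illustration, a minimal sketch of the relationship described above
(the helper name and the exact overhead terms are my assumptions; a real
device's overhead may also count VLAN/QinQ tags):

#include <rte_ether.h>

/* hypothetical helper: the MTU implied by a given max frame length */
static inline uint32_t
frame_len_to_mtu(uint32_t frame_len)
{
	/* assumes overhead_len = Ethernet header + CRC only */
	return frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
}

/* i.e. a config that previously used
 *     .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE
 * would correspond to
 *     .mtu = frame_len_to_mtu(JUMBO_FRAME_MAX_SIZE)
 * rather than .mtu = JUMBO_FRAME_MAX_SIZE as in the hunk above.
 */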
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_JUMBO_FRAME),
> @@ -875,7 +875,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
> */
>
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
> + + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += nb_rxd + nb_txd;
>
> @@ -1046,9 +1047,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index f252d34985b4..f8a1f544c21d 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -2161,7 +2160,6 @@ cryptodevs_init(uint16_t req_queue_num)
> static void
> port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> {
> - uint32_t frame_size;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_txconf *txconf;
> uint16_t nb_tx_queue, nb_rx_queue;
> @@ -2209,10 +2207,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> - frame_size = MTU_TO_FRAMELEN(mtu_size);
> - if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
> + if (mtu_size > RTE_ETHER_MTU)
> local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - local_port_conf.rxmode.max_rx_pkt_len = frame_size;
> + local_port_conf.rxmode.mtu = mtu_size;
>
> if (multi_seg_required()) {
> local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
> diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
> index fd6207a18b79..989d70ae257a 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -107,7 +107,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> },
> @@ -694,9 +694,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/kni/main.c b/examples/kni/main.c
> index beabb3c848aa..c10814c6a94f 100644
> --- a/examples/kni/main.c
> +++ b/examples/kni/main.c
> @@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
>
> memcpy(&conf, &port_conf, sizeof(conf));
> /* Set new MTU */
> - if (new_mtu > RTE_ETHER_MAX_LEN)
> + if (new_mtu > RTE_ETHER_MTU)
> conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* mtu + length of header + length of FCS = max pkt length */
> - conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
> - KNI_ENET_FCS_SIZE;
> + conf.rxmode.mtu = new_mtu;
> ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> if (ret < 0) {
> RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
> diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
> index 8e7eb3248589..cef4187467f0 100644
> --- a/examples/l2fwd-cat/l2fwd-cat.c
> +++ b/examples/l2fwd-cat/l2fwd-cat.c
> @@ -19,10 +19,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -};
> -
> /* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
>
> /*
> @@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> /* Configure the Ethernet device. */
> retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> if (retval != 0)
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 4f5161649234..b36c6123c652 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -215,7 +215,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
> index ab341e55b299..0d0857bf8041 100644
> --- a/examples/l2fwd-event/l2fwd_common.c
> +++ b/examples/l2fwd-event/l2fwd_common.c
> @@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
> uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
> struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index a1f457b564b6..913037d5f835 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -1833,12 +1832,12 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> }
> - printf("set jumbo frame max packet length "
> - "to %u\n",
> - (unsigned int)
> - port_conf.rxmode.max_rx_pkt_len);
> + printf("set jumbo frame max packet length to %u\n",
> + (unsigned int)port_conf.rxmode.mtu +
> + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> break;
> }
> case OPT_RULE_IPV4_NUM:
> diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> index 75c2e0ef3f3f..ddcb2fbc995d 100644
> --- a/examples/l3fwd-graph/main.c
> +++ b/examples/l3fwd-graph/main.c
> @@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> @@ -510,7 +509,8 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN
> + + RTE_ETHER_CRC_LEN);
> }
> break;
> }
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index f8dfed163423..02221a79fabf 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -250,7 +250,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -1972,11 +1971,13 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret -
> + (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> }
> - printf("set jumbo frame "
> - "max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + printf("set jumbo frame max packet length to %u\n",
> + (unsigned int)port_conf.rxmode.mtu +
> + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> }
>
> if (!strncmp(lgopts[option_index].name,
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 4cb800aa158d..80b5b93d5f0d 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -719,7 +718,8 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> }
> break;
> }
> diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
> index 2f593abf263d..1960f00ad28d 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -3004,10 +3003,12 @@ parse_args(int argc, char **argv)
> print_usage(prgname);
> return -1;
> }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> }
> printf("set jumbo frame max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + (unsigned int)port_conf.rxmode.mtu +
> + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> break;
> }
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
> index 467cda5a6dac..52f2a139d2c6 100644
> --- a/examples/pipeline/obj.c
> +++ b/examples/pipeline/obj.c
> @@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000, /* Jumbo frame max MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> index 173451eedcbe..54148631f09e 100644
> --- a/examples/ptpclient/ptpclient.c
> +++ b/examples/ptpclient/ptpclient.c
> @@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
> uint8_t ptp_enabled_port_nb;
> static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static const struct rte_ether_addr ether_multicast = {
> .addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
> };
> @@ -178,7 +172,7 @@ static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> struct rte_eth_dev_info dev_info;
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1;
> const uint16_t tx_rings = 1;
> int retval;
> @@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index 6e724f37835a..2e9ed3cf7ef7 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -54,7 +54,6 @@ static struct rte_mempool *pool = NULL;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> index 1abe003fc6ae..1367569c65db 100644
> --- a/examples/qos_sched/init.c
> +++ b/examples/qos_sched/init.c
> @@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
> index 192521c3c6b0..ea86c69b07ad 100644
> --- a/examples/rxtx_callbacks/main.c
> +++ b/examples/rxtx_callbacks/main.c
> @@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
> static const char usage[] =
> "%s EAL_ARGS -- [-t]\n";
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static struct {
> uint64_t total_cycles;
> uint64_t total_queue_cycles;
> @@ -118,7 +112,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -131,6 +125,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
> index 43b9d17a3c91..26c63ffed742 100644
> --- a/examples/skeleton/basicfwd.c
> +++ b/examples/skeleton/basicfwd.c
> @@ -17,12 +17,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> /* basicfwd.c: Basic DPDK skeleton forwarding example. */
>
> /*
> @@ -32,7 +26,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -44,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index d2179eadb979..e27712727f6a 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -639,8 +639,8 @@ us_vhost_parse_args(int argc, char **argv)
> if (ret) {
> vmdq_conf_default.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - vmdq_conf_default.rxmode.max_rx_pkt_len
> - = JUMBO_FRAME_MAX_SIZE;
> + vmdq_conf_default.rxmode.mtu =
> + JUMBO_FRAME_MAX_SIZE;
> }
> break;
>
> diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
> index 7d5bf6855426..309d1a3a8444 100644
> --- a/examples/vm_power_manager/main.c
> +++ b/examples/vm_power_manager/main.c
> @@ -51,17 +51,10 @@
> static uint32_t enabled_port_mask;
> static volatile bool force_quit;
>
> -/****************/
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c607eabb5b0c..3451125639f9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
>
> static inline int
> eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
> - uint32_t max_rx_pkt_len, uint32_t dev_info_size)
> + uint32_t max_rx_pktlen, uint32_t dev_info_size)
> {
> int ret = 0;
>
> if (dev_info_size == 0) {
> - if (config_size != max_rx_pkt_len) {
> + if (config_size != max_rx_pktlen) {
> RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
> " %u != %u is not allowed\n",
> - port_id, config_size, max_rx_pkt_len);
> + port_id, config_size, max_rx_pktlen);
> ret = -EINVAL;
> }
> } else if (config_size > dev_info_size) {
> @@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1375,11 +1389,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
Here, it can cause the user configuration to become inconsistent with
the configuration saved in the framework.
Is it more reasonable to provide a prompt message?
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
Above "max_rx_pktlen < RTE_ETHER_MIN_LEN " case will be inconsistent
with dev_set_mtu() API.
The reasons are as follows:
The value of RTE_ETHER_MIN_LEN is 64. If "overhead_len" is 26 caculated
by eth_dev_get_overhead_len(), it means
that dev->data->dev_conf.rxmode.mtu equal to 38 is reasonable.
But, in dev_set_mtu() API, the check for mtu is:
@@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
It should be noted that dev_info.min_mtu is RTE_ETHER_MIN_MTU (68).
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
> @@ -2142,13 +2149,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> + /* Get the real Ethernet overhead length */
> if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + uint16_t overhead_len;
> + uint32_t max_rx_pktlen;
> + int ret;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->mtu + overhead_len;
> if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - int ret = eth_dev_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> + ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> return ret;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index faf3bd901d75..9f288f98329c 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
> index 0036bda7465c..1491c815c312 100644
> --- a/lib/ethdev/rte_ethdev_trace.h
> +++ b/lib/ethdev/rte_ethdev_trace.h
> @@ -28,7 +28,7 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_u16(nb_tx_q);
> rte_trace_point_emit_u32(dev_conf->link_speeds);
> rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
> - rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
> + rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
> rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
> rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
> rte_trace_point_emit_u64(dev_conf->txmode.offloads);
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-13 13:48 ` Andrew Rybchenko
2021-07-18 7:49 ` Xu, Rosen
@ 2021-07-19 14:38 ` Ajit Khaparde
2 siblings, 0 replies; 112+ messages in thread
From: Ajit Khaparde @ 2021-07-19 14:38 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Somalapuram Amaranath, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Hemant Agrawal, Sachin Saxena, Haiyue Wang, Gagandeep Singh,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Andrew Rybchenko, Maciej Czekaj, Jiawen Wu, Jian Wang,
Thomas Monjalon, dpdk-dev
[-- Attachment #1: Type: text/plain, Size: 4154 bytes --]
On Fri, Jul 9, 2021 at 10:30 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> Setting MTU bigger than RTE_ETHER_MTU requires the jumbo frame support,
> and application should enable the jumbo frame offload support for it.
>
> When jumbo frame offload is not enabled by application, but MTU bigger
> than RTE_ETHER_MTU is requested there are two options, either fail or
> enable jumbo frame offload implicitly.
>
> Enabling jumbo frame offload implicitly is selected by many drivers
> since setting a big MTU value already implies it, and this increases
> usability.
>
> This patch moves this logic from drivers to the library, both to reduce
> the duplicated code in the drivers and to make behaviour more visible.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
> drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
> drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
> drivers/net/dpaa/dpaa_ethdev.c | 7 -------
> drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
> drivers/net/e1000/em_ethdev.c | 9 ++-------
> drivers/net/e1000/igb_ethdev.c | 9 ++-------
> drivers/net/enetc/enetc_ethdev.c | 7 -------
> drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
> drivers/net/hns3/hns3_ethdev.c | 8 --------
> drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
> drivers/net/i40e/i40e_ethdev.c | 5 -----
> drivers/net/i40e/i40e_ethdev_vf.c | 5 -----
> drivers/net/iavf/iavf_ethdev.c | 7 -------
> drivers/net/ice/ice_ethdev.c | 5 -----
> drivers/net/igc/igc_ethdev.c | 9 ++-------
> drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
> drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
> drivers/net/liquidio/lio_ethdev.c | 7 -------
> drivers/net/nfp/nfp_net.c | 6 ------
> drivers/net/octeontx/octeontx_ethdev.c | 5 -----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 4 ----
> drivers/net/sfc/sfc_ethdev.c | 9 ---------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 6 ------
> lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
> 28 files changed, 29 insertions(+), 171 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 76aeec077f2b..2960834b4539 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> val = 1;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> val = 0;
> - }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> return 0;
> }
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c
> index 335505a106d5..4344a012f06e 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3018,15 +3018,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> return -EINVAL;
> }
>
> - if (new_mtu > RTE_ETHER_MTU) {
> + if (new_mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag
2021-07-13 14:07 ` Andrew Rybchenko
@ 2021-07-21 12:26 ` Ferruh Yigit
2021-07-21 12:39 ` Ferruh Yigit
1 sibling, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-21 12:26 UTC (permalink / raw)
To: Andrew Rybchenko, Jerin Jacob, Xiaoyun Li, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Pavel Belous,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, Thomas Monjalon
Cc: dev
On 7/13/2021 3:07 PM, Andrew Rybchenko wrote:
> On 7/9/21 8:29 PM, Ferruh Yigit wrote:
>> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>>
>> Instead of drivers announce this capability, application can deduct the
>> capability by checking reported 'dev_info.max_mtu' or
>> 'dev_info.max_rx_pktlen'.
>>
>> And instead of application explicitly set this flag to enable jumbo
>> frames, this can be deducted by driver by comparing requested 'mtu' to
>> 'RTE_ETHER_MTU'.
>
> I can imagine the case when app wants to enable jumbo MTU in
> run-time, but enabling requires to know it in advance in order
> to configure HW correctly (i.e. offload is needed).
> I think it may be ignored. Driver should either reject MTU
> set in started state or do restart automatically on request.
>
As far as I can see we have both implementations. Most PMDs return an error if
the device is started, and a few try to restart to apply the configuration.
And many PMDs just record the value passed with this API and apply it in
device start, while some apply the value within the API itself.
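
For illustration, a minimal sketch of those two behaviours (the names below
are made up, not taken from any in-tree driver):

static int
example_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
	struct example_hw *hw = dev->data->dev_private;	/* hypothetical */

	/* Variant 1: refuse the runtime change, the port must be stopped first. */
	if (dev->data->dev_started)
		return -EBUSY;

	/* Variant 2 (other PMDs): only record the value here and program the
	 * hardware later in dev_start().
	 */
	hw->requested_mtu = mtu;

	return 0;
}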
> However, driver maintainers should keep it in mind reviewing
> the patch.
>
+1
>>
>> Removing this additional configuration for simplification.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> ethdev part:
>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> [snip]
>
>> diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
>> index 3b4d9c3ee6f4..1ae78fe71f02 100644
>> --- a/drivers/net/e1000/e1000_ethdev.h
>> +++ b/drivers/net/e1000/e1000_ethdev.h
>> @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
>> void em_dev_clear_queues(struct rte_eth_dev *dev);
>> void em_dev_free_queues(struct rte_eth_dev *dev);
>>
>> -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
>> -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
>> +uint64_t em_get_rx_port_offloads_capa(void);
>> +uint64_t em_get_rx_queue_offloads_capa(void);
>
> I'm not sure that it is a step in right direction.
> May be it is better to keep dev unused.
> net/e1000 maintainers should decide.
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library
2021-07-13 13:48 ` Andrew Rybchenko
@ 2021-07-21 12:26 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-21 12:26 UTC (permalink / raw)
To: Andrew Rybchenko, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: dev
On 7/13/2021 2:48 PM, Andrew Rybchenko wrote:
> On 7/9/21 8:29 PM, Ferruh Yigit wrote:
>> Setting MTU bigger than RTE_ETHER_MTU requires the jumbo frame support,
>> and application should enable the jumbo frame offload support for it.
>>
>> When jumbo frame offload is not enabled by application, but MTU bigger
>> than RTE_ETHER_MTU is requested there are two options, either fail or
>> enable jumbo frame offload implicitly.
>>
>> Enabling jumbo frame offload implicitly is selected by many drivers
>> since setting a big MTU value already implies it, and this increases
>> usability.
>>
>> This patch moves this logic from drivers to the library, both to reduce
>> the duplicated code in the drivers and to make behaviour more visible.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> Very good cleanup, many thanks.
>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> [snip]
>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 3451125639f9..d649a5dd69a9 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -3625,6 +3625,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>> int ret;
>> struct rte_eth_dev_info dev_info;
>> struct rte_eth_dev *dev;
>> + int is_jumbo_frame_capable = 0;
>>
>> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
>> dev = &rte_eth_devices[port_id];
>> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>>
>> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>> return -EINVAL;
>> +
>> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
>> + is_jumbo_frame_capable = 1;
>> }
>>
>> + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
>> + return -EINVAL;
>> +
>> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
>> - if (!ret)
>> + if (!ret) {
>
> Since line it updated anyway, may I ask to use explicit
> comparison vs 0 as coding style says.
>
ack, will fix all occurrences
>> dev->data->mtu = mtu;
>>
>> + /* switch to jumbo mode if needed */
>> + if (mtu > RTE_ETHER_MTU)
>> + dev->data->dev_conf.rxmode.offloads |=
>> + DEV_RX_OFFLOAD_JUMBO_FRAME;
>> + else
>> + dev->data->dev_conf.rxmode.offloads &=
>> + ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>> + }
>> +
>> return eth_err(port_id, ret);
>> }
>>
>>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag
2021-07-13 14:07 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
@ 2021-07-21 12:39 ` Ferruh Yigit
1 sibling, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-21 12:39 UTC (permalink / raw)
To: Andrew Rybchenko, Jerin Jacob, Xiaoyun Li, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Pavel Belous,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, Thomas Monjalon
Cc: dev
On 7/13/2021 3:07 PM, Andrew Rybchenko wrote:
<...>
>
>> diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
>> index 3b4d9c3ee6f4..1ae78fe71f02 100644
>> --- a/drivers/net/e1000/e1000_ethdev.h
>> +++ b/drivers/net/e1000/e1000_ethdev.h
>> @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
>> void em_dev_clear_queues(struct rte_eth_dev *dev);
>> void em_dev_free_queues(struct rte_eth_dev *dev);
>>
>> -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
>> -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
>> +uint64_t em_get_rx_port_offloads_capa(void);
>> +uint64_t em_get_rx_queue_offloads_capa(void);
>
> I'm not sure that it is a step in right direction.
> May be it is better to keep dev unused.
> net/e1000 maintainers should decide.
>
It is possible to keep 'dev' as unused, but these are driver-internal functions
and 'dev' is not used now; when it is needed, it is easy to add it back.
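
If keeping the parameter were preferred, a rough sketch of that alternative
(offload flags shown are illustrative only):

uint64_t
em_get_rx_port_offloads_capa(struct rte_eth_dev *dev __rte_unused)
{
	/* capability flags assembled exactly as before; keeping the parameter
	 * but marking it unused means the prototype does not change if 'dev'
	 * is needed again later.
	 */
	return DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SCATTER;
}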
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-19 3:35 ` Huisong Li
@ 2021-07-21 15:29 ` Ferruh Yigit
2021-07-22 7:21 ` Huisong Li
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-21 15:29 UTC (permalink / raw)
To: Huisong Li; +Cc: dev
On 7/19/2021 4:35 AM, Huisong Li wrote:
> Hi, Ferruh
>
Hi Huisong,
Thanks for the review.
> On 2021/7/10 1:29, Ferruh Yigit wrote:
>> There is a confusion on setting max Rx packet length, this patch aims to
>> clarify it.
>>
>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
>> rte_eth_conf'.
>>
>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>> stored into '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related but they work in a disconnected way, they
>> store the set values in different variables which makes hard to figure
>> out which one to use, also two different related method is confusing for
>> the users.
>>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, but this may be different from device to
>> device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>>
>> As solution, both APIs gets MTU as parameter, and both saves the result
>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>> from jumbo frame.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>> request and it should be used only within configure function and result
>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>> both application and PMD uses MTU from this variable.
>>
>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>> default 'RTE_ETHER_MTU' value is used.
>>
>> As additional clarification, MTU is used to configure the device for
>> physical Rx/Tx limitation. Other related issue is size of the buffer to
>> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
>> And compares MTU against Rx buffer size to decide enabling scattered Rx
>> or not, if PMD supports it. If scattered Rx is not supported by device,
>> MTU bigger than Rx buffer size should fail.
>>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
<...>
>> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
>> index e51512560e15..8bccdeddb2f7 100644
>> --- a/drivers/net/hns3/hns3_ethdev.c
>> +++ b/drivers/net/hns3/hns3_ethdev.c
>> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct
>> rte_eth_conf *conf)
>> {
>> struct hns3_adapter *hns = dev->data->dev_private;
>> struct hns3_hw *hw = &hns->hw;
>> - uint32_t max_rx_pkt_len;
>> - uint16_t mtu;
>> - int ret;
>> -
>> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
>> - return 0;
>> + uint32_t max_rx_pktlen;
>> - /*
>> - * If jumbo frames are enabled, MTU needs to be refreshed
>> - * according to the maximum RX packet length.
>> - */
>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>> hns3_err(hw, "maximum Rx packet length must be greater than %u "
>> "and no more than %u when jumbo frame enabled.",
>> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>
> The preceding check for the maximum frame length was based on the scenario where
> jumbo frames are enabled.
>
> Since there is no offload of jumbo frames in this patchset, the maximum frame
> length does not need to be checked and only ensure conf->rxmode.mtu is valid.
>
> These should be guaranteed by dev_configure() in the framework .
>
Got it, I agree that the 'HNS3_DEFAULT_FRAME_LEN' check is now wrong, and as
you said these checks are becoming redundant, so I will remove them.
In that case 'hns3_refresh_mtu()' becomes just a wrapper around
'hns3_dev_mtu_set()', so I will remove the function too.
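
For reference, a rough sketch of what the helper would reduce to after the
cleanup (illustrative, not the actual driver code), which is why removing it
entirely makes sense:

static int
hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
{
	return hns3_dev_mtu_set(dev, (uint16_t)conf->rxmode.mtu);
}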
<...>
>> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
>> b/drivers/net/hns3/hns3_ethdev_vf.c
>> index e582503f529b..ca839fa55fa0 100644
>> --- a/drivers/net/hns3/hns3_ethdev_vf.c
>> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
>> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>> uint16_t nb_rx_q = dev->data->nb_rx_queues;
>> uint16_t nb_tx_q = dev->data->nb_tx_queues;
>> struct rte_eth_rss_conf rss_conf;
>> - uint32_t max_rx_pkt_len;
>> - uint16_t mtu;
>> + uint32_t max_rx_pktlen;
>> bool gro_en;
>> int ret;
>> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>> goto cfg_err;
>> }
>> - /*
>> - * If jumbo frames are enabled, MTU needs to be refreshed
>> - * according to the maximum RX packet length.
>> - */
>> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>> - hns3_err(hw, "maximum Rx packet length must be greater "
>> - "than %u and less than %u when jumbo frame enabled.",
>> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>> - (uint16_t)HNS3_MAX_FRAME_LEN);
>> - ret = -EINVAL;
>> - goto cfg_err;
>> - }
>> -
>> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
>> - ret = hns3vf_dev_mtu_set(dev, mtu);
>> - if (ret)
>> - goto cfg_err;
>> - dev->data->mtu = mtu;
>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>> + hns3_err(hw, "maximum Rx packet length must be greater "
>> + "than %u and less than %u when jumbo frame enabled.",
>> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>> + (uint16_t)HNS3_MAX_FRAME_LEN);
>> + ret = -EINVAL;
>> + goto cfg_err;
>> }
> Please remove this check now, thanks!
ack
<...>
>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>> index ce8882a45883..f868e5d906c7 100644
>> --- a/examples/ip_reassembly/main.c
>> +++ b/examples/ip_reassembly/main.c
>> @@ -162,7 +162,7 @@ static struct lcore_queue_conf
>> lcore_queue_conf[RTE_MAX_LCORE];
>> static struct rte_eth_conf port_conf = {
>> .rxmode = {
>> .mq_mode = ETH_MQ_RX_RSS,
>> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
>> + .mtu = JUMBO_FRAME_MAX_SIZE,
>
> It feel likes that the replacement of max_rx_pkt_len with MTU is inappropriate.
>
> Because "max_rx_pkt_len " is the sum of "mtu" and "overhead_len".
You are right, it is not the same thing. I will update it to remove the overhead.
<...>
>> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t
>> nb_rx_q, uint16_t nb_tx_q,
>> }
>> /*
>> - * If jumbo frames are enabled, check that the maximum RX packet
>> - * length is supported by the configured device.
>> + * Check that the maximum RX packet length is supported by the
>> + * configured device.
>> */
>> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>> - dev_info.max_rx_pktlen);
>> - ret = -EINVAL;
>> - goto rollback;
>> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>> - (unsigned int)RTE_ETHER_MIN_LEN);
>> - ret = -EINVAL;
>> - goto rollback;
>> - }
>> + if (dev_conf->rxmode.mtu == 0)
>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> Here, it will cause a case that the user configuration is inconsistent with the
> configuration saved in the framework .
What is the framework you mentioned?
Previously 'max_rx_pkt_len' was mandatory when jumbo frame was configured, even
when the user doesn't really care about it, and it was causing additional
complexity in the configuration.
This check is required to use defaults when the application doesn't need a
specific value, and I believe this is a good usability improvement.
An application that cares about a specific value can set it explicitly and it
will be in sync with the application.
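
As a small sketch of the intended usage after this change (port and queue
numbers are placeholders, includes and error handling omitted):

struct rte_eth_conf conf;
uint16_t mtu;

memset(&conf, 0, sizeof(conf));          /* rxmode.mtu left at 0: use default */
rte_eth_dev_configure(port_id, 1, 1, &conf);
rte_eth_dev_get_mtu(port_id, &mtu);      /* expected: mtu == RTE_ETHER_MTU,
                                          * assuming the defaulting above */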
> Is it more reasonable to provide a prompt message?
Not sure about it. We are not changing a user-configured value, only using the
default value when the application doesn't set one, and that kind of log would
be printed by most applications, which may cause noise.
>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>> + ret = -EINVAL;
>> + goto rollback;
>> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>> + ret = -EINVAL;
>> + goto rollback;
>> + }
>
> Above "max_rx_pktlen < RTE_ETHER_MIN_LEN " case will be inconsistent with
> dev_set_mtu() API.
>
> The reasons are as follows:
>
> The value of RTE_ETHER_MIN_LEN is 64. If "overhead_len" is 26 caculated by
> eth_dev_get_overhead_len(), it means
>
> that dev->data->dev_conf.rxmode.mtu equal to 38 is reasonable.
>
> But, in dev_set_mtu() API, the check for mtu is:
>
> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
>
> It should be noted that dev_info.min_mtu is RTE_ETHER_MIN_MTU (68).
>
Agree on the inconsistency.
RTE_ETHER_MIN_MTU is 68, which is the minimum MTU required by IPv4.
RTE_ETHER_MIN_LEN is 64, which is the minimum Ethernet frame length.
Although we are talking about MTU, we are mainly concerned with the Ethernet
frame payload, not IPv4.
I suggest only using RTE_ETHER_MIN_LEN.
Since this inconsistency was already there before this patch, I will update it
in a separate patch instead of fixing it in this one.
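
To make the inconsistency concrete, a small worked example assuming a device
whose overhead_len works out to 26 bytes (values are illustrative):

/* Assumed device: overhead_len = 26, dev_info.min_mtu = RTE_ETHER_MIN_MTU (68) */
uint16_t mtu = 38;                    /* requested via rxmode.mtu              */
uint32_t max_rx_pktlen = mtu + 26;    /* = 64, i.e. exactly RTE_ETHER_MIN_LEN  */

/* rte_eth_dev_configure(): accepted, max_rx_pktlen >= RTE_ETHER_MIN_LEN (64) */
/* rte_eth_dev_set_mtu():   rejected, mtu (38) < dev_info.min_mtu (68)        */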
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
@ 2021-07-21 16:46 ` Ferruh Yigit
2021-07-22 1:31 ` Ajit Khaparde
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-21 16:46 UTC (permalink / raw)
To: Andrew Rybchenko, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
David Hunt, Harry van Haaren, Cristian Dumitrescu, Radu Nicolau,
Akhil Goyal, Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh, Thomas Monjalon
Cc: dev
On 7/13/2021 1:47 PM, Andrew Rybchenko wrote:
> On 7/9/21 8:29 PM, Ferruh Yigit wrote:
>> There is a confusion on setting max Rx packet length, this patch aims to
>> clarify it.
>>
>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
>> rte_eth_conf'.
>>
>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>> stored into '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related but they work in a disconnected way, they
>> store the set values in different variables which makes hard to figure
>> out which one to use, also two different related method is confusing for
>> the users.
>>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, but this may be different from device to
>> device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>>
>> As solution, both APIs gets MTU as parameter, and both saves the result
>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>> from jumbo frame.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>> request and it should be used only within configure function and result
>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>> both application and PMD uses MTU from this variable.
>>
>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>> default 'RTE_ETHER_MTU' value is used.
>>
>> As additional clarification, MTU is used to configure the device for
>> physical Rx/Tx limitation. Other related issue is size of the buffer to
>> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
>> And compares MTU against Rx buffer size to decide enabling scattered Rx
>> or not, if PMD supports it. If scattered Rx is not supported by device,
>> MTU bigger than Rx buffer size should fail.
>>
>
> Do I understand correctly that target is 21.11?
>
Yes, it is for 21.11, I should clarify it.
> Really huge work. Many thanks.
>
> See my notes below.
>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> [snip]
>
>> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
>> index 6ee530d4cdc9..5fcea74b4d43 100644
>> --- a/app/test-eventdev/test_pipeline_common.c
>> +++ b/app/test-eventdev/test_pipeline_common.c
>> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
>> return -EINVAL;
>> }
>>
>> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
>> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
>> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
>> + RTE_ETHER_CRC_LEN;
>
> Subtract requires overflow check. May max_pkt_size be 0 or just
> smaller that RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN?
>
There is a "opt->max_pkt_sz < RTE_ETHER_MIN_LEN" check above this, which ensures
it won't overflow.
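
Roughly, the guard and the derivation read as follows (a sketch, not the
verbatim application code):

/* max_pkt_sz has already been rejected when below RTE_ETHER_MIN_LEN (64),
 * and RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN is only 18, so the subtraction
 * below cannot underflow.
 */
if (opt->max_pkt_sz < RTE_ETHER_MIN_LEN)
	return -EINVAL;

port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;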
>> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
>> port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>>
>> t->internal_port = 1;
>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
>> index 8468018cf35d..8bdc042f6e8e 100644
>> --- a/app/test-pmd/cmdline.c
>> +++ b/app/test-pmd/cmdline.c
>> @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
>> __rte_unused void *data)
>> {
>> struct cmd_config_max_pkt_len_result *res = parsed_result;
>> - uint32_t max_rx_pkt_len_backup = 0;
>> - portid_t pid;
>> + portid_t port_id;
>> int ret;
>>
>> + if (strcmp(res->name, "max-pkt-len")) {
>> + printf("Unknown parameter\n");
>> + return;
>> + }
>> +
>> if (!all_ports_stopped()) {
>> printf("Please stop all ports first\n");
>> return;
>> }
>>
>> - RTE_ETH_FOREACH_DEV(pid) {
>> - struct rte_port *port = &ports[pid];
>> -
>> - if (!strcmp(res->name, "max-pkt-len")) {
>> - if (res->value < RTE_ETHER_MIN_LEN) {
>> - printf("max-pkt-len can not be less than %d\n",
>> - RTE_ETHER_MIN_LEN);
>> - return;
>> - }
>> - if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
>> - return;
>> -
>> - ret = eth_dev_info_get_print_err(pid, &port->dev_info);
>> - if (ret != 0) {
>> - printf("rte_eth_dev_info_get() failed for port %u\n",
>> - pid);
>> - return;
>> - }
>> + RTE_ETH_FOREACH_DEV(port_id) {
>> + struct rte_port *port = &ports[port_id];
>>
>> - max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
>> + if (res->value < RTE_ETHER_MIN_LEN) {
>> + printf("max-pkt-len can not be less than %d\n",
>
> fprintf() to stderr, please.
> Here and in a number of places below.
>
Overall I agree, but I am not sure about making this change in this patch. The
patch is already complex, so I am for keeping the logging part the same as
before; what do you think about updating all usages later in its own patch?
>> + RTE_ETHER_MIN_LEN);
>> + return;
>> + }
>>
>> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
>> - if (update_jumbo_frame_offload(pid) != 0)
>> - port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
>> - } else {
>> - printf("Unknown parameter\n");
>> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
>> + if (ret != 0) {
>> + printf("rte_eth_dev_info_get() failed for port %u\n",
>> + port_id);
>> return;
>> }
>> +
>> + update_jumbo_frame_offload(port_id, res->value);
>> }
>>
>> init_port_config();
>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>> index 04ae0feb5852..a87265d7638b 100644
>> --- a/app/test-pmd/config.c
>> +++ b/app/test-pmd/config.c
>
> [snip]
>
>> @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
>> return;
>> }
>> diag = rte_eth_dev_set_mtu(port_id, mtu);
>> - if (diag)
>> + if (diag) {
>> printf("Set MTU failed. diag=%d\n", diag);
>> - else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>> - /*
>> - * Ether overhead in driver is equal to the difference of
>> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
>> - * device supports jumbo frame.
>> - */
>> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
>> + return;
>> + }
>> +
>> + rte_port->dev_conf.rxmode.mtu = mtu;
>> +
>> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>> if (mtu > RTE_ETHER_MTU) {
>> rte_port->dev_conf.rxmode.offloads |=
>> DEV_RX_OFFLOAD_JUMBO_FRAME;
>> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
>> - mtu + eth_overhead;
>> } else
>
> I guess curly brackets should be removed now.
>
ack
>> rte_port->dev_conf.rxmode.offloads &=
>> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> [snip]
>
>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>> index 1cdd3cdd12b6..2c79cae05664 100644
>> --- a/app/test-pmd/testpmd.c
>> +++ b/app/test-pmd/testpmd.c
>
> [snip]
>
>> @@ -1465,7 +1473,7 @@ init_config(void)
>> rte_exit(EXIT_FAILURE,
>> "rte_eth_dev_info_get() failed\n");
>>
>> - ret = update_jumbo_frame_offload(pid);
>> + ret = update_jumbo_frame_offload(pid, 0);
>> if (ret != 0)
>> printf("Updating jumbo frame offload failed for port %u\n",
>> pid);
>> @@ -1512,14 +1520,19 @@ init_config(void)
>> */
>> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
>> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
>> - data_size = rx_mode.max_rx_pkt_len /
>> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
>> + uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
>> + uint16_t mtu;
>>
>> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
>> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
>> + data_size = mtu + eth_overhead /
>> + port->dev_info.rx_desc_lim.nb_mtu_seg_max;
>> +
>> + if ((data_size + RTE_PKTMBUF_HEADROOM) >
>
> Unnecessary parenthesis.
>
This part already changed in upstream.
>> mbuf_data_size[0]) {
>> - mbuf_data_size[0] = data_size +
>> - RTE_PKTMBUF_HEADROOM;
>> - warning = 1;
>> + mbuf_data_size[0] = data_size +
>> + RTE_PKTMBUF_HEADROOM;
>> + warning = 1;
>> + }
>> }
>> }
>> }
>
> [snip]
>
>> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
>> index c515de3bf71d..0a8d29277aeb 100644
>> --- a/drivers/net/tap/rte_eth_tap.c
>> +++ b/drivers/net/tap/rte_eth_tap.c
>> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>> {
>> struct pmd_internals *pmd = dev->data->dev_private;
>> struct ifreq ifr = { .ifr_mtu = mtu };
>> - int err = 0;
>>
>> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
>> - if (!err)
>> - dev->data->mtu = mtu;
>> -
>> - return err;
>> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
>
> The cleanup could be done separately before the patch, since
> it just makes the long patch longer and is in fact unrelated,
> given that the assignment after the callback is already done.
>
Yes, and I agree it can be updated separately, but I think the change is
related, and I am not sure about having too many commits. If you have a strong
opinion I can update it.
>> }
>>
>> static int
>
> [snip]
>
>> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
>> index 77a6a18d1914..f97287ce2243 100644
>> --- a/examples/ip_fragmentation/main.c
>> +++ b/examples/ip_fragmentation/main.c
>> @@ -146,7 +146,7 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>>
>> static struct rte_eth_conf port_conf = {
>> .rxmode = {
>> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
>> + .mtu = JUMBO_FRAME_MAX_SIZE,
>
> Before the patch JUMBO_FRAME_MAX_SIZE included overhead, but
> after the patch it is used as if it does not include overhead.
>
> There are a number of similar cases in other apps.
>
ack, Huisong also highlighted it, I will update.
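For illustration, a possible shape after the update is sketched below
(JUMBO_FRAME_MAX_SIZE is the app's own macro; the value shown here is an
assumption, and the fixed Ethernet overhead is used for simplicity):
#include <rte_ethdev.h>
#include <rte_ether.h>

#define JUMBO_FRAME_MAX_SIZE 0x2600 /* assumed; use the app's definition */

/* Keep the same maximum frame size while moving to the new 'mtu' field. */
static struct rte_eth_conf port_conf = {
    .rxmode = {
        .mtu = JUMBO_FRAME_MAX_SIZE -
               (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN),
    },
};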
>> .split_hdr_size = 0,
>> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
>> DEV_RX_OFFLOAD_SCATTER |
>
> [snip]
>
>> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
>> index 16bcffe356bc..8628db22f56b 100644
>> --- a/examples/ip_pipeline/link.c
>> +++ b/examples/ip_pipeline/link.c
>> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
>> .link_speeds = 0,
>> .rxmode = {
>> .mq_mode = ETH_MQ_RX_NONE,
>> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
>> + .mtu = 9000, /* Jumbo frame MTU */
>
> Strictly speaking 9000 included overhead before the patch and
> does not include overhead after the patch.
>
> There are a number of similar cases in other apps.
>
ack
>> .split_hdr_size = 0, /* Header split buffer size */
>> },
>> .rx_adv_conf = {
>
> [snip]
>
>> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
>> index a1f457b564b6..913037d5f835 100644
>> --- a/examples/l3fwd-acl/main.c
>> +++ b/examples/l3fwd-acl/main.c
>
> [snip]
>
>> @@ -1833,12 +1832,12 @@ parse_args(int argc, char **argv)
>> print_usage(prgname);
>> return -1;
>> }
>> - port_conf.rxmode.max_rx_pkt_len = ret;
>> + port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
>> + RTE_ETHER_CRC_LEN);
>> }
>> - printf("set jumbo frame max packet length "
>> - "to %u\n",
>> - (unsigned int)
>> - port_conf.rxmode.max_rx_pkt_len);
>> + printf("set jumbo frame max packet length to %u\n",
>> + (unsigned int)port_conf.rxmode.mtu +
>> + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
>
>
> I think that overhead should be obtained from dev_info with
> a fallback to the value used above.
>
Right, but since at this stage it is hard for the application to get the
overhead, I wonder if we should change the applications to take the MTU as a
parameter.
Overall this work also makes it harder for an application to work with the
frame size, since it adds a 'rte_eth_dev_info_get()' API call requirement. I am
not sure how big a problem this is, or whether we can provide some help to
applications; any suggestion is welcome.
Specific to the sample app above, it is possible to record the user parameter
in a temporary variable and set 'port_conf.rxmode.max_rx_pkt_len' once the app
has the dev_info; I will update it. A rough sketch of that approach is below.
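For illustration only, a minimal sketch of that approach. The function and
variable names are hypothetical, and it sets the new 'rxmode.mtu' field from
this patch (since 'max_rx_pkt_len' is being removed); the overhead fallback
mirrors the eth_dev_get_overhead_len() logic added in the ethdev hunk:
#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Parsed "--max-pkt-len" frame size, recorded until dev_info is available. */
static uint32_t max_pkt_len_arg;

static int
apply_max_pkt_len(uint16_t port_id, struct rte_eth_conf *port_conf)
{
    struct rte_eth_dev_info dev_info;
    uint32_t overhead;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    /* Device-reported overhead, with the standard Ethernet fallback. */
    if (dev_info.max_mtu != UINT16_MAX &&
        dev_info.max_rx_pktlen > dev_info.max_mtu)
        overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
    else
        overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

    if (max_pkt_len_arg <= overhead)
        return -EINVAL;

    /* Convert the frame-size argument to the MTU the device is configured with. */
    port_conf->rxmode.mtu = max_pkt_len_arg - overhead;
    return 0;
}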
> There are many similar cases in other apps.
>
ack
> [snip]
>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c607eabb5b0c..3451125639f9 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
>>
>> static inline int
>> eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
>> - uint32_t max_rx_pkt_len, uint32_t dev_info_size)
>> + uint32_t max_rx_pktlen, uint32_t dev_info_size)
>> {
>> int ret = 0;
>>
>> if (dev_info_size == 0) {
>> - if (config_size != max_rx_pkt_len) {
>> + if (config_size != max_rx_pktlen) {
>> RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
>> " %u != %u is not allowed\n",
>> - port_id, config_size, max_rx_pkt_len);
>> + port_id, config_size, max_rx_pktlen);
>
> This patch looks a bit unrelated and makes the long patch
> even longer. Maybe it is better to do the cleanup
> first (before the patch).
>
I also had the same doubt, but it is somewhat related.
Previously the variable names in the two structs were slightly different:
dev_info.max_rx_pktlen => max Rx packet length device capability
rxmode.max_rx_pkt_len => max Rx packet length configuration
This slight difference was bothering me :). Since we are removing
'rxmode.max_rx_pkt_len' now, I thought it is a good opportunity to unify the
variable name to 'max_rx_pktlen'. After this patch only the avp driver has
'max_rx_pkt_len' usage (because of usage in its interface).
I am not sure if the above change is worth its own patch; another option is to
discard this change. If you have a strong opinion on it I can drop the changes.
>> ret = -EINVAL;
>> }
>> } else if (config_size > dev_info_size) {
>> @@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
>> return ret;
>> }
>>
>> +static uint16_t
>> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
>> +{
>> + uint16_t overhead_len;
>> +
>> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
>> + overhead_len = max_rx_pktlen - max_mtu;
>> + else
>> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>> +
>> + return overhead_len;
>> +}
>> +
>> int
>> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> const struct rte_eth_conf *dev_conf)
>> @@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> struct rte_eth_dev *dev;
>> struct rte_eth_dev_info dev_info;
>> struct rte_eth_conf orig_conf;
>> + uint32_t max_rx_pktlen;
>> uint16_t overhead_len;
>> int diag;
>> int ret;
>> @@ -1375,11 +1389,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> goto rollback;
>>
>> /* Get the real Ethernet overhead length */
>> - if (dev_info.max_mtu != UINT16_MAX &&
>> - dev_info.max_rx_pktlen > dev_info.max_mtu)
>> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
>> - else
>> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
>> + dev_info.max_mtu);
>>
>> /* If number of queues specified by application for both Rx and Tx is
>> * zero, use driver preferred values. This cannot be done individually
>> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> }
>>
>> /*
>> - * If jumbo frames are enabled, check that the maximum RX packet
>> - * length is supported by the configured device.
>> + * Check that the maximum RX packet length is supported by the
>> + * configured device.
>> */
>> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>> - dev_info.max_rx_pktlen);
>> - ret = -EINVAL;
>> - goto rollback;
>> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>> - (unsigned int)RTE_ETHER_MIN_LEN);
>> - ret = -EINVAL;
>> - goto rollback;
>> - }
>> + if (dev_conf->rxmode.mtu == 0)
>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>> + ret = -EINVAL;
>> + goto rollback;
>> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>> + ret = -EINVAL;
>> + goto rollback;
>> + }
>>
>> - /* Scale the MTU size to adapt max_rx_pkt_len */
>> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
>> - overhead_len;
>> - } else {
>> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
>> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
>> - pktlen > RTE_ETHER_MTU + overhead_len)
>> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
>> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
>> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
>> /* Use default value */
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
>> - RTE_ETHER_MTU + overhead_len;
>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>
> I don't understand it. It would be good to add comments to
> explain logic above.
>
This part will be updated in the next patches, and I will extract the checks
into a common function; can you please check the final output of the next patch
and see if it makes sense? A rough sketch of the intended logic is below.
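For illustration only, a hypothetical helper showing the intent of the hunk
above; the actual code in the next version may differ:
#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

static int
resolve_config_mtu(const struct rte_eth_dev_info *dev_info,
                   uint16_t overhead_len, uint32_t requested_mtu,
                   uint32_t *mtu)
{
    uint32_t frame_len;

    /* 0 means the application does not care: use the Ethernet default. */
    if (requested_mtu == 0)
        requested_mtu = RTE_ETHER_MTU;

    frame_len = requested_mtu + overhead_len;
    if (frame_len > dev_info->max_rx_pktlen)
        return -EINVAL; /* above the device capability */
    if (frame_len < RTE_ETHER_MIN_LEN)
        return -EINVAL; /* below a minimal Ethernet frame */

    *mtu = requested_mtu;
    return 0;
}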
>> }
>>
>> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>> +
>> /*
>> * If LRO is enabled, check that the maximum aggregated packet
>> * size is supported by the configured device.
>> */
>> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>> if (dev_conf->rxmode.max_lro_pkt_size == 0)
>> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
>> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
>> ret = eth_dev_check_lro_pkt_size(port_id,
>> dev->data->dev_conf.rxmode.max_lro_pkt_size,
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
>> + max_rx_pktlen,
>> dev_info.max_lro_pkt_size);
>> if (ret != 0)
>> goto rollback;
>
> [snip]
>
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index faf3bd901d75..9f288f98329c 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
>> struct rte_eth_rxmode {
>> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
>> enum rte_eth_rx_mq_mode mq_mode;
>> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
>> + uint32_t mtu; /**< Requested MTU. */
>
> Maximum Transmit Unit looks a bit confusing in Rx mode
> structure.
>
True, but I think it is already used as a concept for Rx, so I believe the
intention will be clear enough. Do you think it would be clearer if we pick a
DPDK-specific variable name?
>> /** Maximum allowed size of LRO aggregated packet. */
>> uint32_t max_lro_pkt_size;
>> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
>
> [snip]
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-21 16:46 ` Ferruh Yigit
@ 2021-07-22 1:31 ` Ajit Khaparde
2021-07-22 10:27 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Ajit Khaparde @ 2021-07-22 1:31 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Andrew Rybchenko, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, David Hunt, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon, dpdk-dev
[-- Attachment #1: Type: text/plain, Size: 1054 bytes --]
> > [snip]
> >
> >> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> >> index faf3bd901d75..9f288f98329c 100644
> >> --- a/lib/ethdev/rte_ethdev.h
> >> +++ b/lib/ethdev/rte_ethdev.h
> >> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
> >> struct rte_eth_rxmode {
> >> /** The multi-queue packet distribution mode to be used, e.g. RSS.
> */
> >> enum rte_eth_rx_mq_mode mq_mode;
> >> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> >> + uint32_t mtu; /**< Requested MTU. */
> >
> > Maximum Transmit Unit looks a bit confusing in Rx mode
> > structure.
> >
>
> True, but I think it is already used for Rx already as concept, I believe
> the
> intention will be clear enough. Do you think will be more clear if we pick
> a
> DPDK specific variable name?
>
Maybe use MRU - Max Receive Unit.
>
> >> /** Maximum allowed size of LRO aggregated packet. */
> >> uint32_t max_lro_pkt_size;
> >> uint16_t split_hdr_size; /**< hdr buf size (header_split
> enabled).*/
> >
> > [snip]
> >
>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-21 15:29 ` Ferruh Yigit
@ 2021-07-22 7:21 ` Huisong Li
2021-07-22 10:12 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Huisong Li @ 2021-07-22 7:21 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev
On 2021/7/21 23:29, Ferruh Yigit wrote:
> On 7/19/2021 4:35 AM, Huisong Li wrote:
>> Hi, Ferruh
>>
> Hi Huisong,
>
> Thanks for the review.
>
>> On 2021/7/10 1:29, Ferruh Yigit wrote:
>>> There is a confusion on setting max Rx packet length, this patch aims to
>>> clarify it.
>>>
>>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>>> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
>>> rte_eth_conf'.
>>>
>>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>>> stored into '(struct rte_eth_dev)->data->mtu'.
>>>
>>> These two APIs are related but they work in a disconnected way, they
>>> store the set values in different variables which makes hard to figure
>>> out which one to use, also two different related method is confusing for
>>> the users.
>>>
>>> Other issues causing confusion is:
>>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>>> Ethernet frame overhead, but this may be different from device to
>>> device based on what device supports, like VLAN and QinQ.
>>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>>> which adds additional confusion and some APIs and PMDs already
>>> discards this documented behavior.
>>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>>> field, this adds configuration complexity for application.
>>>
>>> As solution, both APIs gets MTU as parameter, and both saves the result
>>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>>> from jumbo frame.
>>>
>>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>>> request and it should be used only within configure function and result
>>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>>> both application and PMD uses MTU from this variable.
>>>
>>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>>> default 'RTE_ETHER_MTU' value is used.
>>>
>>> As additional clarification, MTU is used to configure the device for
>>> physical Rx/Tx limitation. Other related issue is size of the buffer to
>>> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
>>> And compares MTU against Rx buffer size to decide enabling scattered Rx
>>> or not, if PMD supports it. If scattered Rx is not supported by device,
>>> MTU bigger than Rx buffer size should fail.
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> <...>
>
>>> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
>>> index e51512560e15..8bccdeddb2f7 100644
>>> --- a/drivers/net/hns3/hns3_ethdev.c
>>> +++ b/drivers/net/hns3/hns3_ethdev.c
>>> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct
>>> rte_eth_conf *conf)
>>> {
>>> struct hns3_adapter *hns = dev->data->dev_private;
>>> struct hns3_hw *hw = &hns->hw;
>>> - uint32_t max_rx_pkt_len;
>>> - uint16_t mtu;
>>> - int ret;
>>> -
>>> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
>>> - return 0;
>>> + uint32_t max_rx_pktlen;
>>> - /*
>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>> - * according to the maximum RX packet length.
>>> - */
>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>> hns3_err(hw, "maximum Rx packet length must be greater than %u "
>>> "and no more than %u when jumbo frame enabled.",
>>> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>> The preceding check for the maximum frame length was based on the scenario where
>> jumbo frames are enabled.
>>
>> Since there is no offload of jumbo frames in this patchset, the maximum frame
>> length does not need to be checked and only ensure conf->rxmode.mtu is valid.
>>
>> These should be guaranteed by dev_configure() in the framework .
>>
> Got it, agree that 'HNS3_DEFAULT_FRAME_LEN' check is now wrong, and as you said
> these checks are becoming redundant, so I will remove them.
>
> In that case 'hns3_refresh_mtu()' becomes just wrapper to 'hns3_dev_mtu_set()',
> I will remove function too.
>
> <...>
ok
>
>>> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
>>> b/drivers/net/hns3/hns3_ethdev_vf.c
>>> index e582503f529b..ca839fa55fa0 100644
>>> --- a/drivers/net/hns3/hns3_ethdev_vf.c
>>> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
>>> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>> uint16_t nb_rx_q = dev->data->nb_rx_queues;
>>> uint16_t nb_tx_q = dev->data->nb_tx_queues;
>>> struct rte_eth_rss_conf rss_conf;
>>> - uint32_t max_rx_pkt_len;
>>> - uint16_t mtu;
>>> + uint32_t max_rx_pktlen;
>>> bool gro_en;
>>> int ret;
>>> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>> goto cfg_err;
>>> }
>>> - /*
>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>> - * according to the maximum RX packet length.
>>> - */
>>> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>> - hns3_err(hw, "maximum Rx packet length must be greater "
>>> - "than %u and less than %u when jumbo frame enabled.",
>>> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>> - (uint16_t)HNS3_MAX_FRAME_LEN);
>>> - ret = -EINVAL;
>>> - goto cfg_err;
>>> - }
>>> -
>>> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
>>> - ret = hns3vf_dev_mtu_set(dev, mtu);
>>> - if (ret)
>>> - goto cfg_err;
>>> - dev->data->mtu = mtu;
>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>> + hns3_err(hw, "maximum Rx packet length must be greater "
>>> + "than %u and less than %u when jumbo frame enabled.",
>>> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>> + (uint16_t)HNS3_MAX_FRAME_LEN);
>>> + ret = -EINVAL;
>>> + goto cfg_err;
>>> }
>> Please remove this check now, thanks!
> ack
>
> <...>
>
>>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>>> index ce8882a45883..f868e5d906c7 100644
>>> --- a/examples/ip_reassembly/main.c
>>> +++ b/examples/ip_reassembly/main.c
>>> @@ -162,7 +162,7 @@ static struct lcore_queue_conf
>>> lcore_queue_conf[RTE_MAX_LCORE];
>>> static struct rte_eth_conf port_conf = {
>>> .rxmode = {
>>> .mq_mode = ETH_MQ_RX_RSS,
>>> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
>>> + .mtu = JUMBO_FRAME_MAX_SIZE,
>> It feel likes that the replacement of max_rx_pkt_len with MTU is inappropriate.
>>
>> Because "max_rx_pkt_len " is the sum of "mtu" and "overhead_len".
> You are right, it is not same thing. I will update it to remove overhead.
>
> <...>
>
>>> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t
>>> nb_rx_q, uint16_t nb_tx_q,
>>> }
>>> /*
>>> - * If jumbo frames are enabled, check that the maximum RX packet
>>> - * length is supported by the configured device.
>>> + * Check that the maximum RX packet length is supported by the
>>> + * configured device.
>>> */
>>> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
>>> - RTE_ETHDEV_LOG(ERR,
>>> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>> - dev_info.max_rx_pktlen);
>>> - ret = -EINVAL;
>>> - goto rollback;
>>> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
>>> - RTE_ETHDEV_LOG(ERR,
>>> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>> - (unsigned int)RTE_ETHER_MIN_LEN);
>>> - ret = -EINVAL;
>>> - goto rollback;
>>> - }
>>> + if (dev_conf->rxmode.mtu == 0)
>>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>> Here, it will cause a case that the user configuration is inconsistent with the
>> configuration saved in the framework .
> What is the framework you mentioned?
>
> Previously 'max_rx_pkt_len' was mandatory when jumbo frame is configured even
> user doesn't really case about it and it was causing additional complexity in
> the configuration.
> This check is required to use defaults when application doesn't need a specific
> value, I believe this is a good usability improvement.
> Application who cares about a specific value can set it explicitly and it will
> be in sync with application.
>
>> Is it more reasonable to provide a prompt message?
> Not sure about it. We are not changing a user configured value, but using
> default value when application doesn't set it, and that kind of log will be
> printed by most of the applications, this may cause noise.
This is a good reason.
>
>>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>>> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>>> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>>> + ret = -EINVAL;
>>> + goto rollback;
>>> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>>> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>>> + ret = -EINVAL;
>>> + goto rollback;
>>> + }
>> Above "max_rx_pktlen < RTE_ETHER_MIN_LEN " case will be inconsistent with
>> dev_set_mtu() API.
>>
>> The reasons are as follows:
>>
>> The value of RTE_ETHER_MIN_LEN is 64. If "overhead_len" is 26 caculated by
>> eth_dev_get_overhead_len(), it means
>>
>> that dev->data->dev_conf.rxmode.mtu equal to 38 is reasonable.
>>
>> But, in dev_set_mtu() API, the check for mtu is:
>>
>> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>>
>> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>> return -EINVAL;
>>
>> It should be noted that dev_info.min_mtu is RTE_ETHER_MIN_MTU (68).
>>
> Agree on the inconsistency.
>
> RTE_ETHER_MIN_MTU is 68, that is min MTU for IPv4
> RTE_ETHER_MIN_LEN is 64, and min MTU for Ethernet frame
>
> Although we are talking about MTU, we are mainly concerned about Ethernet frame
> payload, not IPv4.
>
> I suggest only using RTE_ETHER_MIN_LEN.
> Since this inconsistency was already there before this patch, I will update it
> in seperate patch instead of fixing in this one.
> .
Got it. Since the MTU value depends on the type of transmission link, why do
we define a minimum MTU?
True, we don't have to break the current restrictions in this patch. But it is
an indirect check on the MTU. Now that "mtu" in rxmode is the entry point for
configuring the MTU for the driver, I prefer to keep the same MTU check in the
ethdev layer. If there's a better way to handle it, perhaps it would be more
appropriate to do it in this patchset.
I'd like to know how you're going to adjust it.
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 7:21 ` Huisong Li
@ 2021-07-22 10:12 ` Ferruh Yigit
2021-07-22 10:15 ` Andrew Rybchenko
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 10:12 UTC (permalink / raw)
To: Huisong Li; +Cc: dev
On 7/22/2021 8:21 AM, Huisong Li wrote:
>
> On 2021/7/21 23:29, Ferruh Yigit wrote:
>> On 7/19/2021 4:35 AM, Huisong Li wrote:
>>> Hi, Ferruh
>>>
>> Hi Huisong,
>>
>> Thanks for the review.
>>
>> On 2021/7/10 1:29, Ferruh Yigit wrote:
>>>> There is a confusion on setting max Rx packet length, this patch aims to
>>>> clarify it.
>>>>
>>>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>>>> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
>>>> rte_eth_conf'.
>>>>
>>>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>>>> stored into '(struct rte_eth_dev)->data->mtu'.
>>>>
>>>> These two APIs are related but they work in a disconnected way, they
>>>> store the set values in different variables which makes hard to figure
>>>> out which one to use, also two different related method is confusing for
>>>> the users.
>>>>
>>>> Other issues causing confusion is:
>>>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>>>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>>>> Ethernet frame overhead, but this may be different from device to
>>>> device based on what device supports, like VLAN and QinQ.
>>>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>>>> which adds additional confusion and some APIs and PMDs already
>>>> discards this documented behavior.
>>>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>>>> field, this adds configuration complexity for application.
>>>>
>>>> As solution, both APIs gets MTU as parameter, and both saves the result
>>>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>>>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>>>> from jumbo frame.
>>>>
>>>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>>>> request and it should be used only within configure function and result
>>>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>>>> both application and PMD uses MTU from this variable.
>>>>
>>>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>>>> default 'RTE_ETHER_MTU' value is used.
>>>>
>>>> As additional clarification, MTU is used to configure the device for
>>>> physical Rx/Tx limitation. Other related issue is size of the buffer to
>>>> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
>>>> And compares MTU against Rx buffer size to decide enabling scattered Rx
>>>> or not, if PMD supports it. If scattered Rx is not supported by device,
>>>> MTU bigger than Rx buffer size should fail.
>>>>
>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> <...>
>>
>>>> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
>>>> index e51512560e15..8bccdeddb2f7 100644
>>>> --- a/drivers/net/hns3/hns3_ethdev.c
>>>> +++ b/drivers/net/hns3/hns3_ethdev.c
>>>> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct
>>>> rte_eth_conf *conf)
>>>> {
>>>> struct hns3_adapter *hns = dev->data->dev_private;
>>>> struct hns3_hw *hw = &hns->hw;
>>>> - uint32_t max_rx_pkt_len;
>>>> - uint16_t mtu;
>>>> - int ret;
>>>> -
>>>> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
>>>> - return 0;
>>>> + uint32_t max_rx_pktlen;
>>>> - /*
>>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>>> - * according to the maximum RX packet length.
>>>> - */
>>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>>> hns3_err(hw, "maximum Rx packet length must be greater than %u "
>>>> "and no more than %u when jumbo frame enabled.",
>>>> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>> The preceding check for the maximum frame length was based on the scenario where
>>> jumbo frames are enabled.
>>>
>>> Since there is no offload of jumbo frames in this patchset, the maximum frame
>>> length does not need to be checked and only ensure conf->rxmode.mtu is valid.
>>>
>>> These should be guaranteed by dev_configure() in the framework .
>>>
>> Got it, agree that 'HNS3_DEFAULT_FRAME_LEN' check is now wrong, and as you said
>> these checks are becoming redundant, so I will remove them.
>>
>> In that case 'hns3_refresh_mtu()' becomes just wrapper to 'hns3_dev_mtu_set()',
>> I will remove function too.
>>
>> <...>
> ok
>>
>>>> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
>>>> b/drivers/net/hns3/hns3_ethdev_vf.c
>>>> index e582503f529b..ca839fa55fa0 100644
>>>> --- a/drivers/net/hns3/hns3_ethdev_vf.c
>>>> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
>>>> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>>> uint16_t nb_rx_q = dev->data->nb_rx_queues;
>>>> uint16_t nb_tx_q = dev->data->nb_tx_queues;
>>>> struct rte_eth_rss_conf rss_conf;
>>>> - uint32_t max_rx_pkt_len;
>>>> - uint16_t mtu;
>>>> + uint32_t max_rx_pktlen;
>>>> bool gro_en;
>>>> int ret;
>>>> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>>> goto cfg_err;
>>>> }
>>>> - /*
>>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>>> - * according to the maximum RX packet length.
>>>> - */
>>>> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>>> - hns3_err(hw, "maximum Rx packet length must be greater "
>>>> - "than %u and less than %u when jumbo frame enabled.",
>>>> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>>> - (uint16_t)HNS3_MAX_FRAME_LEN);
>>>> - ret = -EINVAL;
>>>> - goto cfg_err;
>>>> - }
>>>> -
>>>> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
>>>> - ret = hns3vf_dev_mtu_set(dev, mtu);
>>>> - if (ret)
>>>> - goto cfg_err;
>>>> - dev->data->mtu = mtu;
>>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>>> + hns3_err(hw, "maximum Rx packet length must be greater "
>>>> + "than %u and less than %u when jumbo frame enabled.",
>>>> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>>> + (uint16_t)HNS3_MAX_FRAME_LEN);
>>>> + ret = -EINVAL;
>>>> + goto cfg_err;
>>>> }
>>> Please remove this check now, thanks!
>> ack
>>
>> <...>
>>
>>>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>>>> index ce8882a45883..f868e5d906c7 100644
>>>> --- a/examples/ip_reassembly/main.c
>>>> +++ b/examples/ip_reassembly/main.c
>>>> @@ -162,7 +162,7 @@ static struct lcore_queue_conf
>>>> lcore_queue_conf[RTE_MAX_LCORE];
>>>> static struct rte_eth_conf port_conf = {
>>>> .rxmode = {
>>>> .mq_mode = ETH_MQ_RX_RSS,
>>>> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
>>>> + .mtu = JUMBO_FRAME_MAX_SIZE,
>>> It feel likes that the replacement of max_rx_pkt_len with MTU is inappropriate.
>>>
>>> Because "max_rx_pkt_len " is the sum of "mtu" and "overhead_len".
>> You are right, it is not same thing. I will update it to remove overhead.
>>
>> <...>
>>
>>>> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t
>>>> nb_rx_q, uint16_t nb_tx_q,
>>>> }
>>>> /*
>>>> - * If jumbo frames are enabled, check that the maximum RX packet
>>>> - * length is supported by the configured device.
>>>> + * Check that the maximum RX packet length is supported by the
>>>> + * configured device.
>>>> */
>>>> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>>> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
>>>> - RTE_ETHDEV_LOG(ERR,
>>>> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
>>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>>> - dev_info.max_rx_pktlen);
>>>> - ret = -EINVAL;
>>>> - goto rollback;
>>>> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
>>>> - RTE_ETHDEV_LOG(ERR,
>>>> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
>>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>>> - (unsigned int)RTE_ETHER_MIN_LEN);
>>>> - ret = -EINVAL;
>>>> - goto rollback;
>>>> - }
>>>> + if (dev_conf->rxmode.mtu == 0)
>>>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>>> Here, it will cause a case that the user configuration is inconsistent with the
>>> configuration saved in the framework .
>> What is the framework you mentioned?
>>
>> Previously 'max_rx_pkt_len' was mandatory when jumbo frame is configured even
>> user doesn't really case about it and it was causing additional complexity in
>> the configuration.
>> This check is required to use defaults when application doesn't need a specific
>> value, I believe this is a good usability improvement.
>> Application who cares about a specific value can set it explicitly and it will
>> be in sync with application.
>>
>>> Is it more reasonable to provide a prompt message?
>> Not sure about it. We are not changing a user configured value, but using
>> default value when application doesn't set it, and that kind of log will be
>> printed by most of the applications, this may cause noise.
> This is a good reason.
>>
>>>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>>>> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>>>> + RTE_ETHDEV_LOG(ERR,
>>>> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>>>> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>>>> + ret = -EINVAL;
>>>> + goto rollback;
>>>> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>>>> + RTE_ETHDEV_LOG(ERR,
>>>> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>>>> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>>>> + ret = -EINVAL;
>>>> + goto rollback;
>>>> + }
>>> Above "max_rx_pktlen < RTE_ETHER_MIN_LEN " case will be inconsistent with
>>> dev_set_mtu() API.
>>>
>>> The reasons are as follows:
>>>
>>> The value of RTE_ETHER_MIN_LEN is 64. If "overhead_len" is 26 caculated by
>>> eth_dev_get_overhead_len(), it means
>>>
>>> that dev->data->dev_conf.rxmode.mtu equal to 38 is reasonable.
>>>
>>> But, in dev_set_mtu() API, the check for mtu is:
>>>
>>> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>>> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>>> return -EINVAL;
>>>
>>> It should be noted that dev_info.min_mtu is RTE_ETHER_MIN_MTU (68).
>>>
>> Agree on the inconsistency.
>>
>> RTE_ETHER_MIN_MTU is 68, that is min MTU for IPv4
>> RTE_ETHER_MIN_LEN is 64, and min MTU for Ethernet frame
>>
>> Although we are talking about MTU, we are mainly concerned about Ethernet frame
>> payload, not IPv4.
>>
>> I suggest only using RTE_ETHER_MIN_LEN.
>> Since this inconsistency was already there before this patch, I will update it
>> in seperate patch instead of fixing in this one.
>> .
>
> Got it. Since the MTU value depends on the type of transmission link. Why does
> we define
>
> a minimum MTU?
>
I don't think we care about the type of transmission at this level; I assume we
define a min MTU mainly for the HW limitation and configuration. That is why it
makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
> True, we don't have to break the current restrictions in this patch. But it is
> an indirect
>
> check on the MTU. Now that "mtu" in Rxmode is an entry for configuring MTU for
> driver,
>
> I prefer to keep the same MTU check in ethdev layer. If there's a better way to
> handle it,
>
> perhaps it would be more appropriate to do it in this patchset.
>
> I'd like to know how you're going to adjust。
>
I am planning to move the MTU checks into a common function and use it for
both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' in a separate patch.
Please check v2; a rough sketch of the idea is below.
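For illustration, a hypothetical sketch of such a shared check (the real v2
helper may differ in name and in the exact bounds it applies):
#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

static int
validate_mtu(const struct rte_eth_dev_info *dev_info, uint16_t mtu)
{
    /* Both the configure and set_mtu paths would apply the same
     * device-reported bounds, removing the inconsistency discussed above. */
    if (mtu < dev_info->min_mtu || mtu > dev_info->max_mtu)
        return -EINVAL;
    return 0;
}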
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 10:12 ` Ferruh Yigit
@ 2021-07-22 10:15 ` Andrew Rybchenko
2021-07-22 14:43 ` Stephen Hemminger
0 siblings, 1 reply; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-22 10:15 UTC (permalink / raw)
To: Ferruh Yigit, Huisong Li; +Cc: dev
On 7/22/21 1:12 PM, Ferruh Yigit wrote:
> On 7/22/2021 8:21 AM, Huisong Li wrote:
>>
>> On 2021/7/21 23:29, Ferruh Yigit wrote:
>>> On 7/19/2021 4:35 AM, Huisong Li wrote:
>>>> Hi, Ferruh
>>>>
>>> Hi Huisong,
>>>
>>> Thanks for the review.
>>>
>>> On 2021/7/10 1:29, Ferruh Yigit wrote:
>>>>> There is a confusion on setting max Rx packet length, this patch aims to
>>>>> clarify it.
>>>>>
>>>>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>>>>> 'uint32_t max_rx_pkt_len' filed of the config struct 'struct
>>>>> rte_eth_conf'.
>>>>>
>>>>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>>>>> stored into '(struct rte_eth_dev)->data->mtu'.
>>>>>
>>>>> These two APIs are related but they work in a disconnected way, they
>>>>> store the set values in different variables which makes hard to figure
>>>>> out which one to use, also two different related method is confusing for
>>>>> the users.
>>>>>
>>>>> Other issues causing confusion is:
>>>>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>>>>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>>>>> Ethernet frame overhead, but this may be different from device to
>>>>> device based on what device supports, like VLAN and QinQ.
>>>>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>>>>> which adds additional confusion and some APIs and PMDs already
>>>>> discards this documented behavior.
>>>>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>>>>> field, this adds configuration complexity for application.
>>>>>
>>>>> As solution, both APIs gets MTU as parameter, and both saves the result
>>>>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>>>>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>>>>> from jumbo frame.
>>>>>
>>>>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>>>>> request and it should be used only within configure function and result
>>>>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>>>>> both application and PMD uses MTU from this variable.
>>>>>
>>>>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>>>>> default 'RTE_ETHER_MTU' value is used.
>>>>>
>>>>> As additional clarification, MTU is used to configure the device for
>>>>> physical Rx/Tx limitation. Other related issue is size of the buffer to
>>>>> store Rx packets, many PMDs use mbuf data buffer size as Rx buffer size.
>>>>> And compares MTU against Rx buffer size to decide enabling scattered Rx
>>>>> or not, if PMD supports it. If scattered Rx is not supported by device,
>>>>> MTU bigger than Rx buffer size should fail.
>>>>>
>>>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> <...>
>>>
>>>>> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
>>>>> index e51512560e15..8bccdeddb2f7 100644
>>>>> --- a/drivers/net/hns3/hns3_ethdev.c
>>>>> +++ b/drivers/net/hns3/hns3_ethdev.c
>>>>> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct
>>>>> rte_eth_conf *conf)
>>>>> {
>>>>> struct hns3_adapter *hns = dev->data->dev_private;
>>>>> struct hns3_hw *hw = &hns->hw;
>>>>> - uint32_t max_rx_pkt_len;
>>>>> - uint16_t mtu;
>>>>> - int ret;
>>>>> -
>>>>> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
>>>>> - return 0;
>>>>> + uint32_t max_rx_pktlen;
>>>>> - /*
>>>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>>>> - * according to the maximum RX packet length.
>>>>> - */
>>>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>>>> hns3_err(hw, "maximum Rx packet length must be greater than %u "
>>>>> "and no more than %u when jumbo frame enabled.",
>>>>> (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>>> The preceding check for the maximum frame length was based on the scenario where
>>>> jumbo frames are enabled.
>>>>
>>>> Since there is no offload of jumbo frames in this patchset, the maximum frame
>>>> length does not need to be checked and only ensure conf->rxmode.mtu is valid.
>>>>
>>>> These should be guaranteed by dev_configure() in the framework .
>>>>
>>> Got it, agree that 'HNS3_DEFAULT_FRAME_LEN' check is now wrong, and as you said
>>> these checks are becoming redundant, so I will remove them.
>>>
>>> In that case 'hns3_refresh_mtu()' becomes just wrapper to 'hns3_dev_mtu_set()',
>>> I will remove function too.
>>>
>>> <...>
>> ok
>>>
>>>>> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
>>>>> b/drivers/net/hns3/hns3_ethdev_vf.c
>>>>> index e582503f529b..ca839fa55fa0 100644
>>>>> --- a/drivers/net/hns3/hns3_ethdev_vf.c
>>>>> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
>>>>> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>>>> uint16_t nb_rx_q = dev->data->nb_rx_queues;
>>>>> uint16_t nb_tx_q = dev->data->nb_tx_queues;
>>>>> struct rte_eth_rss_conf rss_conf;
>>>>> - uint32_t max_rx_pkt_len;
>>>>> - uint16_t mtu;
>>>>> + uint32_t max_rx_pktlen;
>>>>> bool gro_en;
>>>>> int ret;
>>>>> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>>>>> goto cfg_err;
>>>>> }
>>>>> - /*
>>>>> - * If jumbo frames are enabled, MTU needs to be refreshed
>>>>> - * according to the maximum RX packet length.
>>>>> - */
>>>>> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>>>> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
>>>>> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
>>>>> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
>>>>> - hns3_err(hw, "maximum Rx packet length must be greater "
>>>>> - "than %u and less than %u when jumbo frame enabled.",
>>>>> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>>>> - (uint16_t)HNS3_MAX_FRAME_LEN);
>>>>> - ret = -EINVAL;
>>>>> - goto cfg_err;
>>>>> - }
>>>>> -
>>>>> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
>>>>> - ret = hns3vf_dev_mtu_set(dev, mtu);
>>>>> - if (ret)
>>>>> - goto cfg_err;
>>>>> - dev->data->mtu = mtu;
>>>>> + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
>>>>> + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
>>>>> + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>>>>> + hns3_err(hw, "maximum Rx packet length must be greater "
>>>>> + "than %u and less than %u when jumbo frame enabled.",
>>>>> + (uint16_t)HNS3_DEFAULT_FRAME_LEN,
>>>>> + (uint16_t)HNS3_MAX_FRAME_LEN);
>>>>> + ret = -EINVAL;
>>>>> + goto cfg_err;
>>>>> }
>>>> Please remove this check now, thanks!
>>> ack
>>>
>>> <...>
>>>
>>>>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>>>>> index ce8882a45883..f868e5d906c7 100644
>>>>> --- a/examples/ip_reassembly/main.c
>>>>> +++ b/examples/ip_reassembly/main.c
>>>>> @@ -162,7 +162,7 @@ static struct lcore_queue_conf
>>>>> lcore_queue_conf[RTE_MAX_LCORE];
>>>>> static struct rte_eth_conf port_conf = {
>>>>> .rxmode = {
>>>>> .mq_mode = ETH_MQ_RX_RSS,
>>>>> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
>>>>> + .mtu = JUMBO_FRAME_MAX_SIZE,
>>>> It feel likes that the replacement of max_rx_pkt_len with MTU is inappropriate.
>>>>
>>>> Because "max_rx_pkt_len " is the sum of "mtu" and "overhead_len".
>>> You are right, it is not same thing. I will update it to remove overhead.
>>>
>>> <...>
>>>
>>>>> @@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t
>>>>> nb_rx_q, uint16_t nb_tx_q,
>>>>> }
>>>>> /*
>>>>> - * If jumbo frames are enabled, check that the maximum RX packet
>>>>> - * length is supported by the configured device.
>>>>> + * Check that the maximum RX packet length is supported by the
>>>>> + * configured device.
>>>>> */
>>>>> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>>>>> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
>>>>> - RTE_ETHDEV_LOG(ERR,
>>>>> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
>>>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>>>> - dev_info.max_rx_pktlen);
>>>>> - ret = -EINVAL;
>>>>> - goto rollback;
>>>>> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
>>>>> - RTE_ETHDEV_LOG(ERR,
>>>>> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
>>>>> - port_id, dev_conf->rxmode.max_rx_pkt_len,
>>>>> - (unsigned int)RTE_ETHER_MIN_LEN);
>>>>> - ret = -EINVAL;
>>>>> - goto rollback;
>>>>> - }
>>>>> + if (dev_conf->rxmode.mtu == 0)
>>>>> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>>>> Here, it will cause a case that the user configuration is inconsistent with the
>>>> configuration saved in the framework .
>>> What is the framework you mentioned?
>>>
>>> Previously 'max_rx_pkt_len' was mandatory when jumbo frame is configured even
>>> user doesn't really case about it and it was causing additional complexity in
>>> the configuration.
>>> This check is required to use defaults when application doesn't need a specific
>>> value, I believe this is a good usability improvement.
>>> Application who cares about a specific value can set it explicitly and it will
>>> be in sync with application.
>>>
>>>> Is it more reasonable to provide a prompt message?
>>> Not sure about it. We are not changing a user configured value, but using
>>> default value when application doesn't set it, and that kind of log will be
>>> printed by most of the applications, this may cause noise.
>> This is a good reason.
>>>
>>>>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>>>>> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>>>>> + RTE_ETHDEV_LOG(ERR,
>>>>> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>>>>> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>>>>> + ret = -EINVAL;
>>>>> + goto rollback;
>>>>> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>>>>> + RTE_ETHDEV_LOG(ERR,
>>>>> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>>>>> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>>>>> + ret = -EINVAL;
>>>>> + goto rollback;
>>>>> + }
>>>> Above "max_rx_pktlen < RTE_ETHER_MIN_LEN " case will be inconsistent with
>>>> dev_set_mtu() API.
>>>>
>>>> The reasons are as follows:
>>>>
>>>> The value of RTE_ETHER_MIN_LEN is 64. If "overhead_len" is 26 caculated by
>>>> eth_dev_get_overhead_len(), it means
>>>>
>>>> that dev->data->dev_conf.rxmode.mtu equal to 38 is reasonable.
>>>>
>>>> But, in dev_set_mtu() API, the check for mtu is:
>>>>
>>>> @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>>>> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>>>> return -EINVAL;
>>>>
>>>> It should be noted that dev_info.min_mtu is RTE_ETHER_MIN_MTU (68).
>>>>
>>> Agree on the inconsistency.
>>>
>>> RTE_ETHER_MIN_MTU is 68, that is min MTU for IPv4
>>> RTE_ETHER_MIN_LEN is 64, and min MTU for Ethernet frame
>>>
>>> Although we are talking about MTU, we are mainly concerned about Ethernet frame
>>> payload, not IPv4.
>>>
>>> I suggest only using RTE_ETHER_MIN_LEN.
>>> Since this inconsistency was already there before this patch, I will update it
>>> in seperate patch instead of fixing in this one.
>>> .
>>
>> Got it. Since the MTU value depends on the type of transmission link. Why does
>> we define
>>
>> a minimum MTU?
>>
>
> I don't think we care about type of transmission in this level, I assume we
> define min MTU mainly for the HW limitation and configuration. That is why it
> makes sense to me to use Ethernet frame lenght limitation (not IPv4 one).
+1
>> True, we don't have to break the current restrictions in this patch. But it is
>> an indirect
>>
>> check on the MTU. Now that "mtu" in Rxmode is an entry for configuring MTU for
>> driver,
>>
>> I prefer to keep the same MTU check in ethdev layer. If there's a better way to
>> handle it,
>>
>> perhaps it would be more appropriate to do it in this patchset.
>>
>> I'd like to know how you're going to adjust。
>>
>
> I am planning to move the MTU checks into common function and use it for both
> 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' in a seperate patch. Please
> check v2.
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 1:31 ` Ajit Khaparde
@ 2021-07-22 10:27 ` Ferruh Yigit
2021-07-22 10:38 ` Andrew Rybchenko
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 10:27 UTC (permalink / raw)
To: Ajit Khaparde
Cc: Andrew Rybchenko, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, David Hunt, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon, dpdk-dev
On 7/22/2021 2:31 AM, Ajit Khaparde wrote:
>
>
>
> > [snip]
> >
> >> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> >> index faf3bd901d75..9f288f98329c 100644
> >> --- a/lib/ethdev/rte_ethdev.h
> >> +++ b/lib/ethdev/rte_ethdev.h
> >> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
> >> struct rte_eth_rxmode {
> >> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> >> enum rte_eth_rx_mq_mode mq_mode;
> >> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> >> + uint32_t mtu; /**< Requested MTU. */
> >
> > Maximum Transmit Unit looks a bit confusing in Rx mode
> > structure.
> >
>
> True, but I think it is already used for Rx already as concept, I believe the
> intention will be clear enough. Do you think will be more clear if we pick a
> DPDK specific variable name?
>
> Maybe use MRU - Max Receive Unit.
>
It can be an option, but this patch unifies 'max_rx_pkt_len' & 'mtu' => mtu.
If we switch to 'mru', we should switch all usage to 'mru', including renaming
the 'rte_eth_dev_set_mtu()' API, so as not to cause new confusion between 'mru'
& 'mtu'.
Does 'mtu' really cause enough confusion to justify all this change?
>
> >> /** Maximum allowed size of LRO aggregated packet. */
> >> uint32_t max_lro_pkt_size;
> >> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> >
> > [snip]
> >
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 10:27 ` Ferruh Yigit
@ 2021-07-22 10:38 ` Andrew Rybchenko
0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-07-22 10:38 UTC (permalink / raw)
To: Ferruh Yigit, Ajit Khaparde
Cc: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, David Hunt, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon, dpdk-dev
On 7/22/21 1:27 PM, Ferruh Yigit wrote:
> On 7/22/2021 2:31 AM, Ajit Khaparde wrote:
>>
>>
>>
>> > [snip]
>> >
>> >> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> >> index faf3bd901d75..9f288f98329c 100644
>> >> --- a/lib/ethdev/rte_ethdev.h
>> >> +++ b/lib/ethdev/rte_ethdev.h
>> >> @@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
>> >> struct rte_eth_rxmode {
>> >> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
>> >> enum rte_eth_rx_mq_mode mq_mode;
>> >> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
>> >> + uint32_t mtu; /**< Requested MTU. */
>> >
>> > Maximum Transmit Unit looks a bit confusing in Rx mode
>> > structure.
>> >
>>
>> True, but I think it is already used for Rx already as concept, I believe the
>> intention will be clear enough. Do you think will be more clear if we pick a
>> DPDK specific variable name?
>>
>> Maybe use MRU - Max Receive Unit.
>>
>
> It can be an option, but this patch unifies 'max_rx_pkt_len' & 'mtu' => mtu,
> if we switch to 'mru', we should switch all usage to 'mru', including
> 'rte_eth_dev_set_mtu()' API name change, to not cause a new confusion between
> 'mru' & 'mtu' difference.
>
> Does 'mtu' really cause this much confusion to do all this change?
Reconsidering it I see no better options. Yes, mtu is a bit confusing
in Rx configuration, but just a bit.
>>
>> >> /** Maximum allowed size of LRO aggregated packet. */
>> >> uint32_t max_lro_pkt_size;
>> >> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
>> >
>> > [snip]
>> >
>>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 10:15 ` Andrew Rybchenko
@ 2021-07-22 14:43 ` Stephen Hemminger
2021-09-17 1:08 ` Min Hu (Connor)
0 siblings, 1 reply; 112+ messages in thread
From: Stephen Hemminger @ 2021-07-22 14:43 UTC (permalink / raw)
To: Andrew Rybchenko; +Cc: Ferruh Yigit, Huisong Li, dev
On Thu, 22 Jul 2021 13:15:04 +0300
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> > I don't think we care about the type of transmission at this level, I assume we
> > define min MTU mainly for the HW limitation and configuration. That is why it
> > makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
>
> +1
Also it is important that DPDK follow the conventions of other software
such as Linux and BSD. Cisco and Juniper already disagree about whether the
header should be included in what is defined as MTU; i.e. Cisco says 1514
and Juniper says 1500.
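For reference, a minimal sketch of the arithmetic using the rte_ether.h
constants (the Linux-style convention, where MTU excludes the L2 header and
CRC); this is only an illustration, not part of the patch:
#include <stdio.h>
#include <rte_ether.h>

/* MTU counts the L2 payload only (1500); the on-wire frame adds the
 * 14-byte Ethernet header and 4-byte CRC, giving RTE_ETHER_MAX_LEN (1518).
 */
int
main(void)
{
	unsigned int frame_len = RTE_ETHER_MTU + RTE_ETHER_HDR_LEN +
			RTE_ETHER_CRC_LEN;

	printf("MTU %u -> max frame length %u (RTE_ETHER_MAX_LEN = %u)\n",
	       RTE_ETHER_MTU, frame_len, RTE_ETHER_MAX_LEN);
	return 0;
}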
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v2 1/6] ethdev: fix max Rx packet length
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
` (5 preceding siblings ...)
2021-07-19 3:35 ` Huisong Li
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (5 more replies)
6 siblings, 6 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Bernard Iremonger, Bruce Richardson,
Konstantin Ananyev, Kiran Kumar K, Nithin Dabilpuram, David Hunt,
John McNamara, Igor Russkikh, Pavel Belous, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Haiyue Wang, Marcin Wojtas, Michal Krawczyk,
Guy Tzalik, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
John Daley, Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: Ferruh Yigit, dev
There is confusion about setting the max Rx packet length; this patch aims to
clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the result
is stored into '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but they work in a disconnected way; they
store the set values in different variables, which makes it hard to figure
out which one to use, and having two different related methods is confusing for
the users.
Other issues causing confusion are:
* maximum transmission unit (MTU) is the payload of the Ethernet frame, while
'max_rx_pkt_len' is the size of the whole Ethernet frame. The difference is the
Ethernet frame overhead, but this may differ from device to
device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo frames,
which adds additional confusion, and some APIs and PMDs already
disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as a parameter, and both save the result
in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is renamed to 'mtu', and it is always valid, independent
of jumbo frame.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function and the result
should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
both the application and the PMD use the MTU from this variable.
When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.
As additional clarification, MTU is used to configure the device for the
physical Rx/Tx limitation. Another related issue is the size of the buffer used
to store Rx packets: many PMDs use the mbuf data buffer size as the Rx buffer
size, and compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx, if the PMD supports it. If scattered Rx is not supported by the
device, an MTU bigger than the Rx buffer size should fail.
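As a rough usage sketch (illustrative only, not part of the patch; the queue
counts and the 'port_id'/'mtu' parameters are assumptions), an application sets
the initial MTU through the new 'rxmode.mtu' field and can still adjust it at
runtime with 'rte_eth_dev_set_mtu()':
#include <rte_ethdev.h>

/* Minimal sketch: configure the initial MTU via the new 'rxmode.mtu'
 * field of 'struct rte_eth_conf'; the same value can later be changed
 * with rte_eth_dev_set_mtu(). Error handling is kept minimal.
 */
static int
configure_port_mtu(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* Payload size only; L2 overhead is added by the PMD. */
			.mtu = mtu,
		},
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	/* Optional runtime change; both paths end up in dev->data->mtu. */
	return rte_eth_dev_set_mtu(port_id, mtu);
}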
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
v2:
* Converted to explicit checks for zero/non-zero
* fixed hns3 checks
* fixed some sample app rxmode.mtu value
* fixed some sample app max-pkt-len argument and updated doc for it
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 45 +++---
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 4 +-
app/test-pmd/testpmd.c | 100 ++++++++------
app/test-pmd/testpmd.h | 2 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 8 +-
doc/guides/sample_app_ug/ioat.rst | 1 -
doc/guides/sample_app_ug/ip_reassembly.rst | 2 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 8 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 31 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_ethdev_vf.c | 14 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_net.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 4 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 10 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 10 +-
examples/vhost/main.c | 5 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 98 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
124 files changed, 806 insertions(+), 1076 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8468018cf35d..c183a8982f13 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
printf("Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
-
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- printf("max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- printf("rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ printf("max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- printf("Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ printf("rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 04ae0feb5852..918ee3af2a71 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1155,21 +1154,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
printf("Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 5e69d2aa8cfe..8e8556d74a4a 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ rx_mode.mtu = (uint32_t) n -
+ (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index a48f70962f54..d2658bdc9ff3 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1417,11 +1411,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1432,7 +1439,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
printf("Updating jumbo frame offload failed for port %u\n",
pid);
@@ -1463,14 +1470,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -3337,43 +3350,44 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
+
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
printf("Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3394,18 +3408,16 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = rte_eth_dev_set_mtu(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- printf("Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) {
+ printf("Failed to set MTU to %u for port %u\n", new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d61a055bdd1b..42143f85924f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..3e9254fe896d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 917482dbe2a5..b8d43aa90098 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 6470f1c05ac8..ce16e1047df2 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -551,7 +551,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d15515..f4c0f212cb8a 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index f5b727c1eed4..cfc13b88a25f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -603,9 +603,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 9584d6bfd723..86da47d8f9c6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -56,31 +56,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 01915971ae83..2cc36a688af3 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -325,13 +325,7 @@ Forwarding application is shown below:
}
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. code-block:: c
-
- static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
- };
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst
index 7eb557f91c7a..c5c06261e395 100644
--- a/doc/guides/sample_app_ug/ioat.rst
+++ b/doc/guides/sample_app_ug/ioat.rst
@@ -162,7 +162,6 @@ multiple CBDMA channels per port:
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index e72c8492e972..2090b23fdd1c 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -175,7 +175,7 @@ each RX queue uses its own mempool.
.. code-block:: c
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index a117502a664b..95e50cbe99fd 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 42fe9574bf2e..a285de8e4df5 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index eb0c6d42f4de..3223a9808d28 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 4c6a1dbe5cbe..1c0c02b58166 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index 263d8debc81b..a88cb8f14a4b 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -157,13 +157,7 @@ Forwarding application is shown below:
}
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. code-block:: c
-
- static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
- };
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..0feacc822433 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not send
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..76aeec077f2b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb2d..e27720e71645 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1137,13 +1137,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1181,6 +1176,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1195,7 +1191,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -2996,6 +2992,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3009,8 +3006,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3037,7 +3033,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3059,9 +3055,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a6755661c49c..ed3893f8d6fa 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652ed5109..5c68e14c928b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..8cf61f12a8d6 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..56703e3a39e8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8b803b8542dc..cc040a9a6d6e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..35b517891d67 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..e9a30d393bd7 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index dfe68279fa7b..e9b718786a39 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
"max mtu: %d, min mtu: %d",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. new_mtu: %d "
"max mtu: %d min mtu: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
ETH_RSS_UDP;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
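
The ena dev_info change above shows the pattern this series expects from PMDs: report MTU limits directly and derive max_rx_pktlen from them, instead of exposing a bare max_mtu as a packet length. A sketch with an assumed info struct (field names and values are illustrative):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN  4

struct dev_info_sketch {
        uint32_t max_rx_pktlen;
        uint16_t min_mtu;
        uint16_t max_mtu;
};

static void fill_mtu_limits(struct dev_info_sketch *info,
                            uint16_t hw_min_mtu, uint16_t hw_max_mtu)
{
        info->min_mtu = hw_min_mtu;
        info->max_mtu = hw_max_mtu;
        /* The packet length limit follows from the MTU limit plus L2 overhead. */
        info->max_rx_pktlen = (uint32_t)hw_max_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
}

int main(void)
{
        struct dev_info_sketch info;

        fill_mtu_limits(&info, 128, 9216);
        printf("min_mtu %u max_mtu %u max_rx_pktlen %u\n",
               info.min_mtu, info.max_mtu, info.max_rx_pktlen);
        return 0;
}
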
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..cdb9783b5372 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..6a81ceb62ba7 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly support MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
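
The enic change keeps the driver's trick of dropping oversized frames without scatter: post buffers no larger than the frame the MTU implies, so bigger frames are truncated by hardware and discarded in the Rx handler. A standalone sketch of that sizing decision (names and sizes are illustrative, not the driver's API):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN  4

/*
 * Without scatter, cap the posted buffer at the maximum frame length so
 * larger frames are truncated by hardware instead of overflowing.
 */
static uint16_t pick_rx_buf_len(uint16_t mbuf_room, uint16_t headroom,
                                uint16_t mtu, int scatter_enabled)
{
        uint32_t max_rx_pktlen = (uint32_t)mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
        uint16_t buf_len = mbuf_room - headroom;

        if (!scatter_enabled && max_rx_pktlen < buf_len)
                buf_len = (uint16_t)max_rx_pktlen;
        return buf_len;
}

int main(void)
{
        printf("no scatter, mtu 1500: buf_len %u\n",
               pick_rx_buf_len(2048, 128, 1500, 0));
        printf("scatter,    mtu 1500: buf_len %u\n",
               pick_rx_buf_len(2048, 128, 1500, 1));
        return 0;
}
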
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..5e4b361ca6c0 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 946465779f2e..c737ef8d06d8 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
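
The hinic validation above converts the requested MTU to a packet length before range-checking it. Those conversion macros are plain additive offsets; a sketch of the round trip, with the overhead assumed to be Ethernet header plus CRC (the real HINIC_MTU_TO_PKTLEN/HINIC_PKTLEN_TO_MTU macros may use a device-specific overhead):

#include <stdint.h>
#include <stdio.h>

#define L2_OVERHEAD (14 + 4) /* Ethernet header + CRC, assumed */

#define MTU_TO_PKTLEN(mtu)    ((mtu) + L2_OVERHEAD)
#define PKTLEN_TO_MTU(pktlen) ((pktlen) - L2_OVERHEAD)

int main(void)
{
        uint16_t mtu = 1500;
        uint32_t pktlen = MTU_TO_PKTLEN(mtu);

        /* The two macros are inverses, so the round trip is lossless. */
        printf("mtu %u -> pktlen %u -> mtu %u\n",
               mtu, pktlen, (uint16_t)PKTLEN_TO_MTU(pktlen));
        return 0;
}
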
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 17b995af1501..c5e90228bb3e 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2374,41 +2374,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2526,8 +2491,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2622,7 +2587,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2643,7 +2608,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8f3be64b0b32..e44712ad499a 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index a86e105fbc35..9b0ea1a1480b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
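
With the hns3 changes above, configure applies conf->rxmode.mtu through the mtu_set callback unconditionally; there is no longer a jumbo-only refresh path derived from max_rx_pkt_len. A simplified sketch of that flow, with invented limits and helper names:

#include <stdint.h>
#include <stdio.h>

#define ETHER_MTU 1500

/* Stand-in for a PMD mtu_set callback; 64..9600 are illustrative HW limits. */
static int sketch_mtu_set(uint16_t mtu, uint16_t *applied)
{
        if (mtu < 64 || mtu > 9600)
                return -1;
        *applied = mtu;
        return 0;
}

/* Configure always applies the requested MTU, defaulting to RTE_ETHER_MTU. */
static int sketch_configure(uint16_t requested_mtu, uint16_t *dev_mtu)
{
        return sketch_mtu_set(requested_mtu ? requested_mtu : ETHER_MTU, dev_mtu);
}

int main(void)
{
        uint16_t mtu = 0;

        if (sketch_configure(9000, &mtu) == 0)
                printf("applied mtu %u\n", mtu);
        return 0;
}
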
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..1161f301b9ae 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..086a167ca672 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << I40E_RXQ_CTX_DBUFF_SHIFT));
len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS;
- rxq->max_pkt_len = RTE_MIN(len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD);
/**
* Check if the jumbo frame and maximum packet length are set correctly
@@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
hw->adapter_stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
@@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
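
The i40e/iavf Rx queue setup above caps the programmed maximum packet length by both the MTU-derived frame size and what the chained Rx buffers can hold. A short sketch of that clamp; the chain length of 5 and the 26-byte overhead (header + CRC + two VLAN tags) are assumptions:

#include <stdint.h>
#include <stdio.h>

#define ETH_OVERHEAD 26 /* 14 hdr + 4 CRC + 2 * 4 VLAN, assumed */

/*
 * The HW max packet length is the smaller of the frame implied by the MTU
 * and the total data the chained Rx buffers can store.
 */
static uint32_t rxq_max_pkt_len(uint16_t rx_buf_len, uint8_t chain_len,
                                uint16_t mtu)
{
        uint32_t chain_capacity = (uint32_t)rx_buf_len * chain_len;
        uint32_t frame_size = (uint32_t)mtu + ETH_OVERHEAD;

        return chain_capacity < frame_size ? chain_capacity : frame_size;
}

int main(void)
{
        printf("max_pkt_len = %u\n", rxq_max_pkt_len(2048, 5, 9000));
        printf("max_pkt_len = %u\n", rxq_max_pkt_len(1024, 5, 9000));
        return 0;
}
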
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 026cda948cd6..13c3760c8d13 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2923,8 +2923,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 41382c6d669b..13c2329d85a7 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len, len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
- max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(len, frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da8759..c83941a908b6 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954f1..205d69f8fcfc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3385,8 +3385,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3765,14 +3765,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047ee..9da9a42aaad0 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -270,15 +270,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -369,11 +370,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..b26723064b07 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering VLAN so tag needs to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
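
The relocated IGC_ETH_OVERHEAD above now counts two VLAN tags, so the MTU-to-frame conversion also covers QinQ. A quick worked check of the arithmetic (constant values as in rte_ether.h):

#include <stdio.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN  4
#define VLAN_TAG_SIZE  4

/* Mirrors IGC_ETH_OVERHEAD: L2 header + CRC + two stacked VLAN tags. */
#define ETH_OVERHEAD (ETHER_HDR_LEN + ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)

int main(void)
{
        /* 1500 + 14 + 4 + 8 = 1526 bytes on the wire for the default MTU. */
        printf("overhead %d, frame for mtu 1500: %d\n",
               ETH_OVERHEAD, 1500 + ETH_OVERHEAD);
        return 0;
}
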
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..28d3076439c3 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..97447a10e46a 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..3f5fc66abf71 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..3634c0c8c5f0 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..b9048ade3c35 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5172,7 +5172,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5186,9 +5185,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5197,23 +5196,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6267,12 +6261,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6556,8 +6548,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6565,7 +6556,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6582,8 +6573,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
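
In the ixgbe MTU path above, the frame size derived from the MTU lands in the upper 16 bits of MAXFRS while the lower half is preserved. A sketch of that register update (the register layout follows the code above; the surrounding read/write helpers are stand-ins):

#include <stdint.h>
#include <stdio.h>

/* MAXFRS keeps the maximum frame size in its upper 16 bits. */
static uint32_t maxfrs_update(uint32_t maxfrs, uint32_t frame_size)
{
        maxfrs &= 0x0000FFFF;        /* preserve the low half */
        maxfrs |= frame_size << 16;  /* program the new frame size */
        return maxfrs;
}

int main(void)
{
        uint32_t reg = maxfrs_update(0x05EE0000, 1526);

        printf("MAXFRS = 0x%08x (frame %u)\n", reg, reg >> 16);
        return 0;
}
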
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..9bcbc445f2d0 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c814a28cb49a..eb11e22e59e3 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5059,6 +5059,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5094,7 +5095,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5168,8 +5169,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5649,6 +5649,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5685,10 +5686,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5747,8 +5747,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..976916f870a5 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..4a5cfd22aa71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
};
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 23685d76541f..78499c4cc496 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1336,10 +1336,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1378,7 +1379,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1416,7 +1417,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1441,7 +1442,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1460,7 +1461,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1484,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1493,9 +1494,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1517,13 +1518,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
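
The mlx5 queue setup above picks the working maximum from one of two sources: the LRO aggregate size when LRO is enabled on the queue, or the MTU-derived frame length otherwise. A hedged sketch of that selection (field names are illustrative):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN  4

/*
 * LRO queues size their segments for the aggregated super-frame; plain
 * queues size them for the largest single frame the MTU implies.
 */
static uint32_t rxq_max_pktlen(int lro_on, uint32_t max_lro_pkt_size,
                               uint16_t mtu)
{
        if (lro_on)
                return max_lro_pkt_size;
        return (uint32_t)mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
}

int main(void)
{
        printf("lro off: %u\n", rxq_max_pktlen(0, 65280, 1500));
        printf("lro on : %u\n", rxq_max_pktlen(1, 65280, 1500));
        return 0;
}
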
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..5ce71661c84e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
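
The mvneta/mvpp2 queue setup above no longer shrinks max_rx_pkt_len when the mbuf pool is too small; it lowers dev->data->mtu instead, keeping a single source of truth. A sketch of that clamp (the headroom and offset sizes are illustrative):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN 14

/*
 * If one mbuf cannot hold the frame the current MTU implies, lower the MTU
 * so the frame and the buffer agree, and return the adjusted value.
 */
static uint16_t clamp_mtu_to_pool(uint16_t mtu, uint32_t buf_size,
                                  uint32_t headroom, uint32_t extra_offset)
{
        uint32_t usable = buf_size - headroom - extra_offset;
        uint32_t max_rx_pktlen = (uint32_t)mtu + ETHER_HDR_LEN;

        if (usable < max_rx_pktlen)
                mtu = (uint16_t)(usable - ETHER_HDR_LEN);
        return mtu;
}

int main(void)
{
        /* 2048-byte buffers cannot hold a 9014-byte frame: the MTU drops. */
        printf("clamped mtu: %u\n", clamp_mtu_to_pool(9000, 2048, 128, 64));
        return 0;
}
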
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index a30e78db168c..6f9c279fde4d 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -646,7 +646,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1553,16 +1553,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..787e8d890215 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..ba282762b749 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,14 +58,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -74,7 +71,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -82,10 +78,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..53b2c0ca10e3 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..62a126999a5c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
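[Editor's illustration, not part of the patch: the qede, octeontx, nicvf and txgbe hunks above all converge on the same Rx-buffer check, deriving the frame size from the single MTU value and enabling scattered Rx when it no longer fits the mbuf data room. A minimal standalone sketch of that decision; OVERHEAD_LEN and the 2048-byte data room are illustrative values, real drivers use their own per-device overhead macros.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative L2 overhead: Ethernet header + CRC. */
#define OVERHEAD_LEN (14 + 4)

/* Decide whether scattered Rx is needed for a given MTU and mbuf data room. */
static bool
needs_scattered_rx(uint16_t mtu, uint16_t mbuf_data_room)
{
	/* Frame size is always derived from the MTU now. */
	uint32_t frame_size = (uint32_t)mtu + OVERHEAD_LEN;

	return frame_size > mbuf_data_room;
}

int
main(void)
{
	/* A 1500-byte MTU fits in a default 2048-byte mbuf, 9000 does not. */
	printf("mtu 1500 -> scatter: %d\n", needs_scattered_rx(1500, 2048));
	printf("mtu 9000 -> scatter: %d\n", needs_scattered_rx(9000, 2048));
	return 0;
}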
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 88896db1f86f..02c28cbfc1d2 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1025,15 +1025,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index ac117f9c4814..ca9538fb8f2f 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -364,14 +364,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
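[Editor's illustration, not part of the patch: the pfe and tap hunks above drop the driver-side "dev->data->mtu = mtu" assignment because, with this rework, the ethdev layer stores the value once the mtu_set callback returns success. A minimal standalone sketch of that split, using stub types and illustrative names.]

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the ethdev structures, just enough for the sketch to build. */
struct eth_dev_data { uint16_t mtu; };
struct eth_dev { struct eth_dev_data *data; };

/* Driver callback: only program the hardware limit and report success. */
static int
example_mtu_set(struct eth_dev *dev, uint16_t mtu)
{
	(void)dev;
	printf("hw max frame set to %u\n", (unsigned int)mtu + 18);
	return 0;
}

/* Library side: store the MTU only after the driver callback succeeds. */
static int
set_mtu(struct eth_dev *dev, uint16_t mtu)
{
	int ret = example_mtu_set(dev, mtu);

	if (ret == 0)
		dev->data->mtu = mtu;
	return ret;
}

int
main(void)
{
	struct eth_dev_data data = { .mtu = 1500 };
	struct eth_dev dev = { .data = &data };

	set_mtu(&dev, 9000);
	printf("dev->data->mtu = %u\n", data.mtu);
	return 0;
}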
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..1d1360faff66 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e62675520a15..d773a81665d7 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,8 +3482,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 6f577f4c80df..3362ca097ca7 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1143,8 +1143,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 05683056676c..9491cc2669f7 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2009,8 +2009,6 @@ virtio_dev_configure(struct rte_eth_dev *dev)
const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
struct virtio_hw *hw = dev->data->dev_private;
- uint32_t ether_hdr_len = RTE_ETHER_HDR_LEN + VLAN_TAG_LEN +
- hw->vtnet_hdr_size;
uint64_t rx_offloads = rxmode->offloads;
uint64_t tx_offloads = txmode->offloads;
uint64_t req_features;
@@ -2039,7 +2037,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len)
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..98e47e0812d5 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..70c37a7d2ba7 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -117,7 +117,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..0b973d392dc8 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..f5c28268d9f8 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..9d9f150522dd 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 94c155364842..3e1daa228316 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,12 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
struct flow_classifier {
struct rte_flow_classifier *cls;
};
@@ -191,7 +185,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -202,6 +196,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 2e377e2d4bb6..5dbf60f7ef54 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -806,7 +806,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 77a6a18d1914..da4efdb83e64 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -914,9 +915,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -959,8 +960,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..a6485b32906f 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN, /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index ce8882a45883..253f7be2ca07 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -875,7 +876,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
*/
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1046,9 +1048,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..f8a1f544c21d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2161,7 +2160,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2209,10 +2207,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index fd6207a18b79..c211ffeb127a 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -107,7 +107,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -694,9 +695,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..c10814c6a94f 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 8e7eb3248589..cef4187467f0 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 4f5161649234..b36c6123c652 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,7 +215,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index ab341e55b299..0d0857bf8041 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..7abb612ee6a4 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2080,6 +2081,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
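[Editor's illustration, not part of the patch: the eth_dev_get_overhead_len()/config_port_max_pkt_len() pair added to each example follows the same arithmetic. A minimal standalone sketch of how a "--max-pkt-len 9000" request is translated into an MTU; the 9600/9582 device limits are example numbers only.]

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN 4
#define ETHER_MTU     1500

static uint16_t
overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
{
	/* Prefer the device-reported difference, fall back to header + CRC. */
	if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
		return max_rx_pktlen - max_mtu;
	return ETHER_HDR_LEN + ETHER_CRC_LEN;
}

int
main(void)
{
	/* Example device limits: 9600-byte max frame, 9582-byte max MTU. */
	uint16_t oh = overhead_len(9600, 9582);  /* 18 bytes */
	uint16_t mtu = 9000 - oh;                /* 8982     */

	printf("overhead %u, mtu %u, jumbo %s\n",
	       oh, mtu, mtu > ETHER_MTU ? "yes" : "no");  /* jumbo yes */
	return 0;
}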
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 75c2e0ef3f3f..627bdecbd95f 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -721,6 +700,43 @@ graph_main_loop(void *conf)
return 0;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -805,6 +821,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 6ce0f829dc41..92344c2114e1 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1600,16 +1601,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1794,6 +1794,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1809,7 +1810,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1953,36 +1954,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2504,6 +2479,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2620,6 +2632,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4cb800aa158d..b7249fce577b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..b6cddc8c7b51 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..018ae4058cf2 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN, /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 173451eedcbe..54148631f09e 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 6e724f37835a..2e9ed3cf7ef7 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -54,7 +54,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 192521c3c6b0..ea86c69b07ad 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -118,7 +112,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -131,6 +125,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index 43b9d17a3c91..26c63ffed742 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,12 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +26,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -44,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2179eadb979..bbd540e5db61 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -639,8 +639,9 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu =
+ JUMBO_FRAME_MAX_SIZE -
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..309d1a3a8444 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1b5..cea11d7abe50 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
static inline int
eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
- uint32_t max_rx_pkt_len, uint32_t dev_info_size)
+ uint32_t max_rx_pktlen, uint32_t dev_info_size)
{
int ret = 0;
if (dev_info_size == 0) {
- if (config_size != max_rx_pkt_len) {
+ if (config_size != max_rx_pktlen) {
RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
" %u != %u is not allowed\n",
- port_id, config_size, max_rx_pkt_len);
+ port_id, config_size, max_rx_pktlen);
ret = -EINVAL;
}
} else if (config_size > dev_info_size) {
@@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
+ uint32_t max_rx_pktlen;
uint16_t overhead_len;
int diag;
int ret;
@@ -1382,11 +1396,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1455,49 +1466,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2157,13 +2164,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351fdb..93c3051cfca0 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
* [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (4 subsequent siblings)
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Igor Russkikh,
Andrew Rybchenko, Maciej Czekaj, Jiawen Wu, Jian Wang,
Thomas Monjalon
Cc: Ferruh Yigit, dev
Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support, and
the application should enable the jumbo frame offload for it.
When the jumbo frame offload is not enabled by the application but an MTU
bigger than RTE_ETHER_MTU is requested, there are two options: either fail
or enable the jumbo frame offload implicitly.
Many drivers choose to enable the jumbo frame offload implicitly, since
setting a big MTU value already implies it, and this increases usability.
This patch moves that logic from the drivers to the library, both to reduce
duplicated code in the drivers and to make the behaviour more visible.
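As a rough illustration (a hypothetical application snippet, not part of this
patch; 'port_id' and the usual <rte_ethdev.h>/<stdio.h> includes are assumed),
after this change requesting the MTU is enough and the offload flag follows it:

	/* hypothetical helper; on success the library now also toggles
	 * DEV_RX_OFFLOAD_JUMBO_FRAME in dev_conf.rxmode.offloads, so
	 * neither the application nor the PMD mtu_set callback has to
	 * manage the flag.
	 */
	static int
	app_set_jumbo_mtu(uint16_t port_id, uint16_t mtu)
	{
		int ret;

		ret = rte_eth_dev_set_mtu(port_id, mtu);
		if (ret < 0)
			printf("Failed to set MTU %u: %s\n", mtu,
				rte_strerror(-ret));
		return ret;
	}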
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/i40e/i40e_ethdev_vf.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_net.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
28 files changed, 29 insertions(+), 171 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index e27720e71645..18511b28e4a3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3022,15 +3022,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 56703e3a39e8..a444f749bb96 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index cc040a9a6d6e..febe3d0b754e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 35b517891d67..f15774eae20d 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4401,15 +4401,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c737ef8d06d8..c1cde811a252 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1556,13 +1556,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index c5e90228bb3e..b2328d3690b8 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2577,7 +2577,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2587,7 +2586,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2602,12 +2600,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index e44712ad499a..178de997d138 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1161f301b9ae..c5058f26dff2 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11772,11 +11772,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 086a167ca672..2015a86ba5ca 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2884,11 +2884,6 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 13c2329d85a7..ba5be45e8c5e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1446,13 +1446,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 205d69f8fcfc..be84992ea419 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3765,11 +3765,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b9048ade3c35..c4696f34a7a1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5196,13 +5196,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 976916f870a5..3a516c52d199 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 6f9c279fde4d..3af636ee3912 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1552,12 +1552,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index ba282762b749..0c97ef7584a0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,11 +58,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 53b2c0ca10e3..71065f8072ac 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 02c28cbfc1d2..159a51953cd8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1023,15 +1023,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1d1360faff66..0639889b2144 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index d773a81665d7..b1a3f9fbb84d 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,12 +3482,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index cea11d7abe50..56a09172ceb2 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3640,6 +3640,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3658,12 +3659,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (3 subsequent siblings)
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Igor Russkikh, Maciej Czekaj,
Jiawen Wu, Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev
Move the requested MTU value check to the API to avoid duplicating the same
check in each driver.
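For reference, the per-driver range check being removed is replaced by a
single one in the library; a condensed sketch of it (the exact code is in the
rte_ethdev.c hunk at the end of this patch) is:

	/* inside rte_eth_dev_set_mtu(), using the reported device limits */
	overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
						dev_info.max_mtu);
	frame_size = mtu + overhead_len;
	if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu ||
	    frame_size > dev_info.max_rx_pktlen)
		return -EINVAL;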
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/i40e/i40e_ethdev_vf.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_net.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
23 files changed, 29 insertions(+), 169 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2960834b4539..c36cd7b1d2f0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 18511b28e4a3..2c58f7f681c6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -2995,7 +2995,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 0c9cc2f5bb3f..70b879fed100 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a444f749bb96..60dd4f67fc26 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index febe3d0b754e..7bb309691ce2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index f15774eae20d..fb69210ba9f4 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4368,9 +4368,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4379,15 +4377,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index fbcbbb6c0533..a7372c1787c7 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c1cde811a252..ce0b52c718ab 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1539,17 +1539,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c5058f26dff2..16a184ad1035 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11754,25 +11754,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 2015a86ba5ca..0e6065da8a3f 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2866,25 +2866,16 @@ i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = vf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index ba5be45e8c5e..049671ef3da9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1432,21 +1432,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index be84992ea419..05bcd300bb30 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3747,21 +3747,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcbc26b8186e..e279ae1fff1d 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index e8a33f04bd69..377b96c0236a 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3a516c52d199..9d1d811a2e37 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 3af636ee3912..6c54f0e358e9 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1541,10 +1541,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index fb65be2c2dc3..b2355fa695bc 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 0c97ef7584a0..cba03b4bb9b8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -18,11 +18,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
int rc;
frame_size += NIX_TIMESYNC_RX_OFFSET * otx2_ethdev_is_ptp_en(dev);
-
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 71065f8072ac..098e56e9822f 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 0639889b2144..ac8477cbd7f4 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b1a3f9fbb84d..41b0e63cd79e 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3459,18 +3459,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 56a09172ceb2..cae456c70a28 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3653,6 +3653,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3660,6 +3663,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
` (2 subsequent siblings)
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Pavel Belous, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce it
by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.
And instead of the application explicitly setting this flag to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' to
'RTE_ETHER_MTU'.
This additional configuration is removed for simplification.
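For illustration (a hypothetical application helper, not part of this patch;
<rte_ethdev.h> and <stdbool.h> are assumed), the capability can be deduced
from the reported limits instead of from an offload flag:

	/* jumbo support is deduced from dev_info rather than from
	 * DEV_RX_OFFLOAD_JUMBO_FRAME in rx_offload_capa
	 */
	static bool
	app_port_supports_jumbo(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return false;

		return dev_info.max_mtu > RTE_ETHER_MTU;
	}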
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 24 +---------
app/test-pmd/testpmd.c | 46 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 5 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 2 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_ethdev_vf.c | 3 +-
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_net.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 2 -
lib/ethdev/rte_ethdev.c | 26 +----------
lib/ethdev/rte_ethdev.h | 1 -
76 files changed, 47 insertions(+), 257 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c183a8982f13..90f2b7dcfcc5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1921,7 +1921,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 918ee3af2a71..82ba6d667f0d 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1136,39 +1136,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- printf("Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
printf("Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d2658bdc9ff3..2ca68baa3aa4 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1439,11 +1439,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- printf("Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3349,24 +3344,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3375,39 +3364,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- printf("Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 42143f85924f..b94bf668dc4d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..8f10c6c78a1f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f4c0f212cb8a..be63fa1cc3b7 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index c36cd7b1d2f0..0bc9e5eeeb10 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff3700..061ec3bccb10 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -597,7 +597,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2c58f7f681c6..3c9a8c4e624e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -736,15 +736,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1230,7 +1225,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index ed3893f8d6fa..149058deb673 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1731,14 +1731,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3cdaa0c..77620580b2b0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -75,9 +75,8 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 70b879fed100..1374f32b6826 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if ((&rxq->fl) != NULL)
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 60dd4f67fc26..9cc808b767ea 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 7bb309691ce2..eb0bcfd6f867 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..1ae78fe71f02 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..e061f80a906a 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e9a30d393bd7..dda4d2101adb 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e9b718786a39..4322dce260f5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2042,8 +2042,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Inform framework about available features */
dev_info->rx_offload_capa = rx_feat;
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index a7372c1787c7..6457677d300a 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..6a4758ea8e8a 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..47c5efe9ea77 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 5e4b361ca6c0..093021246286 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index ce0b52c718ab..b1563350ec0e 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -747,7 +747,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b2328d3690b8..a2dd68ab50b5 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2697,7 +2697,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 178de997d138..97b96b61ba95 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 16a184ad1035..3efb1a59fcd3 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3758,7 +3758,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0e6065da8a3f..e47ea9a9290e 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1932,7 +1932,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
/**
* Check if the jumbo frame and maximum packet length are set correctly
*/
- if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -2378,7 +2378,6 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = 0;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 13c3760c8d13..2184a1f78296 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2925,7 +2925,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 049671ef3da9..f156add80e0d 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -574,7 +574,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -939,7 +939,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c83941a908b6..3969b03aa8eb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -65,7 +65,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -663,7 +663,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..07843c6dbc92 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -141,7 +141,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 05bcd300bb30..cbc8858f068d 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3450,7 +3450,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 9da9a42aaad0..a3db5257d418 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
@@ -281,7 +280,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 28d3076439c3..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c4696f34a7a1..8c180f77a04e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6229,7 +6229,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6251,14 +6250,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index eb11e22e59e3..057be2b0dbd2 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3029,7 +3029,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5091,7 +5090,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 78499c4cc496..4f5d8f5349d2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 5ce71661c84e..ef987b7de1b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 6c54f0e358e9..fdbada5f1cd8 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -645,8 +645,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1309,9 +1308,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e95d933a866d..25f6cbe42512 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -147,7 +147,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 098e56e9822f..abd4b998bd3a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index f6a8ac68e814..c589f8fbdf48 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -939,8 +939,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9491cc2669f7..efb76ccf63e6 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2442,7 +2442,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
dev_info->rx_offload_capa |=
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a11..8df98170587a 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index da4efdb83e64..0c9d7c8a294e 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 253f7be2ca07..b92f0e460178 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f8a1f544c21d..bcddd30c486a 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2207,8 +2207,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index c211ffeb127a..0ccc3ba4a7c4 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c10814c6a94f..0fd945e7e0b2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 7abb612ee6a4..f6dfb156ac56 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 627bdecbd95f..939b42e352ea 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -729,10 +729,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 92344c2114e1..f8097a4fdb99 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2508,10 +2508,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index b7249fce577b..cab89e6b7ba4 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index b6cddc8c7b51..8fc3a7c675a2 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index bbd540e5db61..4411adec40f3 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -637,8 +637,6 @@ us_vhost_parse_args(int argc, char **argv)
}
mergeable = !!ret;
if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
vmdq_conf_default.rxmode.mtu =
JUMBO_FRAME_MAX_SIZE -
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index cae456c70a28..97d5c7d42d3b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1486,13 +1485,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3640,7 +3632,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3668,27 +3659,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 93c3051cfca0..892840e66227 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1359,7 +1359,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
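For illustration only (not part of the patch): with the JUMBO_FRAME offload
flag removed, an application that wants jumbo frames simply sets the desired
MTU and the PMD derives jumbo handling from it. The port id, queue counts and
the 9000-byte value below are placeholder assumptions for this sketch.

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: request ~9000-byte frames via the MTU alone; no Rx offload flag. */
static int
configure_jumbo(uint16_t port_id)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mtu = 9000;	/* replaces max_rx_pkt_len + JUMBO_FRAME */

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

This mirrors the conversion done in the examples above (l3fwd,
ip_fragmentation, ipsec-secgw, etc.), which now only set 'rxmode.mtu'.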
* [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
` (2 preceding siblings ...)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-07-23 3:29 ` Huisong Li
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
5 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko; +Cc: Ferruh Yigit, dev, Huisong Li
Both 'rte_eth_dev_configure()' and 'rte_eth_dev_set_mtu()' set the MTU, but
they perform slightly different checks; for example, one checks the minimum
MTU against RTE_ETHER_MIN_MTU and the other against RTE_ETHER_MIN_LEN.
Move the checks into a common function to unify them. This also has the
benefit of producing common error logs.
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
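For illustration only (not part of the patch), a minimal sketch of the
application-facing effect: whichever path applies the MTU, an invalid value
is now rejected by the same validation and produces the same error logs.
The helper name and error handling below are assumptions for this example.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: apply an MTU and report a rejection. After this
 * patch, rte_eth_dev_set_mtu() and rte_eth_dev_configure() validate the
 * MTU through the same common function.
 */
static int
apply_mtu(uint16_t port_id, uint16_t mtu)
{
	int ret = rte_eth_dev_set_mtu(port_id, mtu);

	if (ret != 0)
		fprintf(stderr, "port %u: MTU %u rejected: %d\n",
			port_id, mtu, ret);
	return ret;
}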
---
lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 54 insertions(+), 30 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 97d5c7d42d3b..1957fdec46a7 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1337,6 +1337,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1464,26 +1505,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1492,6 +1520,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3438,7 +3469,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3644,21 +3676,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 892840e66227..dbb14c1978e7 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3032,7 +3032,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
* [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
` (3 preceding siblings ...)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-07-22 17:21 ` Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-07-22 17:21 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index b92f0e460178..5c1b951c0d80 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -512,7 +512,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -614,7 +613,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
* Re: [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-07-23 3:29 ` Huisong Li
0 siblings, 0 replies; 112+ messages in thread
From: Huisong Li @ 2021-07-23 3:29 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Thomas Monjalon, Andrew Rybchenko
On 2021/7/23 1:21, Ferruh Yigit wrote:
> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
> have slightly different checks. Like one checks min MTU against
> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>
> Checks moved into common function to unify the checks. Also this has
> benefit to have common error logs.
>
> Suggested-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
> lib/ethdev/rte_ethdev.h | 2 +-
> 2 files changed, 54 insertions(+), 30 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 97d5c7d42d3b..1957fdec46a7 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1337,6 +1337,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> return overhead_len;
> }
>
> +/* rte_eth_dev_info_get() should be called prior to this function */
> +static int
> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
> + uint16_t mtu)
> +{
> + uint16_t overhead_len;
> + uint32_t frame_size;
> +
> + if (mtu < dev_info->min_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) < device min MTU (%u) for port_id %u\n",
> + mtu, dev_info->min_mtu, port_id);
> + return -EINVAL;
> + }
> + if (mtu > dev_info->max_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) > device max MTU (%u) for port_id %u\n",
> + mtu, dev_info->max_mtu, port_id);
> + return -EINVAL;
> + }
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + frame_size = mtu + overhead_len;
> + if (frame_size < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) < min frame size (%u) for port_id %u\n",
> + frame_size, RTE_ETHER_MIN_LEN, port_id);
> + return -EINVAL;
> + }
> +
> + if (frame_size > dev_info->max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) > device max frame size (%u) for port_id %u\n",
> + frame_size, dev_info->max_rx_pktlen, port_id);
> + return -EINVAL;
> + }
Because the MTU validity check is performed earlier, the verification of
"frame_size" is logically redundant.
> +
> + return 0;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1464,26 +1505,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
> }
>
> - /*
> - * Check that the maximum RX packet length is supported by the
> - * configured device.
> - */
> if (dev_conf->rxmode.mtu == 0)
> dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> - max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> - if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> - port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> - port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> +
> + ret = eth_dev_validate_mtu(port_id, &dev_info,
> + dev->data->dev_conf.rxmode.mtu);
> + if (ret != 0)
> goto rollback;
> - }
>
> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>
> @@ -1492,6 +1520,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
These lines are not related to the current patch. This is already done
in another patch.
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> @@ -3438,7 +3469,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> dev_info->rx_desc_lim = lim;
> dev_info->tx_desc_lim = lim;
> dev_info->device = dev->device;
> - dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> + dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> dev_info->max_mtu = UINT16_MAX;
>
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> @@ -3644,21 +3676,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> * which relies on dev->dev_ops->dev_infos_get.
> */
> if (*dev->dev_ops->dev_infos_get != NULL) {
> - uint16_t overhead_len;
> - uint32_t frame_size;
> -
> ret = rte_eth_dev_info_get(port_id, &dev_info);
> if (ret != 0)
> return ret;
>
> - if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> - return -EINVAL;
> -
> - overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> - dev_info.max_mtu);
> - frame_size = mtu + overhead_len;
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> + ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
> + if (ret != 0)
> + return ret;
> }
>
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 892840e66227..dbb14c1978e7 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3032,7 +3032,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> * };
> *
> * device = dev->device
> - * min_mtu = RTE_ETHER_MIN_MTU
> + * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
I think there is something else we need to do for RTE_ETHER_MIN_MTU.
Maybe an announcement is necessary to prevent future use of it and to
eventually replace it with the current value.
> * max_mtu = UINT16_MAX
> *
> * The following fields will be populated if support for dev_infos_get()
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-07-22 14:43 ` Stephen Hemminger
@ 2021-09-17 1:08 ` Min Hu (Connor)
2021-09-17 8:04 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Min Hu (Connor) @ 2021-09-17 1:08 UTC (permalink / raw)
To: Stephen Hemminger, Andrew Rybchenko; +Cc: Ferruh Yigit, Huisong Li, dev
Hi Ferruh,
What is the status of this patch series?
Could it be merged?
On 2021/7/22 22:43, Stephen Hemminger wrote:
> On Thu, 22 Jul 2021 13:15:04 +0300
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>
>>> I don't think we care about type of transmission in this level, I assume we
>>> define min MTU mainly for the HW limitation and configuration. That is why it
>>> makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
>>
>> +1
>
> Also it is important that DPDK follow the conventions of other software
> such as Linux and BSD. Cisco and Juniper already disagree about whether
> the header should be included in what is defined as MTU; i.e. Cisco says 1514
> and Juniper says 1500.
> .
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-09-17 1:08 ` Min Hu (Connor)
@ 2021-09-17 8:04 ` Ferruh Yigit
2021-09-17 8:16 ` Min Hu (Connor)
2021-09-17 8:17 ` Min Hu (Connor)
0 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-09-17 8:04 UTC (permalink / raw)
To: Min Hu (Connor), Stephen Hemminger, Andrew Rybchenko; +Cc: Huisong Li, dev
On 9/17/2021 2:08 AM, Min Hu (Connor) wrote:
> Hi Ferruh,
> What is the status of this patch series?
> Could it be merged?
>
Hi Connor,
I should send a new version of it, will do soon.
>
> On 2021/7/22 22:43, Stephen Hemminger wrote:
>> On Thu, 22 Jul 2021 13:15:04 +0300
>> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>>>> I don't think we care about type of transmission in this level, I assume we
>>>> define min MTU mainly for the HW limitation and configuration. That is why it
>>>> makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
>>>
>>> +1
>>
>> Also it is important that DPDK follow the conventions of other software
>> such as Linux and BSD. Cisco and Juniper already disagree about whether
>> the header should be included in what is defined as MTU; i.e. Cisco says 1514
>> and Juniper says 1500.
>> .
>>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-09-17 8:04 ` Ferruh Yigit
@ 2021-09-17 8:16 ` Min Hu (Connor)
2021-09-17 8:17 ` Min Hu (Connor)
1 sibling, 0 replies; 112+ messages in thread
From: Min Hu (Connor) @ 2021-09-17 8:16 UTC (permalink / raw)
To: Ferruh Yigit, Stephen Hemminger, Andrew Rybchenko; +Cc: Huisong Li, dev
On 2021/9/17 16:04, Ferruh Yigit wrote:
> On 9/17/2021 2:08 AM, Min Hu (Connor) wrote:
>> Hi, Ferruh,
>> What is the status of this set of your patches ?
>> Could they be merged?
>>
>
> Hi Connor,
>
> I should send a new version of it, will do soon.
>
Thanks Ferruh.
>>
>> On 2021/7/22 22:43, Stephen Hemminger wrote:
>>> On Thu, 22 Jul 2021 13:15:04 +0300
>>> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>>>> I don't think we care about type of transmission in this level, I assume we
>>>>> define min MTU mainly for the HW limitation and configuration. That is why it
>>>>> makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
>>>>
>>>> +1
>>>
>>> Also it is important that DPDK follow the conventions of other software
>>> such as Linux and BSD. Cisco and Juniper already disagree about whether
>>> the header should be included in what is defined as MTU; i.e. Cisco says 1514
>>> and Juniper says 1500.
>>> .
>>>
>
> .
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
2021-09-17 8:04 ` Ferruh Yigit
2021-09-17 8:16 ` Min Hu (Connor)
@ 2021-09-17 8:17 ` Min Hu (Connor)
1 sibling, 0 replies; 112+ messages in thread
From: Min Hu (Connor) @ 2021-09-17 8:17 UTC (permalink / raw)
To: Ferruh Yigit, Stephen Hemminger, Andrew Rybchenko; +Cc: Huisong Li, dev
On 2021/9/17 16:04, Ferruh Yigit wrote:
> On 9/17/2021 2:08 AM, Min Hu (Connor) wrote:
>> Hi Ferruh,
>> What is the status of this patch series?
>> Could it be merged?
>>
>
> Hi Connor,
>
> I should send a new version of it, will do soon.
>
Please Cc me next time you send the new version, thanks.
>>
>> On 2021/7/22 22:43, Stephen Hemminger wrote:
>>> On Thu, 22 Jul 2021 13:15:04 +0300
>>> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>>>
>>>>> I don't think we care about type of transmission in this level, I assume we
>>>>> define min MTU mainly for the HW limitation and configuration. That is why it
>>>>> makes sense to me to use the Ethernet frame length limitation (not the IPv4 one).
>>>>
>>>> +1
>>>
>>> Also it is important that DPDK follow the conventions of other software
>>> such as Linux and BSD. Cisco and Juniper already disagree about whether
>>> the header should be included in what is defined as MTU; i.e. Cisco says 1514
>>> and Juniper says 1500.
>>> .
>>>
>
> .
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
` (4 preceding siblings ...)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (9 more replies)
5 siblings, 10 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon, Jerin Jacob, Xiaoyun Li,
Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh
Cc: Ferruh Yigit, dev
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way: they store
the set values in different variables, which makes it hard to figure
out which one to use, and having two different methods for a related
functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload size of the
Ethernet frame, while 'max_rx_pkt_len' is the size of the whole
Ethernet frame. The difference is the Ethernet frame overhead, and
this overhead may differ from device to device based on what the
device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs
already discard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs take the MTU as parameter, and both save the
result in the same variable '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is renamed to 'mtu', and it is always valid,
independent of jumbo frame support.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function, and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
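As a minimal illustration (names and values below are placeholders,
assuming <rte_ethdev.h>), an application that used to fill
'max_rx_pkt_len' would now request the MTU at configure time:
    #include <rte_ethdev.h>
    /* Sketch: configure a port with an explicit MTU; 9000 is only an example. */
    static int
    port_configure_mtu(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_conf port_conf = {
                    /* Leaving .mtu as 0 means the RTE_ETHER_MTU default. */
                    .rxmode = { .mtu = 9000, },
            };
            if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
                    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
            /* The accepted value ends up in (struct rte_eth_dev)->data->mtu. */
            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    }
'rte_eth_dev_set_mtu()' can still change the value afterwards; both
paths end up in the same '(struct rte_eth_dev)->data->mtu' variable.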
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
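A rough sketch of that comparison, using the per-device overhead derived
from 'rte_eth_dev_info' (illustrative only, the exact logic is per PMD):
    #include <rte_ethdev.h>
    /* Sketch: would this MTU need scattered Rx for a given Rx buffer size? */
    static int
    mtu_needs_scattered_rx(uint16_t port_id, uint16_t mtu, uint32_t rx_buf_size)
    {
            struct rte_eth_dev_info dev_info;
            uint32_t overhead;
            int ret;
            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;
            /* Device-specific L2 overhead, with header + CRC as fallback. */
            if (dev_info.max_mtu != UINT16_MAX &&
                dev_info.max_rx_pktlen > dev_info.max_mtu)
                    overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
            else
                    overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
            /* Frame bigger than a single Rx buffer -> scattered Rx required. */
            return (mtu + overhead) > rx_buf_size;
    }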
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Min Hu (Connor) <humin29@huawei.com>
v2:
* Converted to explicit checks for zero/non-zero
* fixed hns3 checks
* fixed some sample app rxmode.mtu value
* fixed some sample app max-pkt-len argument and updated doc for it
v3:
* rebased
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 49 +++----
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 4 +-
app/test-pmd/testpmd.c | 103 ++++++++------
app/test-pmd/testpmd.h | 2 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 7 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 7 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 31 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_common.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 9 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 12 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 12 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 92 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
121 files changed, 801 insertions(+), 1071 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a9efd027c376..a677451073ae 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
fprintf(stderr, "Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- fprintf(stderr,
- "max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- fprintf(stderr,
- "rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
-
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ fprintf(stderr,
+ "max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- fprintf(stderr, "Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr,
+ "rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96ee..db3eeffa0093 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321f..27eb4bc667df 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -870,7 +870,9 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ rx_mode.mtu = (uint32_t) n -
+ (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ecd..8c23cfe7c3da 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -446,13 +446,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1481,11 +1475,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1496,7 +1503,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
fprintf(stderr,
"Updating jumbo frame offload failed for port %u\n",
@@ -1528,14 +1535,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -3451,44 +3464,45 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
+
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3509,19 +3523,18 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = eth_dev_set_mtu_mp(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- fprintf(stderr,
- "Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
+ fprintf(stderr,
+ "Failed to set MTU to %u for port %u\n",
+ new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3e..17562215c733 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..3e9254fe896d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 7355ec305916..9dad612058c6 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index df23a5704dca..831bc564883a 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -545,7 +545,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c976..483cb7da576f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..1f5619ed53fc 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -606,9 +606,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b4f..1063a1fe4bea 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,31 +81,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 812aaa87b05b..6c4c04e935e4 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -162,12 +162,7 @@ Forwarding application is shown below:
:end-before: >8 End of initializing a given port.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
- :language: c
- :start-after: Ethernet ports configured with default settings using struct. 8<
- :end-before: >8 End of configuration of Ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 2d5cd5f1c0ba..56af5cd5b383 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 03e9a85aa68c..0a3e0d44ecea 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 0495314c87d5..8817eaadbfc3 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 9b09838f6448..7d1bf6eaae8c 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index f7bcd7ed2a1d..6d0de6440105 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -106,12 +106,7 @@ Forwarding application is shown below:
:end-before: >8 End of main functional part of port initialization.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. literalinclude:: ../../../examples/skeleton/basicfwd.c
- :language: c
- :start-after: Configuration of ethernet ports. 8<
- :end-before: >8 End of configuration of ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..0feacc822433 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not send
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..76aeec077f2b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85fa..8c6f20b75aed 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b34d..412acff42f65 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8629193d5049..8d0677cd89d9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..8cf61f12a8d6 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249df1..adbdb87baab9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..758a14e0ad2d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e36d..4c114bf90fc7 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..e9a30d393bd7 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..3a9d5031b262 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR,
"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
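
[Editor's illustration] With 'max_rx_pkt_len' no longer carrying the limit, 'rte_eth_dev_info' is where applications learn it, so the ena hunk above reports 'max_rx_pktlen' as the frame length of the largest supported MTU and fills in 'min_mtu'/'max_mtu'. The sketch below only illustrates that derivation; everything except the dev_info field names is made up.

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_HDR_LEN 14
#define RTE_ETHER_CRC_LEN 4
#define RTE_ETHER_MIN_MTU 68

/* Just the dev_info fields relevant to the pattern, stubbed locally. */
struct example_dev_info {
	uint32_t max_rx_pktlen;	/* largest frame the port can receive */
	uint16_t min_mtu;
	uint16_t max_mtu;
};

/* Report the limits in MTU terms and derive max_rx_pktlen from max_mtu. */
static void example_infos_get(uint16_t hw_max_mtu, struct example_dev_info *info)
{
	info->max_rx_pktlen = (uint32_t)hw_max_mtu +
			      RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
	info->min_mtu = RTE_ETHER_MIN_MTU;
	info->max_mtu = hw_max_mtu;
}

int main(void)
{
	struct example_dev_info info;

	example_infos_get(9216, &info);
	printf("max_rx_pktlen %u, mtu range %u..%u\n",
	       info.max_rx_pktlen, info.min_mtu, info.max_mtu);
	return 0;
}
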
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..cdb9783b5372 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..6a81ceb62ba7 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly support MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..5e4b361ca6c0 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d450..2d8271cb6095 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..4ead227f9122 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
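
[Editor's illustration] The hns3 configure hunk above shows how the configure path is simplified: rather than deriving an MTU from 'max_rx_pkt_len' only when the jumbo offload is requested, the driver forwards 'conf->rxmode.mtu' to its regular MTU callback and lets that callback set or clear the jumbo flag. A rough standalone sketch of the control flow; the names are illustrative, not the hns3 API.

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_MTU 1500

/* Stand-in for the per-port state the real driver keeps. */
struct example_port {
	uint16_t mtu;
	int      jumbo_enabled;
};

/* Stand-in for the driver's .mtu_set callback: validate, program HW, record state. */
static int example_mtu_set(struct example_port *port, uint16_t mtu)
{
	/* Real drivers also check dev_info min/max and write HW registers here. */
	port->jumbo_enabled = mtu > RTE_ETHER_MTU;
	port->mtu = mtu;
	return 0;
}

/* Configure no longer special-cases jumbo: it simply forwards rxmode.mtu. */
static int example_configure(struct example_port *port, uint16_t rxmode_mtu)
{
	return example_mtu_set(port, rxmode_mtu);
}

int main(void)
{
	struct example_port port = { 0 };

	example_configure(&port, 9000);
	printf("mtu %u, jumbo %d\n", port.mtu, port.jumbo_enabled);
	return 0;
}
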
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..0b5db486f8d6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e3957f..a260212f73f1 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7a2a8281d2d5..2033f8f55cd6 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11774,14 +11774,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d5847ac6b546..1d27cf2b0a01 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e152..0eabce275d92 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
max_pkt_len = RTE_MIN((uint32_t)
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
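
[Editor's illustration] The i40e/iavf/ice queue init hunks clamp the hardware maximum packet length to the smaller of the frame length implied by the MTU and the data room available across the longest allowed Rx buffer chain. A standalone sketch of that clamp, with illustrative names and a made-up chain limit:

#include <stdint.h>
#include <stdio.h>

#define RTE_ETHER_HDR_LEN 14
#define RTE_ETHER_CRC_LEN 4

#define EXAMPLE_MAX_CHAINED_RX_BUFFERS 5	/* stand-in for the per-HW chain limit */

static uint32_t example_min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

/*
 * The HW max packet length is the frame length the MTU implies, capped by the
 * total data room available across the longest allowed Rx buffer chain.
 */
static uint32_t example_max_pkt_len(uint16_t mtu, uint16_t rx_buf_len)
{
	uint32_t frame_len = (uint32_t)mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
	uint32_t chain_cap = (uint32_t)rx_buf_len * EXAMPLE_MAX_CHAINED_RX_BUFFERS;

	return example_min_u32(chain_cap, frame_len);
}

int main(void)
{
	printf("max_pkt_len %u\n", example_max_pkt_len(9000, 2048));
	return 0;
}
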
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index f510bad381db..8f14a494203a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -65,9 +65,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ea3b5c02aa1b..e6d5128599e1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e6930..f9ef6ce57277 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..b26723064b07 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering VLAN so tag needs to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..28d3076439c3 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..97447a10e46a 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..3f5fc66abf71 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..3634c0c8c5f0 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47cd..31e67d86e77b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..9bcbc445f2d0 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755de..03991711fd6e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..976916f870a5 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..4a5cfd22aa71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
};
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..6f4f351222d3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..5ce71661c84e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..a2031a7a82cc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..787e8d890215 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d2b..cf7804157198 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a4304e0eff44..4b971fd1fe3c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba59..c2263787b4ec 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplify rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..1f55c90b419d 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..22f74735db08 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81a3..c8ae95a61306 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..269de9f848dd 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a88770..43dc0ed39b75 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24abe7..5d341a3e23bb 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
hw->max_rx_pkt_len = frame_size;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
return 0;
}
@@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- else
- hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
+ hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM))
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index adbd40808396..68e3c13730ad 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f06..25ca459be57b 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index d0f40a1fb4bc..8c4a8feec0c2 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 5ed0dc73ec60..e26be8edf28f 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ab8c6d6a0dad..476b147bdfcc 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 65c1d85cf2fb..8a43f6ac0f92 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,14 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-/* Ethernet ports configured with default settings using struct. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of Ethernet ports. */
-
/* Creation of flow classifier object. 8< */
struct flow_classifier {
struct rte_flow_classifier *cls;
@@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be561..fdc66368dce9 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..12062a785dc6 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -918,9 +919,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -963,8 +964,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..9ba02e687adb 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..e5c7d46d2caa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* mbufs stored in the fragment table. 8< */
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1054,9 +1056,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb8228b..d032a47d1c3b 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..b3993685ec92 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -715,9 +716,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..c10814c6a94f 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 9b3e324efb23..d9cf00c9dfc7 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf76d..f9438176cbb1 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 19f32809aa9d..9040be5ed9b6 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..7abb612ee6a4 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2080,6 +2081,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..b431b9ff5f3c 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -722,6 +701,43 @@ graph_main_loop(void *conf)
}
/* >8 End of main processing loop. */
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -807,6 +823,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..e58561327c48 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
/* Power library initialized in the main routine. 8< */
int
main(int argc, char **argv)
@@ -2622,6 +2634,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..cb9bc7ad6002 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..b6cddc8c7b51 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..4f20dfc4be06 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..3b6c6c297f43 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..c32d2e12e633 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index ab6fa7d56c5d..6845c396b8d9 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae9bbee8d820..fd7207aee758 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,14 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-/* Configuration of ethernet ports. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of ethernet ports. */
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e36a..da381b41c0c5 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -44,6 +44,7 @@
#define BURST_RX_RETRIES 4 /* Number of retries on RX. */
#define JUMBO_FRAME_MAX_SIZE 0x2600
+#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
/* State of virtio device. */
#define DEVICE_MAC_LEARNING 0
@@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu = MAX_MTU;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e59fb7d3478b..e19d79a40802 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca924221..4d0584af52e3 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
+ uint32_t max_rx_pktlen;
uint16_t overhead_len;
int diag;
int ret;
@@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674cc..9fba2bd73c84 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
2021-10-04 5:08 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (8 subsequent siblings)
9 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon, Somalapuram Amaranath,
Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang
Cc: Ferruh Yigit, dev
Setting an MTU larger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload to get it.
When the jumbo frame offload is not enabled by the application but an
MTU larger than RTE_ETHER_MTU is requested, there are two options:
either fail or enable the jumbo frame offload implicitly.
Many drivers choose to enable the jumbo frame offload implicitly, since
setting a large MTU value already implies it, and this improves
usability.
This patch moves that logic from the drivers to the library, both to
reduce duplicated code in the drivers and to make the behaviour more
visible.
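For illustration, a minimal application-side sketch of the behaviour
after this change (port 0 and the 9000-byte MTU are example values, not
taken from the patch; error handling trimmed):

	uint16_t port_id = 0;      /* example port */
	uint16_t jumbo_mtu = 9000; /* any value larger than RTE_ETHER_MTU */

	/* The library now toggles DEV_RX_OFFLOAD_JUMBO_FRAME itself: on
	 * success it stores the MTU in dev->data->mtu and, because
	 * jumbo_mtu > RTE_ETHER_MTU, sets the jumbo frame Rx offload in
	 * dev->data->dev_conf.rxmode.offloads.
	 */
	if (rte_eth_dev_set_mtu(port_id, jumbo_mtu) != 0)
		printf("setting jumbo MTU failed\n");

The per-driver copies of this flag handling, removed in the hunks
below, then become unnecessary.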
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_common.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
27 files changed, 29 insertions(+), 166 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8c6f20b75aed..07ee19938930 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index adbdb87baab9..57b09f16ba44 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 758a14e0ad2d..df44bb204f65 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 4c114bf90fc7..a061d0529dd1 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 2d8271cb6095..4b30dfa222a8 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 4ead227f9122..e1d465de8234 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 0b5db486f8d6..3438b3650de6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2033f8f55cd6..e14859db9cfd 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11774,11 +11774,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 0eabce275d92..844d26d87ba6 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e6d5128599e1..83e8f0da687c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 31e67d86e77b..574a7bffc9cb 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 976916f870a5..3a516c52d199 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a2031a7a82cc..850ec7655f82 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index cf7804157198..293306c7be2a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4b971fd1fe3c..6886a4e5efb4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 1f55c90b419d..2ee80e2dc41f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index c8ae95a61306..b501fee5332c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 269de9f848dd..35b98097c3a4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d0584af52e3..1740bab98a83 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
2021-10-04 5:09 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (7 subsequent siblings)
9 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon, Somalapuram Amaranath,
Ajit Khaparde, Somnath Kotur, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Beilei Xing, Jingjing Wu,
Qiming Yang, Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K, Rasesh Mody,
Devendra Singh Rawat, Maciej Czekaj, Jiawen Wu, Jian Wang
Cc: Ferruh Yigit, dev
Move the requested MTU value check into the API to avoid duplicating
the same check in each driver.
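For reference, a rough sketch of the range check that is now done once
in rte_eth_dev_set_mtu() (following the lib/ethdev hunk at the end of
this patch; 'overhead_len' comes from the eth_dev_get_overhead_len()
helper introduced earlier in the series):

	if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
		return -EINVAL;

	frame_size = mtu + overhead_len;
	if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
		return -EINVAL;

With the check in one place, the per-driver min/max validations removed
below can no longer drift apart.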
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_common.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
22 files changed, 25 insertions(+), 155 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2960834b4539..c36cd7b1d2f0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 07ee19938930..dc33b961320a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3025,7 +3025,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 0c9cc2f5bb3f..70b879fed100 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 57b09f16ba44..3172e3b2de87 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index df44bb204f65..c28f03641bbc 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a061d0529dd1..3164fde5b939 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4363,9 +4363,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4374,15 +4372,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index fbcbbb6c0533..a7372c1787c7 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4b30dfa222a8..79987bec273c 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1530,17 +1530,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index e14859db9cfd..b93e314d3d0c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11756,25 +11756,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 844d26d87ba6..2d43c666fdbb 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 83e8f0da687c..02c06d4da8bc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcbc26b8186e..e279ae1fff1d 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index e8a33f04bd69..377b96c0236a 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3a516c52d199..9d1d811a2e37 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 850ec7655f82..b1ce35b334da 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index fb65be2c2dc3..b2355fa695bc 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 293306c7be2a..206da6f7cfda 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (dev->configured && otx2_ethdev_is_ptp_en(dev))
frame_size += NIX_TIMESYNC_RX_OFFSET;
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6886a4e5efb4..84e23ff03418 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b501fee5332c..44c6b1c72354 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 35b98097c3a4..c6fcb1871981 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3463,18 +3463,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1740bab98a83..ce0ed509d28f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3652,6 +3652,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3659,6 +3662,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
[not found] ` <CAOBf=muYkU2dwgi3iC8Q7pdSNTJsMUwWYdXj14KeN_=_mUGa0w@mail.gmail.com>
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks Ferruh Yigit
` (6 subsequent siblings)
9 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon, Jerin Jacob, Xiaoyun Li,
Ajit Khaparde, Somnath Kotur, Igor Russkikh,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara
Cc: Ferruh Yigit, dev
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can
deduce it from the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application explicitly setting this flag to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
against 'RTE_ETHER_MTU'.
Removing this additional configuration simplifies usage.
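For illustration only (not part of this patch), a minimal
application-side sketch of the deduction described above; the helper
name and error handling are assumptions:

#include <rte_ethdev.h>

/*
 * Hypothetical helper: deduce jumbo frame support from the reported
 * device limits instead of a JUMBO_FRAME capability flag.
 * Returns 1 if jumbo frames are possible, 0 if not, negative on error.
 */
static int
port_supports_jumbo(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        return dev_info.max_mtu > RTE_ETHER_MTU ||
                dev_info.max_rx_pktlen > RTE_ETHER_MAX_LEN;
}

On the driver side the equivalent deduction is simply
'dev->data->mtu > RTE_ETHER_MTU', as the per-driver changes below show.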
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 25 +---------
app/test-pmd/testpmd.c | 48 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 5 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 1 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_common.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 5 +-
lib/ethdev/rte_ethdev.c | 26 +---------
lib/ethdev/rte_ethdev.h | 1 -
75 files changed, 47 insertions(+), 259 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a677451073ae..117945c2c61e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1923,7 +1923,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index db3eeffa0093..e890fadc716c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- fprintf(stderr,
- "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8c23cfe7c3da..d2a2a9ac6cda 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1503,12 +1503,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- fprintf(stderr,
- "Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3463,24 +3457,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3489,40 +3477,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- fprintf(stderr,
- "Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 17562215c733..eed9d031fd9a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..8f10c6c78a1f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 483cb7da576f..9580445828bf 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index c36cd7b1d2f0..0bc9e5eeeb10 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5121d05da65f..6743cf92b0e6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -595,7 +595,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index dc33b961320a..e9d04f354a39 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 412acff42f65..2f3a1759419f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 10e05e6b5edd..fa8c48f1eeb0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -75,9 +75,8 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 70b879fed100..1374f32b6826 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if ((&rxq->fl) != NULL)
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3172e3b2de87..defc072072af 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c28f03641bbc..dc25eefb33b0 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..1ae78fe71f02 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..e061f80a906a 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e9a30d393bd7..dda4d2101adb 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3a9d5031b262..6d1026d31951 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1918,7 +1918,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index a7372c1787c7..6457677d300a 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d031..c5777772a09e 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..47c5efe9ea77 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 5e4b361ca6c0..093021246286 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 79987bec273c..4005414aeb71 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index e1d465de8234..dbd4c54b18c6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2691,7 +2691,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 3438b3650de6..eee65ac77399 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index b93e314d3d0c..f27746ae295e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3760,7 +3760,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1d27cf2b0a01..69c282baa723 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2911,7 +2911,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 2d43c666fdbb..2c4103ac7ef9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 8f14a494203a..b6d79a51fa8c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -71,7 +71,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -682,7 +682,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f9137..d28fedc96e1a 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 02c06d4da8bc..9b39e9c023ef 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f9ef6ce57277..cc7908d32584 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
@@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 28d3076439c3..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 574a7bffc9cb..3205c37c3b82 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6234,7 +6234,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6256,14 +6255,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 03991711fd6e..c223ef37c79f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3033,7 +3033,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5095,7 +5094,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 6f4f351222d3..0cc3bccc0825 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 5ce71661c84e..ef987b7de1b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index b1ce35b334da..a0bb5b9640c2 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..47ee126ed7fd 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -148,7 +148,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 84e23ff03418..06c3ccf20716 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..62b215f62cd6 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5d341a3e23bb..a05e73cd8b60 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2556,7 +2556,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 2f40ae907dcd..0210f9140b48 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 12062a785dc6..7c0cb093eda3 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index e5c7d46d2caa..af67db49f7fb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index d032a47d1c3b..4a741bfdde4d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index b3993685ec92..63bbd7e64ceb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c10814c6a94f..0fd945e7e0b2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 7abb612ee6a4..f6dfb156ac56 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index b431b9ff5f3c..a185a0512826 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index e58561327c48..12b4dce77ce1 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index cb9bc7ad6002..22d35749410b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index b6cddc8c7b51..8fc3a7c675a2 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index da381b41c0c5..a9c207124153 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
return -1;
}
mergeable = !!ret;
- if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (ret)
vmdq_conf_default.rxmode.mtu = MAX_MTU;
- }
break;
case OPT_STATS_NUM:
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ce0ed509d28f..c2b624aba1a0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1485,13 +1484,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3639,7 +3631,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3667,27 +3658,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9fba2bd73c84..4d0f956a4b28 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1389,7 +1389,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
* [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (2 preceding siblings ...)
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
` (5 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon; +Cc: Ferruh Yigit, dev, Huisong Li
Both 'rte_eth_dev_configure()' and 'rte_eth_dev_set_mtu()' set the MTU
but have slightly different checks; for example, one checks the minimum
MTU against RTE_ETHER_MIN_MTU while the other uses RTE_ETHER_MIN_LEN.
Move the checks into a common function to unify them. This also gives
the benefit of common error logs.
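As a usage sketch (illustrative assumptions only, not part of this
patch), an application now hits the same validation whether the MTU
comes from 'rte_eth_dev_configure()' or from a later set call:

#include <stdio.h>
#include <rte_ethdev.h>

/*
 * Illustrative wrapper: an out-of-range MTU is rejected by the common
 * helper with the same bounds in both paths. 'port_id' and the error
 * reporting style are assumptions for the example.
 */
static int
example_set_mtu(uint16_t port_id, uint16_t mtu)
{
        int ret = rte_eth_dev_set_mtu(port_id, mtu);

        if (ret != 0)
                fprintf(stderr, "port %u: MTU %u rejected (%d)\n",
                        port_id, mtu, ret);
        return ret;
}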
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 54 insertions(+), 30 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c2b624aba1a0..0a6e952722ae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4d0f956a4b28..50e124ff631f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
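
A minimal application-side sketch of how the unified check above is
exercised (this assumes the post-series API where 'rxmode.mtu' replaces
'max_rx_pkt_len'; the helper name 'app_port_init' is illustrative, not
part of the patch):

    #include <rte_ethdev.h>

    static int
    app_port_init(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_conf conf = {0};
        int ret;

        /* Requested MTU; eth_dev_validate_mtu() rejects values outside
         * [min_mtu, max_mtu] and frames larger than max_rx_pktlen. */
        conf.rxmode.mtu = mtu;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret != 0)
            return ret;

        /* The same validation now applies when the MTU is changed later. */
        return rte_eth_dev_set_mtu(port_id, mtu);
    }

Both paths end up storing the result in dev->data->mtu, which is what the
PMDs read from then on (see the hunks above).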
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (3 preceding siblings ...)
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-01 14:36 ` Ferruh Yigit
2021-10-01 15:07 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
` (4 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-01 14:36 UTC (permalink / raw)
To: Andrew Rybchenko, Thomas Monjalon, Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove the unused 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index af67db49f7fb..2ff5ea3e7bc5 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -516,7 +516,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (4 preceding siblings ...)
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-01 15:07 ` Stephen Hemminger
2021-10-05 16:46 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (3 subsequent siblings)
9 siblings, 1 reply; 112+ messages in thread
From: Stephen Hemminger @ 2021-10-01 15:07 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Andrew Rybchenko, Thomas Monjalon, Jerin Jacob, Xiaoyun Li,
Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh, dev
On Fri, 1 Oct 2021 15:36:18 +0100
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
One other issue which DPDK inherits from Linux and BSD is that
MTU (Maximum Transmission Unit) is overloaded to mean MRU (Maximum Receive Unit).
On Linux, network devices are allowed to receive packets of any size they
want. MTU is used as a hint that the device must accept at least MTU-sized
packets on receive, so MRU >= MTU.
In practice, and in documentation, MRU and MTU are used synonymously.
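
As a concrete illustration of that relationship, a minimal sketch of how
an application might derive the on-wire Rx frame size from the MTU it
configures, mirroring eth_dev_get_overhead_len() from this series (the
helper name is illustrative only):

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static uint32_t
    app_frame_size_from_mtu(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_dev_info info;
        uint16_t overhead;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return 0;

        /* Prefer the device-reported overhead when both limits look valid,
         * otherwise fall back to plain L2 header + CRC. */
        if (info.max_mtu != UINT16_MAX && info.max_rx_pktlen > info.max_mtu)
            overhead = info.max_rx_pktlen - info.max_mtu;
        else
            overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        return (uint32_t)mtu + overhead;
    }

Whether the device also accepts frames larger than this (the MRU point
above) is not visible through this calculation.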
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-04 5:08 ` Somnath Kotur
0 siblings, 0 replies; 112+ messages in thread
From: Somnath Kotur @ 2021-10-04 5:08 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Andrew Rybchenko, Thomas Monjalon, Somalapuram Amaranath,
Ajit Khaparde, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, dev
On Fri, Oct 1, 2021 at 8:06 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
> and the application should enable the jumbo frame offload for it.
>
> When the jumbo frame offload is not enabled by the application but an
> MTU bigger than RTE_ETHER_MTU is requested, there are two options:
> either fail or enable the jumbo frame offload implicitly.
>
> Enabling the jumbo frame offload implicitly is what many drivers already
> do, since setting a big MTU value already implies it, and this increases
> usability.
>
> This patch moves this logic from the drivers to the library, both to
> reduce the duplicated code in the drivers and to make the behaviour more
> visible.
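
In other words, once this logic is in the library an application that
wants jumbo frames only has to request the MTU; a minimal sketch,
assuming the behaviour described above (the helper name is illustrative,
not part of the patch):

    #include <rte_ethdev.h>

    static int
    app_enable_jumbo(uint16_t port_id, uint16_t mtu)
    {
        /* No offload juggling here: the library now sets or clears
         * DEV_RX_OFFLOAD_JUMBO_FRAME on the application's behalf,
         * based on mtu > RTE_ETHER_MTU. */
        return rte_eth_dev_set_mtu(port_id, mtu);
    }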
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
> drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
> drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
> drivers/net/dpaa/dpaa_ethdev.c | 7 -------
> drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
> drivers/net/e1000/em_ethdev.c | 9 ++-------
> drivers/net/e1000/igb_ethdev.c | 9 ++-------
> drivers/net/enetc/enetc_ethdev.c | 7 -------
> drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
> drivers/net/hns3/hns3_ethdev.c | 8 --------
> drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
> drivers/net/i40e/i40e_ethdev.c | 5 -----
> drivers/net/iavf/iavf_ethdev.c | 7 -------
> drivers/net/ice/ice_ethdev.c | 5 -----
> drivers/net/igc/igc_ethdev.c | 9 ++-------
> drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
> drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
> drivers/net/liquidio/lio_ethdev.c | 7 -------
> drivers/net/nfp/nfp_common.c | 6 ------
> drivers/net/octeontx/octeontx_ethdev.c | 5 -----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 4 ----
> drivers/net/sfc/sfc_ethdev.c | 9 ---------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 6 ------
> lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
> 27 files changed, 29 insertions(+), 166 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> index 76aeec077f2b..2960834b4539 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> val = 1;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> val = 0;
> - }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> return 0;
> }
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 8c6f20b75aed..07ee19938930 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> return -EINVAL;
> }
>
> - if (new_mtu > RTE_ETHER_MTU) {
> + if (new_mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
Acked-by: Somnath kotur <somnath.kotur@broadcom.com>
> /* Is there a change in mtu setting? */
> if (eth_dev->data->mtu == new_mtu)
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 695d0d6fd3e2..349896f6a1bf 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> plt_err("Failed to max Rx frame length, rc=%d", rc);
> goto exit;
> }
> -
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 8cf61f12a8d6..0c9cc2f5bb3f 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* set to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> return err;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index adbdb87baab9..57b09f16ba44 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 758a14e0ad2d..df44bb204f65 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> index 6f418a36aa04..1b41dd04df5a 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> return 0;
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index 4c114bf90fc7..a061d0529dd1 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index cdb9783b5372..fbcbbb6c0533 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads &=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 2d8271cb6095..4b30dfa222a8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index 4ead227f9122..e1d465de8234 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> struct hns3_adapter *hns = dev->data->dev_private;
> uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
> struct hns3_hw *hw = &hns->hw;
> - bool is_jumbo_frame;
> int ret;
>
> if (dev->data->dev_started) {
> @@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - if (is_jumbo_frame)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index 0b5db486f8d6..3438b3650de6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rte_spinlock_unlock(&hw->lock);
> return ret;
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 2033f8f55cd6..e14859db9cfd 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11774,11 +11774,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 0eabce275d92..844d26d87ba6 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index e6d5128599e1..83e8f0da687c 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index b26723064b07..dcbc26b8186e 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rctl = IGC_READ_REG(hw, IGC_RCTL);
> -
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 3634c0c8c5f0..e8a33f04bd69 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> mtu);
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 31e67d86e77b..574a7bffc9cb 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> - }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index 976916f870a5..3a516c52d199 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return -1;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index a2031a7a82cc..850ec7655f82 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* writing to configuration space */
> nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index 69c3bda12df8..fb65be2c2dc3 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
> frame_size);
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index cf7804157198..293306c7be2a 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return rc;
> }
>
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 4b971fd1fe3c..6886a4e5efb4 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (!dev->data->dev_started && restart) {
> qede_dev_start(dev);
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 1f55c90b419d..2ee80e2dc41f 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> }
> }
>
> - /*
> - * The driver does not use it, but other PMDs update jumbo frame
> - * flag when MTU is set.
> - */
> - if (mtu > RTE_ETHER_MTU) {
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index c8ae95a61306..b501fee5332c 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> struct nicvf *nic = nicvf_pmd_priv(dev);
> uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
> size_t i;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index 269de9f848dd..35b98097c3a4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> TXGBE_FRAME_SIZE_MAX);
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d0584af52e3..1740bab98a83 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> int ret;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_dev *dev;
> + int is_jumbo_frame_capable = 0;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
> +
> + if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> + is_jumbo_frame_capable = 1;
> }
>
> + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> + return -EINVAL;
> +
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> - if (!ret)
> + if (ret == 0) {
> dev->data->mtu = mtu;
>
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |=
> + DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &=
> + ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> return eth_err(port_id, ret);
> }
>
> --
> 2.31.1
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-04 5:09 ` Somnath Kotur
0 siblings, 0 replies; 112+ messages in thread
From: Somnath Kotur @ 2021-10-04 5:09 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Andrew Rybchenko, Thomas Monjalon, Somalapuram Amaranath,
Ajit Khaparde, Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena,
Haiyue Wang, Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
Rosen Xu, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Nithin Dabilpuram,
Kiran Kumar K, Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, dev
On Fri, Oct 1, 2021 at 8:07 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> Move requested MTU value check to the API to prevent the duplicated
> code.
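
A condensed sketch of the check that this patch centralizes in
rte_eth_dev_set_mtu() (based on the lib/ethdev hunk at the end of this
patch; the overhead_len argument stands in for eth_dev_get_overhead_len()
and the function name is illustrative):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    mtu_within_dev_limits(const struct rte_eth_dev_info *dev_info,
                          uint16_t mtu, uint16_t overhead_len)
    {
        uint32_t frame_size = (uint32_t)mtu + overhead_len;

        /* Range reported by the driver via dev_infos_get(). */
        if (mtu < dev_info->min_mtu || mtu > dev_info->max_mtu)
            return -EINVAL;
        /* The resulting frame must still fit the device limit. */
        if (frame_size > dev_info->max_rx_pktlen)
            return -EINVAL;

        return 0;
    }

With the check done once here, each driver's own copy of it can be
deleted, as the per-driver hunks below show.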
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
> drivers/net/bnxt/bnxt_ethdev.c | 2 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
> drivers/net/dpaa/dpaa_ethdev.c | 2 --
> drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
> drivers/net/e1000/em_ethdev.c | 10 ----------
> drivers/net/e1000/igb_ethdev.c | 11 -----------
> drivers/net/enetc/enetc_ethdev.c | 4 ----
> drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
> drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
> drivers/net/iavf/iavf_ethdev.c | 10 ++--------
> drivers/net/ice/ice_ethdev.c | 14 +++-----------
> drivers/net/igc/igc_ethdev.c | 5 -----
> drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
> drivers/net/liquidio/lio_ethdev.c | 10 ----------
> drivers/net/nfp/nfp_common.c | 4 ----
> drivers/net/octeontx/octeontx_ethdev.c | 4 ----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
> drivers/net/qede/qede_ethdev.c | 12 ------------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
> lib/ethdev/rte_ethdev.c | 9 +++++++++
> 22 files changed, 25 insertions(+), 155 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> index 2960834b4539..c36cd7b1d2f0 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
>
> static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - struct rte_eth_dev_info dev_info;
> struct axgbe_port *pdata = dev->data->dev_private;
> - uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> - unsigned int val = 0;
> - axgbe_dev_info_get(dev, &dev_info);
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> + unsigned int val;
> +
> /* mtu setting is forbidden if port is start */
> if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU)
> - val = 1;
> - else
> - val = 0;
> + val = mtu > RTE_ETHER_MTU ? 1 : 0;
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> +
> return 0;
> }
>
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 07ee19938930..dc33b961320a 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3025,7 +3025,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> - uint32_t rc = 0;
> + uint32_t rc;
> uint32_t i;
>
> rc = is_bnxt_in_error(bp);
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 0c9cc2f5bb3f..70b879fed100 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> - struct rte_eth_dev_info dev_info;
> - int err;
> uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>
> - err = cxgbe_dev_info_get(eth_dev, &dev_info);
> - if (err != 0)
> - return err;
> -
> - /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> - err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> + return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> - return err;
> }
>
> /*
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 57b09f16ba44..3172e3b2de87 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>
> PMD_INIT_FUNC_TRACE();
>
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
> - return -EINVAL;
> /*
> * Refuse mtu that requires the support of scattered packets
> * when this feature has not been enabled before.
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index df44bb204f65..c28f03641bbc 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
> - return -EINVAL;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> index 1b41dd04df5a..6ebef55588bc 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
> static int
> eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - struct rte_eth_dev_info dev_info;
> struct e1000_hw *hw;
> uint32_t frame_size;
> uint32_t rctl;
> - int ret;
> -
> - ret = eth_em_infos_get(dev, &dev_info);
> - if (ret != 0)
> - return ret;
>
> frame_size = mtu + E1000_ETH_OVERHEAD;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index a061d0529dd1..3164fde5b939 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4363,9 +4363,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> uint32_t rctl;
> struct e1000_hw *hw;
> - struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
> - int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> @@ -4374,15 +4372,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (hw->mac.type == e1000_82571)
> return -ENOTSUP;
> #endif
> - ret = eth_igb_infos_get(dev, &dev_info);
> - if (ret != 0)
> - return ret;
> -
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index fbcbbb6c0533..a7372c1787c7 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> struct enetc_hw *enetc_hw = &hw->hw;
> uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
> - return -EINVAL;
> -
> /*
> * Refuse mtu that requires the support of scattered packets
> * when this feature has not been enabled before.
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 4b30dfa222a8..79987bec273c 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1530,17 +1530,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - int ret = 0;
> + int ret;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
> dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
>
> - if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
> - PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
> - mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
> - return -EINVAL;
> - }
> -
> ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
> if (ret) {
> PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index e14859db9cfd..b93e314d3d0c 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11756,25 +11756,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
> }
>
> static int
> -i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - struct rte_eth_dev_data *dev_data = pf->dev_data;
> - uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
> - int ret = 0;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> - if (dev_data->dev_started) {
> + if (dev->data->dev_started != 0) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
> - dev_data->port_id);
> + dev->data->port_id);
> return -EBUSY;
> }
>
> - return ret;
> + return 0;
> }
>
> /* Restore ethertype filter */
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 844d26d87ba6..2d43c666fdbb 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
> }
>
> static int
> -iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
> - int ret = 0;
> -
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port must be stopped before configuration");
> return -EBUSY;
> }
>
> - return ret;
> + return 0;
> }
>
> static int
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 83e8f0da687c..02c06d4da8bc 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
> }
>
> static int
> -ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
> {
> - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - struct rte_eth_dev_data *dev_data = pf->dev_data;
> - uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is start */
> - if (dev_data->dev_started) {
> + if (dev->data->dev_started != 0) {
> PMD_DRV_LOG(ERR,
> "port %d must be stopped before configuration",
> - dev_data->port_id);
> + dev->data->port_id);
> return -EBUSY;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index dcbc26b8186e..e279ae1fff1d 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
> frame_size += VLAN_TAG_SIZE;
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > MAX_RX_JUMBO_FRAME_SIZE)
> - return -EINVAL;
> -
> /*
> * If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index e8a33f04bd69..377b96c0236a 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
> int ret = 0;
> struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
> struct rte_eth_dev_data *dev_data = ethdev->data;
> - uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
> -
> - /* check if mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
> - return -EINVAL;
>
> /* mtu setting is forbidden if port is start */
> /* make sure NIC port is stopped */
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index 3a516c52d199..9d1d811a2e37 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -434,7 +434,6 @@ static int
> lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> - uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
> struct lio_dev_ctrl_cmd ctrl_cmd;
> struct lio_ctrl_pkt ctrl_pkt;
>
> @@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* check if VF MTU is within allowed range.
> - * New value should not exceed PF MTU.
> - */
> - if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
> - lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
> - RTE_ETHER_MIN_MTU, pf_mtu);
> - return -EINVAL;
> - }
> -
> /* flush added to prevent cmd failure
> * incase the queue is full
> */
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index 850ec7655f82..b1ce35b334da 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>
> hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
> - return -EINVAL;
> -
> /* mtu setting is forbidden if port is started */
> if (dev->data->dev_started) {
> PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index fb65be2c2dc3..b2355fa695bc 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> struct rte_eth_dev_data *data = eth_dev->data;
> int rc = 0;
>
> - /* Check if MTU is within the allowed range */
> - if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
> - return -EINVAL;
> -
> buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
>
> /* Refuse MTU that requires the support of scattered packets
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 293306c7be2a..206da6f7cfda 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (dev->configured && otx2_ethdev_is_ptp_en(dev))
> frame_size += NIX_TIMESYNC_RX_OFFSET;
>
> - /* Check if MTU is within the allowed range */
> - if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
> - return -EINVAL;
> -
> buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
>
> /* Refuse MTU that requires the support of scattered packets
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 6886a4e5efb4..84e23ff03418 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> - struct rte_eth_dev_info dev_info = {0};
> struct qede_fastpath *fp;
> uint32_t frame_size;
> uint16_t bufsz;
> @@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> int i, rc;
>
> PMD_INIT_FUNC_TRACE(edev);
> - rc = qede_dev_info_get(dev, &dev_info);
> - if (rc != 0) {
> - DP_ERR(edev, "Error during getting ethernet device info\n");
> - return rc;
> - }
>
> frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
> - DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
> - mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
> - QEDE_ETH_OVERHEAD);
> - return -EINVAL;
> - }
> if (!dev->data->scattered_rx &&
> frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
> DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index b501fee5332c..44c6b1c72354 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>
> PMD_INIT_FUNC_TRACE();
>
> - if (frame_size > NIC_HW_MAX_FRS)
> - return -EINVAL;
> -
> - if (frame_size < NIC_HW_MIN_FRS)
> - return -EINVAL;
> -
> buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
>
> /*
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index 35b98097c3a4..c6fcb1871981 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3463,18 +3463,8 @@ static int
> txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
> - struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> struct rte_eth_dev_data *dev_data = dev->data;
> - int ret;
> -
> - ret = txgbe_dev_info_get(dev, &dev_info);
> - if (ret != 0)
> - return ret;
> -
> - /* check that mtu is within the allowed range */
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
>
> /* If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 1740bab98a83..ce0ed509d28f 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3652,6 +3652,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> * which relies on dev->dev_ops->dev_infos_get.
> */
> if (*dev->dev_ops->dev_infos_get != NULL) {
> + uint16_t overhead_len;
> + uint32_t frame_size;
> +
> ret = rte_eth_dev_info_get(port_id, &dev_info);
> if (ret != 0)
> return ret;
> @@ -3659,6 +3662,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
>
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + frame_size = mtu + overhead_len;
> + if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> + return -EINVAL;
> +
> if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> is_jumbo_frame_capable = 1;
> }
> --
> 2.31.1
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag
[not found] ` <CAOBf=muYkU2dwgi3iC8Q7pdSNTJsMUwWYdXj14KeN_=_mUGa0w@mail.gmail.com>
@ 2021-10-04 7:55 ` Somnath Kotur
2021-10-05 16:48 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Somnath Kotur @ 2021-10-04 7:55 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev
On Mon, Oct 4, 2021 at 10:42 AM Somnath Kotur
<somnath.kotur@broadcom.com> wrote:
>
> On Fri, Oct 1, 2021 at 8:07 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
> >
> > Instead of drivers announce this capability, application can deduct the
> > capability by checking reported 'dev_info.max_mtu' or
> > 'dev_info.max_rx_pktlen'.
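
A possible application-side replacement for the removed flag, following
the description above (a sketch only; 'port_supports_jumbo' is an
illustrative name, not an API added by this patch):

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static bool
    port_supports_jumbo(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return false;

        /* A device that accepts an MTU above the standard Ethernet MTU
         * can receive jumbo frames. */
        return info.max_mtu > RTE_ETHER_MTU;
    }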
> >
> > And instead of application explicitly set this flag to enable jumbo
> application setting this flag explicitly sounds better?
> > frames, this can be deducted by driver by comparing requested 'mtu' to
> typo, think you meant 'deduced' ? :)
>
> > 'RTE_ETHER_MTU'.
> >
> > Removing this additional configuration for simplification.
> >
> > Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> > ---
> > app/test-eventdev/test_pipeline_common.c | 2 -
> > app/test-pmd/cmdline.c | 2 +-
> > app/test-pmd/config.c | 25 +---------
> > app/test-pmd/testpmd.c | 48 +------------------
> > app/test-pmd/testpmd.h | 2 +-
> > doc/guides/howto/debug_troubleshoot.rst | 2 -
> > doc/guides/nics/bnxt.rst | 1 -
> > doc/guides/nics/features.rst | 3 +-
> > drivers/net/atlantic/atl_ethdev.c | 1 -
> > drivers/net/axgbe/axgbe_ethdev.c | 1 -
> > drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
> > drivers/net/bnxt/bnxt.h | 1 -
> > drivers/net/bnxt/bnxt_ethdev.c | 10 +---
> > drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
> > drivers/net/cnxk/cnxk_ethdev.h | 5 +-
> > drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
> > drivers/net/cxgbe/cxgbe.h | 1 -
> > drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
> > drivers/net/cxgbe/sge.c | 5 +-
> > drivers/net/dpaa/dpaa_ethdev.c | 2 -
> > drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
> > drivers/net/e1000/e1000_ethdev.h | 4 +-
> > drivers/net/e1000/em_ethdev.c | 4 +-
> > drivers/net/e1000/em_rxtx.c | 19 +++-----
> > drivers/net/e1000/igb_rxtx.c | 3 +-
> > drivers/net/ena/ena_ethdev.c | 1 -
> > drivers/net/enetc/enetc_ethdev.c | 3 +-
> > drivers/net/enic/enic_res.c | 1 -
> > drivers/net/failsafe/failsafe_ops.c | 2 -
> > drivers/net/fm10k/fm10k_ethdev.c | 1 -
> > drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
> > drivers/net/hns3/hns3_ethdev.c | 1 -
> > drivers/net/hns3/hns3_ethdev_vf.c | 1 -
> > drivers/net/i40e/i40e_ethdev.c | 1 -
> > drivers/net/i40e/i40e_rxtx.c | 2 +-
> > drivers/net/iavf/iavf_ethdev.c | 3 +-
> > drivers/net/ice/ice_dcf_ethdev.c | 3 +-
> > drivers/net/ice/ice_dcf_vf_representor.c | 1 -
> > drivers/net/ice/ice_ethdev.c | 1 -
> > drivers/net/ice/ice_rxtx.c | 3 +-
> > drivers/net/igc/igc_ethdev.h | 1 -
> > drivers/net/igc/igc_txrx.c | 2 +-
> > drivers/net/ionic/ionic_ethdev.c | 1 -
> > drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
> > drivers/net/ixgbe/ixgbe_pf.c | 9 +---
> > drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
> > drivers/net/mlx4/mlx4_rxq.c | 1 -
> > drivers/net/mlx5/mlx5_rxq.c | 1 -
> > drivers/net/mvneta/mvneta_ethdev.h | 3 +-
> > drivers/net/mvpp2/mrvl_ethdev.c | 1 -
> > drivers/net/nfp/nfp_common.c | 6 +--
> > drivers/net/octeontx/octeontx_ethdev.h | 1 -
> > drivers/net/octeontx2/otx2_ethdev.h | 1 -
> > drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
> > drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
> > drivers/net/qede/qede_ethdev.c | 1 -
> > drivers/net/sfc/sfc_rx.c | 2 -
> > drivers/net/thunderx/nicvf_ethdev.h | 1 -
> > drivers/net/txgbe/txgbe_rxtx.c | 1 -
> > drivers/net/virtio/virtio_ethdev.c | 1 -
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
> > examples/ip_fragmentation/main.c | 3 +-
> > examples/ip_reassembly/main.c | 3 +-
> > examples/ipsec-secgw/ipsec-secgw.c | 2 -
> > examples/ipv4_multicast/main.c | 1 -
> > examples/kni/main.c | 5 --
> > examples/l3fwd-acl/main.c | 4 +-
> > examples/l3fwd-graph/main.c | 4 +-
> > examples/l3fwd-power/main.c | 4 +-
> > examples/l3fwd/main.c | 4 +-
> > .../performance-thread/l3fwd-thread/main.c | 4 +-
> > examples/vhost/main.c | 5 +-
> > lib/ethdev/rte_ethdev.c | 26 +---------
> > lib/ethdev/rte_ethdev.h | 1 -
> > 75 files changed, 47 insertions(+), 259 deletions(-)
> >
> > diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> > index 5fcea74b4d43..2775e72c580d 100644
> > --- a/app/test-eventdev/test_pipeline_common.c
> > +++ b/app/test-eventdev/test_pipeline_common.c
> > @@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> >
> > port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> > RTE_ETHER_CRC_LEN;
> > - if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> > - port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > t->internal_port = 1;
> > RTE_ETH_FOREACH_DEV(i) {
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> > index a677451073ae..117945c2c61e 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -1923,7 +1923,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
> > return;
> > }
> >
> > - update_jumbo_frame_offload(port_id, res->value);
> > + update_mtu_from_frame_size(port_id, res->value);
> > }
> >
> > init_port_config();
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index db3eeffa0093..e890fadc716c 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
> > void
> > port_mtu_set(portid_t port_id, uint16_t mtu)
> > {
> > + struct rte_port *port = &ports[port_id];
> > int diag;
> > - struct rte_port *rte_port = &ports[port_id];
> > - struct rte_eth_dev_info dev_info;
> > - int ret;
> >
> > if (port_id_is_invalid(port_id, ENABLED_WARN))
> > return;
> >
> > - ret = eth_dev_info_get_print_err(port_id, &dev_info);
> > - if (ret != 0)
> > - return;
> > -
> > - if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
> > - fprintf(stderr,
> > - "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
> > - mtu, dev_info.min_mtu, dev_info.max_mtu);
> > - return;
> > - }
> > diag = rte_eth_dev_set_mtu(port_id, mtu);
> > if (diag != 0) {
> > fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
> > return;
> > }
> >
> > - rte_port->dev_conf.rxmode.mtu = mtu;
> > -
> > - if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > - if (mtu > RTE_ETHER_MTU)
> > - rte_port->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - rte_port->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> > + port->dev_conf.rxmode.mtu = mtu;
> > }
> >
> > /* Generic flow management functions. */
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 8c23cfe7c3da..d2a2a9ac6cda 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -1503,12 +1503,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
> > if (ret != 0)
> > rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
> >
> > - ret = update_jumbo_frame_offload(pid, 0);
> > - if (ret != 0)
> > - fprintf(stderr,
> > - "Updating jumbo frame offload failed for port %u\n",
> > - pid);
> > -
> > if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
> > port->dev_conf.txmode.offloads &=
> > ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> > @@ -3463,24 +3457,18 @@ rxtx_port_config(struct rte_port *port)
> > }
> >
> > /*
> > - * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> > - * MTU is also aligned.
> > + * Helper function to set MTU from frame size
> > *
> > * port->dev_info should be set before calling this function.
> > *
> > - * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> > - * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> > - *
> > * return 0 on success, negative on error
> > */
> > int
> > -update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> > +update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
> > {
> > struct rte_port *port = &ports[portid];
> > uint32_t eth_overhead;
> > - uint64_t rx_offloads;
> > uint16_t mtu, new_mtu;
> > - bool on;
> >
> > eth_overhead = get_eth_overhead(&port->dev_info);
> >
> > @@ -3489,40 +3477,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> > return -1;
> > }
> >
> > - if (max_rx_pktlen == 0)
> > - max_rx_pktlen = mtu + eth_overhead;
> > -
> > - rx_offloads = port->dev_conf.rxmode.offloads;
> > new_mtu = max_rx_pktlen - eth_overhead;
> >
> > - if (new_mtu <= RTE_ETHER_MTU) {
> > - rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - on = false;
> > - } else {
> > - if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> > - fprintf(stderr,
> > - "Frame size (%u) is not supported by port %u\n",
> > - max_rx_pktlen, portid);
> > - return -1;
> > - }
> > - rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - on = true;
> > - }
> > -
> > - if (rx_offloads != port->dev_conf.rxmode.offloads) {
> > - uint16_t qid;
> > -
> > - port->dev_conf.rxmode.offloads = rx_offloads;
> > -
> > - /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
> > - for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
> > - if (on)
> > - port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> > - }
> > -
> > if (mtu == new_mtu)
> > return 0;
> >
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > index 17562215c733..eed9d031fd9a 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
> > __rte_unused void *user_param);
> > void add_tx_dynf_callback(portid_t portid);
> > void remove_tx_dynf_callback(portid_t portid);
> > -int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
> > +int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
> >
> > /*
> > * Work-around of a compilation error with ICC on invocations of the
> > diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
> > index 457ac441429a..df69fa8bcc24 100644
> > --- a/doc/guides/howto/debug_troubleshoot.rst
> > +++ b/doc/guides/howto/debug_troubleshoot.rst
> > @@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
> > * Identify if port Speed and Duplex is matching to desired values with
> > ``rte_eth_link_get``.
> >
> > - * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
> > -
> > * Check promiscuous mode if the drops do not occur for unique MAC address
> > with ``rte_eth_promiscuous_get``.
> >
> > diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> > index e75f4fa9e3bc..8f10c6c78a1f 100644
> > --- a/doc/guides/nics/bnxt.rst
> > +++ b/doc/guides/nics/bnxt.rst
> > @@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
> >
> > DEV_RX_OFFLOAD_VLAN_STRIP
> > DEV_RX_OFFLOAD_KEEP_CRC
> > - DEV_RX_OFFLOAD_JUMBO_FRAME
> > DEV_RX_OFFLOAD_IPV4_CKSUM
> > DEV_RX_OFFLOAD_UDP_CKSUM
> > DEV_RX_OFFLOAD_TCP_CKSUM
> > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> > index 483cb7da576f..9580445828bf 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -165,8 +165,7 @@ Jumbo frame
> >
> > Supports Rx jumbo frames.
> >
> > -* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> > - ``dev_conf.rxmode.mtu``.
> > +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
> > * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> > * **[related] API**: ``rte_eth_dev_set_mtu()``.
> >
> > diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
> > index 3f654c071566..5a198f53fce7 100644
> > --- a/drivers/net/atlantic/atl_ethdev.c
> > +++ b/drivers/net/atlantic/atl_ethdev.c
> > @@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
> > | DEV_RX_OFFLOAD_IPV4_CKSUM \
> > | DEV_RX_OFFLOAD_UDP_CKSUM \
> > | DEV_RX_OFFLOAD_TCP_CKSUM \
> > - | DEV_RX_OFFLOAD_JUMBO_FRAME \
> > | DEV_RX_OFFLOAD_MACSEC_STRIP \
> > | DEV_RX_OFFLOAD_VLAN_FILTER)
> >
> > diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> > index c36cd7b1d2f0..0bc9e5eeeb10 100644
> > --- a/drivers/net/axgbe/axgbe_ethdev.c
> > +++ b/drivers/net/axgbe/axgbe_ethdev.c
> > @@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_KEEP_CRC;
> >
> > diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
> > index 009a94e9a8fa..50ff04bb2241 100644
> > --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> > +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> > @@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
> > dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
> > dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
> > - dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
> > dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
> > diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
> > index 5121d05da65f..6743cf92b0e6 100644
> > --- a/drivers/net/bnxt/bnxt.h
> > +++ b/drivers/net/bnxt/bnxt.h
> > @@ -595,7 +595,6 @@ struct bnxt_rep_info {
> > DEV_RX_OFFLOAD_TCP_CKSUM | \
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
> > DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_KEEP_CRC | \
> > DEV_RX_OFFLOAD_VLAN_EXTEND | \
> > DEV_RX_OFFLOAD_TCP_LRO | \
> > diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> > index dc33b961320a..e9d04f354a39 100644
> > --- a/drivers/net/bnxt/bnxt_ethdev.c
> > +++ b/drivers/net/bnxt/bnxt_ethdev.c
> > @@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
> > unsigned int i, j;
> > int rc;
> >
> > - if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
> > - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
> > bp->flags |= BNXT_FLAG_JUMBO;
> > - } else {
> > - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + else
> > bp->flags &= ~BNXT_FLAG_JUMBO;
> > - }
> >
> > /* THOR does not support ring groups.
> > * But we will use the array to save RSS context IDs.
> > @@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
> > if (eth_dev->data->dev_conf.rxmode.offloads &
> > ~(DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
>
> > diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> > index 412acff42f65..2f3a1759419f 100644
> > --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> > +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> > @@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
> > slave_eth_dev->data->dev_conf.rxmode.mtu =
> > bonded_eth_dev->data->dev_conf.rxmode.mtu;
> >
> > - if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> > - DEV_RX_OFFLOAD_JUMBO_FRAME)
> > - slave_eth_dev->data->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - slave_eth_dev->data->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > -
> > nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
> > nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
> >
> > diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
> > index 10e05e6b5edd..fa8c48f1eeb0 100644
> > --- a/drivers/net/cnxk/cnxk_ethdev.h
> > +++ b/drivers/net/cnxk/cnxk_ethdev.h
> > @@ -75,9 +75,8 @@
> > #define CNXK_NIX_RX_OFFLOAD_CAPA \
> > (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> > - DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
> > - DEV_RX_OFFLOAD_VLAN_STRIP)
> > + DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
> > + DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
> >
> > #define RSS_IPV4_ENABLE \
> > (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
> > diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
> > index 349896f6a1bf..d0924df76152 100644
> > --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> > +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> > @@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
> > {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
> > {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
> > {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
> > - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
> > {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> > {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> > {DEV_RX_OFFLOAD_SECURITY, " Security,"},
> > diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
> > index 7c89a028bf16..37625c5bfb69 100644
> > --- a/drivers/net/cxgbe/cxgbe.h
> > +++ b/drivers/net/cxgbe/cxgbe.h
> > @@ -51,7 +51,6 @@
> > DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > DEV_RX_OFFLOAD_UDP_CKSUM | \
> > DEV_RX_OFFLOAD_TCP_CKSUM | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > DEV_RX_OFFLOAD_RSS_HASH)
> >
> > diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> > index 70b879fed100..1374f32b6826 100644
> > --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> > +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> > @@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> > if ((&rxq->fl) != NULL)
> > rxq->fl.size = temp_nb_desc;
> >
> > - /* Set to jumbo mode if necessary */
> > - if (eth_dev->data->mtu > RTE_ETHER_MTU)
> > - eth_dev->data->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - eth_dev->data->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > -
> > err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
> > &rxq->fl, NULL,
> > is_pf4(adapter) ?
> > diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> > index 830f5192474d..21b8fe61c9a7 100644
> > --- a/drivers/net/cxgbe/sge.c
> > +++ b/drivers/net/cxgbe/sge.c
> > @@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
> > struct rte_mbuf *buf_bulk[n];
> > int ret, i;
> > struct rte_pktmbuf_pool_private *mbp_priv;
> > - u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > /* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
> > mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
> > - if (jumbo_en &&
> > - ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
> > + if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
> > buf_size_idx = RX_LARGE_MTU_BUF;
> >
> > ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
> > diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> > index 3172e3b2de87..defc072072af 100644
> > --- a/drivers/net/dpaa/dpaa_ethdev.c
> > +++ b/drivers/net/dpaa/dpaa_ethdev.c
> > @@ -54,7 +54,6 @@
> >
> > /* Supported Rx offloads */
> > static uint64_t dev_rx_offloads_sup =
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_SCATTER;
> >
> > /* Rx offloads which cannot be disabled */
> > @@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
> > uint64_t flags;
> > const char *output;
> > } rx_offload_map[] = {
> > - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
> > {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> > {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
> > {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
> > diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> > index c28f03641bbc..dc25eefb33b0 100644
> > --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> > +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> > @@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
> > DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_TIMESTAMP;
> >
> > /* Rx offloads which cannot be disabled */
> > @@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
> > {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
> > {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
> > {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
> > - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
> > {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> > {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
> > {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
> > diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
> > index 3b4d9c3ee6f4..1ae78fe71f02 100644
> > --- a/drivers/net/e1000/e1000_ethdev.h
> > +++ b/drivers/net/e1000/e1000_ethdev.h
> > @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
> > void em_dev_clear_queues(struct rte_eth_dev *dev);
> > void em_dev_free_queues(struct rte_eth_dev *dev);
> >
> > -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
> > -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
> > +uint64_t em_get_rx_port_offloads_capa(void);
> > +uint64_t em_get_rx_queue_offloads_capa(void);
> >
> > int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> > uint16_t nb_rx_desc, unsigned int socket_id,
> > diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> > index 6ebef55588bc..8a752eef52cf 100644
> > --- a/drivers/net/e1000/em_ethdev.c
> > +++ b/drivers/net/e1000/em_ethdev.c
> > @@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > dev_info->max_rx_queues = 1;
> > dev_info->max_tx_queues = 1;
> >
> > - dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
> > - dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
> > + dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
> > + dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
> > dev_info->rx_queue_offload_capa;
> > dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
> > dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
> > diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
> > index dfd8f2fd0074..e061f80a906a 100644
> > --- a/drivers/net/e1000/em_rxtx.c
> > +++ b/drivers/net/e1000/em_rxtx.c
> > @@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
> > }
> >
> > uint64_t
> > -em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> > +em_get_rx_port_offloads_capa(void)
> > {
> > uint64_t rx_offload_capa;
> > - uint32_t max_rx_pktlen;
> > -
> > - max_rx_pktlen = em_get_max_pktlen(dev);
> >
> > rx_offload_capa =
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > @@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > DEV_RX_OFFLOAD_SCATTER;
> > - if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
> > - rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > return rx_offload_capa;
> > }
> >
> > uint64_t
> > -em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
> > +em_get_rx_queue_offloads_capa(void)
> > {
> > uint64_t rx_queue_offload_capa;
> >
> > @@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
> > * capability be same to per port queue offloading capability
> > * for better convenience.
> > */
> > - rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
> > + rx_queue_offload_capa = em_get_rx_port_offloads_capa();
> >
> > return rx_queue_offload_capa;
> > }
> > @@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> > * to avoid splitting packets that don't fit into
> > * one buffer.
> > */
> > - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
> > + if (dev->data->mtu > RTE_ETHER_MTU ||
> > rctl_bsize < RTE_ETHER_MAX_LEN) {
> > if (!dev->data->scattered_rx)
> > PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> > @@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> > if ((hw->mac.type == e1000_ich9lan ||
> > hw->mac.type == e1000_pch2lan ||
> > hw->mac.type == e1000_ich10lan) &&
> > - rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + dev->data->mtu > RTE_ETHER_MTU) {
> > u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
> > E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
> > E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
> > }
> >
> > if (hw->mac.type == e1000_pch2lan) {
> > - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> > + if (dev->data->mtu > RTE_ETHER_MTU)
> > e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
> > else
> > e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
> > @@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> > /*
> > * Configure support of jumbo frames, if any.
> > */
> > - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> > + if (dev->data->mtu > RTE_ETHER_MTU)
> > rctl |= E1000_RCTL_LPE;
> > else
> > rctl &= ~E1000_RCTL_LPE;
> > diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> > index e9a30d393bd7..dda4d2101adb 100644
> > --- a/drivers/net/e1000/igb_rxtx.c
> > +++ b/drivers/net/e1000/igb_rxtx.c
> > @@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_RSS_HASH;
> > @@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> > * Configure support of jumbo frames, if any.
> > */
> > max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> > - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (dev->data->mtu > RTE_ETHER_MTU) {
> > rctl |= E1000_RCTL_LPE;
> >
> > /*
> > diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> > index 3a9d5031b262..6d1026d31951 100644
> > --- a/drivers/net/ena/ena_ethdev.c
> > +++ b/drivers/net/ena/ena_ethdev.c
> > @@ -1918,7 +1918,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM;
> >
> > - rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
> >
> > /* Inform framework about available features */
> > diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> > index a7372c1787c7..6457677d300a 100644
> > --- a/drivers/net/enetc/enetc_ethdev.c
> > +++ b/drivers/net/enetc/enetc_ethdev.c
> > @@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
> > (DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > - DEV_RX_OFFLOAD_KEEP_CRC |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME);
> > + DEV_RX_OFFLOAD_KEEP_CRC);
> >
> > return 0;
> > }
> > diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
> > index 0493e096d031..c5777772a09e 100644
> > --- a/drivers/net/enic/enic_res.c
> > +++ b/drivers/net/enic/enic_res.c
> > @@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
> > DEV_TX_OFFLOAD_TCP_TSO;
> > enic->rx_offload_capa =
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
> > index 5ff33e03e034..47c5efe9ea77 100644
> > --- a/drivers/net/failsafe/failsafe_ops.c
> > +++ b/drivers/net/failsafe/failsafe_ops.c
> > @@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> > DEV_RX_OFFLOAD_HEADER_SPLIT |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_VLAN_EXTEND |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_TIMESTAMP |
> > DEV_RX_OFFLOAD_SECURITY |
> > @@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> > DEV_RX_OFFLOAD_HEADER_SPLIT |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_VLAN_EXTEND |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_TIMESTAMP |
> > DEV_RX_OFFLOAD_SECURITY |
> > diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> > index 5e4b361ca6c0..093021246286 100644
> > --- a/drivers/net/fm10k/fm10k_ethdev.c
> > +++ b/drivers/net/fm10k/fm10k_ethdev.c
> > @@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_HEADER_SPLIT |
> > DEV_RX_OFFLOAD_RSS_HASH);
> > }
> > diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> > index 79987bec273c..4005414aeb71 100644
> > --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> > +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> > @@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_TCP_LRO |
> > DEV_RX_OFFLOAD_RSS_HASH;
> >
> > diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> > index e1d465de8234..dbd4c54b18c6 100644
> > --- a/drivers/net/hns3/hns3_ethdev.c
> > +++ b/drivers/net/hns3/hns3_ethdev.c
> > @@ -2691,7 +2691,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_RSS_HASH |
> > DEV_RX_OFFLOAD_TCP_LRO);
> > info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> > index 3438b3650de6..eee65ac77399 100644
> > --- a/drivers/net/hns3/hns3_ethdev_vf.c
> > +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> > @@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_RSS_HASH |
> > DEV_RX_OFFLOAD_TCP_LRO);
> > info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> > diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> > index b93e314d3d0c..f27746ae295e 100644
> > --- a/drivers/net/i40e/i40e_ethdev.c
> > +++ b/drivers/net/i40e/i40e_ethdev.c
> > @@ -3760,7 +3760,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_VLAN_EXTEND |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_RSS_HASH;
> >
> > dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> > diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > index 1d27cf2b0a01..69c282baa723 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -2911,7 +2911,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> > rxq->max_pkt_len =
> > RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> > data->mtu + I40E_ETH_OVERHEAD);
> > - if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (data->mtu > RTE_ETHER_MTU) {
> > if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> > rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> > PMD_DRV_LOG(ERR, "maximum packet length must "
> > diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> > index 2d43c666fdbb..2c4103ac7ef9 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
> > /* Check if the jumbo frame and maximum packet length are set
> > * correctly.
> > */
> > - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (dev->data->mtu > RTE_ETHER_MTU) {
> > if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
> > max_pkt_len > IAVF_FRAME_SIZE_MAX) {
> > PMD_DRV_LOG(ERR, "maximum packet length must be "
> > @@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_RSS_HASH;
> >
> > diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> > index 8f14a494203a..b6d79a51fa8c 100644
> > --- a/drivers/net/ice/ice_dcf_ethdev.c
> > +++ b/drivers/net/ice/ice_dcf_ethdev.c
> > @@ -71,7 +71,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
> > /* Check if the jumbo frame and maximum packet length are set
> > * correctly.
> > */
> > - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (dev_data->mtu > RTE_ETHER_MTU) {
> > if (max_pkt_len <= ICE_ETH_MAX_LEN ||
> > max_pkt_len > ICE_FRAME_SIZE_MAX) {
> > PMD_DRV_LOG(ERR, "maximum packet length must be "
> > @@ -682,7 +682,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_RSS_HASH;
> > dev_info->tx_offload_capa =
> > diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
> > index b547c42f9137..d28fedc96e1a 100644
> > --- a/drivers/net/ice/ice_dcf_vf_representor.c
> > +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> > @@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_VLAN_EXTEND |
> > DEV_RX_OFFLOAD_RSS_HASH;
> > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> > index 02c06d4da8bc..9b39e9c023ef 100644
> > --- a/drivers/net/ice/ice_ethdev.c
> > +++ b/drivers/net/ice/ice_ethdev.c
> > @@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >
> > dev_info->rx_offload_capa =
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_VLAN_FILTER;
> > diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> > index f9ef6ce57277..cc7908d32584 100644
> > --- a/drivers/net/ice/ice_rxtx.c
> > +++ b/drivers/net/ice/ice_rxtx.c
> > @@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> > struct ice_rlan_ctx rx_ctx;
> > enum ice_status err;
> > uint16_t buf_size;
> > - struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
> > uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> > uint32_t regval;
> > struct ice_adapter *ad = rxq->vsi->adapter;
> > @@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> > RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> > frame_size);
> >
> > - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (dev_data->mtu > RTE_ETHER_MTU) {
> > if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> > rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
> > PMD_DRV_LOG(ERR, "maximum packet length must "
> > diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> > index b3473b5b1646..5e6c2ff30157 100644
> > --- a/drivers/net/igc/igc_ethdev.h
> > +++ b/drivers/net/igc/igc_ethdev.h
> > @@ -73,7 +73,6 @@ extern "C" {
> > DEV_RX_OFFLOAD_UDP_CKSUM | \
> > DEV_RX_OFFLOAD_TCP_CKSUM | \
> > DEV_RX_OFFLOAD_SCTP_CKSUM | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_KEEP_CRC | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > DEV_RX_OFFLOAD_RSS_HASH)
> > diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> > index 28d3076439c3..30940857eac0 100644
> > --- a/drivers/net/igc/igc_txrx.c
> > +++ b/drivers/net/igc/igc_txrx.c
> > @@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> > IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
> >
> > /* Configure support of jumbo frames, if any. */
> > - if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> > + if (dev->data->mtu > RTE_ETHER_MTU)
> > rctl |= IGC_RCTL_LPE;
> > else
> > rctl &= ~IGC_RCTL_LPE;
> > diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> > index 97447a10e46a..795980cb1ca5 100644
> > --- a/drivers/net/ionic/ionic_ethdev.c
> > +++ b/drivers/net/ionic/ionic_ethdev.c
> > @@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
> > DEV_RX_OFFLOAD_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_SCATTER |
> > diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> > index 377b96c0236a..4e5d234e8c7d 100644
> > --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> > +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> > @@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> > DEV_RX_OFFLOAD_VLAN_EXTEND |
> > - DEV_RX_OFFLOAD_VLAN_FILTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + DEV_RX_OFFLOAD_VLAN_FILTER;
> >
> > dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> > dev_info->tx_offload_capa =
> > diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> > index 574a7bffc9cb..3205c37c3b82 100644
> > --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> > +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> > @@ -6234,7 +6234,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
> > uint16_t queue_idx, uint16_t tx_rate)
> > {
> > struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> > - struct rte_eth_rxmode *rxmode;
> > uint32_t rf_dec, rf_int;
> > uint32_t bcnrc_val;
> > uint16_t link_speed = dev->data->dev_link.link_speed;
> > @@ -6256,14 +6255,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
> > bcnrc_val = 0;
> > }
> >
> > - rxmode = &dev->data->dev_conf.rxmode;
> > /*
> > * Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
> > * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
> > * set as 0x4.
> > */
> > - if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> > - (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> > + if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
> > IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
> > else
> > IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
> > diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> > index 9bcbc445f2d0..6e64f9a0ade2 100644
> > --- a/drivers/net/ixgbe/ixgbe_pf.c
> > +++ b/drivers/net/ixgbe/ixgbe_pf.c
> > @@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> > IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
> > if (max_frs < max_frame) {
> > hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> > - if (max_frame > IXGBE_ETH_MAX_LEN) {
> > - dev->data->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + if (max_frame > IXGBE_ETH_MAX_LEN)
> > hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> > - } else {
> > - dev->data->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + else
> > hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> > - }
> > IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
> >
> > max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > index 03991711fd6e..c223ef37c79f 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -3033,7 +3033,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_RSS_HASH;
> > @@ -5095,7 +5094,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> > /*
> > * Configure jumbo frame support, if any.
> > */
> > - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> > + if (dev->data->mtu > RTE_ETHER_MTU) {
> > hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> > maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> > maxfrs &= 0x0000FFFF;
> > diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> > index 4a5cfd22aa71..e73112c44749 100644
> > --- a/drivers/net/mlx4/mlx4_rxq.c
> > +++ b/drivers/net/mlx4/mlx4_rxq.c
> > @@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
> > {
> > uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_RSS_HASH;
> >
> > if (priv->hw_csum)
> > diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> > index 6f4f351222d3..0cc3bccc0825 100644
> > --- a/drivers/net/mlx5/mlx5_rxq.c
> > +++ b/drivers/net/mlx5/mlx5_rxq.c
> > @@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
> > struct mlx5_dev_config *config = &priv->config;
> > uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
> > DEV_RX_OFFLOAD_TIMESTAMP |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_RSS_HASH);
> >
> > if (!config->mprq.enabled)
> > diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
> > index ef8067790f82..6428f9ff7931 100644
> > --- a/drivers/net/mvneta/mvneta_ethdev.h
> > +++ b/drivers/net/mvneta/mvneta_ethdev.h
> > @@ -54,8 +54,7 @@
> > #define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
> >
> > /** Rx offloads capabilities */
> > -#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > - DEV_RX_OFFLOAD_CHECKSUM)
> > +#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
> >
> > /** Tx offloads capabilities */
> > #define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
> > diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
> > index 5ce71661c84e..ef987b7de1b5 100644
> > --- a/drivers/net/mvpp2/mrvl_ethdev.c
> > +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> > @@ -59,7 +59,6 @@
> >
> > /** Port Rx offload capabilities */
> > #define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_CHECKSUM)
> >
> > /** Port Tx offloads capabilities */
> > diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> > index b1ce35b334da..a0bb5b9640c2 100644
> > --- a/drivers/net/nfp/nfp_common.c
> > +++ b/drivers/net/nfp/nfp_common.c
> > @@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> > ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
> > }
> >
> > - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> > - hw->mtu = dev->data->mtu;
> > + hw->mtu = dev->data->mtu;
> >
> > if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> > ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> > @@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> > .nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
> > };
> >
> > - /* All NFP devices support jumbo frames */
> > - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > -
> > if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
> > dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
> >
> > diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
> > index b73515de37ca..3a02824e3948 100644
> > --- a/drivers/net/octeontx/octeontx_ethdev.h
> > +++ b/drivers/net/octeontx/octeontx_ethdev.h
> > @@ -60,7 +60,6 @@
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_VLAN_FILTER)
> >
> > #define OCTEONTX_TX_OFFLOADS ( \
> > diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
> > index 7871e3d30bda..47ee126ed7fd 100644
> > --- a/drivers/net/octeontx2/otx2_ethdev.h
> > +++ b/drivers/net/octeontx2/otx2_ethdev.h
> > @@ -148,7 +148,6 @@
> > DEV_RX_OFFLOAD_SCTP_CKSUM | \
> > DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> > DEV_RX_OFFLOAD_VLAN_STRIP | \
> > DEV_RX_OFFLOAD_VLAN_FILTER | \
> > diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
> > index a243683d61d3..c65041a16ba7 100644
> > --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
> > +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
> > @@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
> >
> > devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
> > devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
> > - devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
> > + devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
> > devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
> >
> > devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
> > diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
> > index a7d433547e36..aa4dcd33cc79 100644
> > --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
> > +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
> > @@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
> > droq_pkt->l3_len = hdr_lens.l3_len;
> > droq_pkt->l4_len = hdr_lens.l4_len;
> >
> > - if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
> > - !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
> > - rte_pktmbuf_free(droq_pkt);
> > - goto oq_read_fail;
> > - }
> > -
> > if (droq_pkt->nb_segs > 1 &&
> > !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
> > rte_pktmbuf_free(droq_pkt);
> > diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> > index 84e23ff03418..06c3ccf20716 100644
> > --- a/drivers/net/qede/qede_ethdev.c
> > +++ b/drivers/net/qede/qede_ethdev.c
> > @@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> > DEV_RX_OFFLOAD_TCP_LRO |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_VLAN_STRIP |
> > DEV_RX_OFFLOAD_RSS_HASH);
> > diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
> > index 280e8a61f9e0..62b215f62cd6 100644
> > --- a/drivers/net/sfc/sfc_rx.c
> > +++ b/drivers/net/sfc/sfc_rx.c
> > @@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
> > {
> > uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
> >
> > - caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > -
> > return caps & sfc_rx_get_offload_mask(sa);
> > }
> >
> > diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
> > index b8dd905d0bd6..5d38750d6313 100644
> > --- a/drivers/net/thunderx/nicvf_ethdev.h
> > +++ b/drivers/net/thunderx/nicvf_ethdev.h
> > @@ -40,7 +40,6 @@
> > #define NICVF_RX_OFFLOAD_CAPA ( \
> > DEV_RX_OFFLOAD_CHECKSUM | \
> > DEV_RX_OFFLOAD_VLAN_STRIP | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_SCATTER | \
> > DEV_RX_OFFLOAD_RSS_HASH)
> >
> > diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> > index c6cd3803c434..0ce754fb25b0 100644
> > --- a/drivers/net/txgbe/txgbe_rxtx.c
> > +++ b/drivers/net/txgbe/txgbe_rxtx.c
> > @@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
> > DEV_RX_OFFLOAD_UDP_CKSUM |
> > DEV_RX_OFFLOAD_TCP_CKSUM |
> > DEV_RX_OFFLOAD_KEEP_CRC |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME |
> > DEV_RX_OFFLOAD_VLAN_FILTER |
> > DEV_RX_OFFLOAD_RSS_HASH |
> > DEV_RX_OFFLOAD_SCATTER;
> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > index 5d341a3e23bb..a05e73cd8b60 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -2556,7 +2556,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> >
> > host_features = VIRTIO_OPS(hw)->get_features(hw);
> > dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
> > - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
> > dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
> > if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
> > diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > index 2f40ae907dcd..0210f9140b48 100644
> > --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> > @@ -54,7 +54,6 @@
> > DEV_RX_OFFLOAD_UDP_CKSUM | \
> > DEV_RX_OFFLOAD_TCP_CKSUM | \
> > DEV_RX_OFFLOAD_TCP_LRO | \
> > - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> > DEV_RX_OFFLOAD_RSS_HASH)
> >
> > int vmxnet3_segs_dynfield_offset = -1;
> > diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> > index 12062a785dc6..7c0cb093eda3 100644
> > --- a/examples/ip_fragmentation/main.c
> > +++ b/examples/ip_fragmentation/main.c
> > @@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
> > RTE_ETHER_CRC_LEN,
> > .split_hdr_size = 0,
> > .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> > - DEV_RX_OFFLOAD_SCATTER |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME),
> > + DEV_RX_OFFLOAD_SCATTER),
> > },
> > .txmode = {
> > .mq_mode = ETH_MQ_TX_NONE,
> > diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> > index e5c7d46d2caa..af67db49f7fb 100644
> > --- a/examples/ip_reassembly/main.c
> > +++ b/examples/ip_reassembly/main.c
> > @@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
> > .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> > RTE_ETHER_CRC_LEN,
> > .split_hdr_size = 0,
> > - .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> > - DEV_RX_OFFLOAD_JUMBO_FRAME),
> > + .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> > },
> > .rx_adv_conf = {
> > .rss_conf = {
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> > index d032a47d1c3b..4a741bfdde4d 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> > printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> > nb_rx_queue, nb_tx_queue);
> >
> > - if (mtu_size > RTE_ETHER_MTU)
> > - local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > local_port_conf.rxmode.mtu = mtu_size;
> >
> > if (multi_seg_required()) {
> > diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
> > index b3993685ec92..63bbd7e64ceb 100644
> > --- a/examples/ipv4_multicast/main.c
> > +++ b/examples/ipv4_multicast/main.c
> > @@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
> > .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> > RTE_ETHER_CRC_LEN,
> > .split_hdr_size = 0,
> > - .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> > },
> > .txmode = {
> > .mq_mode = ETH_MQ_TX_NONE,
> > diff --git a/examples/kni/main.c b/examples/kni/main.c
> > index c10814c6a94f..0fd945e7e0b2 100644
> > --- a/examples/kni/main.c
> > +++ b/examples/kni/main.c
> > @@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
> > }
> >
> > memcpy(&conf, &port_conf, sizeof(conf));
> > - /* Set new MTU */
> > - if (new_mtu > RTE_ETHER_MTU)
> > - conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> >
> > conf.rxmode.mtu = new_mtu;
> > ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> > diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> > index 7abb612ee6a4..f6dfb156ac56 100644
> > --- a/examples/l3fwd-acl/main.c
> > +++ b/examples/l3fwd-acl/main.c
> > @@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> > dev_info->max_mtu);
> > conf->rxmode.mtu = max_pkt_len - overhead_len;
> >
> > - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> > + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> > conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> >
> > return 0;
> > }
> > diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> > index b431b9ff5f3c..a185a0512826 100644
> > --- a/examples/l3fwd-graph/main.c
> > +++ b/examples/l3fwd-graph/main.c
> > @@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> > dev_info->max_mtu);
> > conf->rxmode.mtu = max_pkt_len - overhead_len;
> >
> > - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> > + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> > conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> >
> > return 0;
> > }
> > diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> > index e58561327c48..12b4dce77ce1 100644
> > --- a/examples/l3fwd-power/main.c
> > +++ b/examples/l3fwd-power/main.c
> > @@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> > dev_info->max_mtu);
> > conf->rxmode.mtu = max_pkt_len - overhead_len;
> >
> > - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> > + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> > conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> >
> > return 0;
> > }
> > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> > index cb9bc7ad6002..22d35749410b 100644
> > --- a/examples/l3fwd/main.c
> > +++ b/examples/l3fwd/main.c
> > @@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> > dev_info->max_mtu);
> > conf->rxmode.mtu = max_pkt_len - overhead_len;
> >
> > - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> > + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> > conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> >
> > return 0;
> > }
> > diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
> > index b6cddc8c7b51..8fc3a7c675a2 100644
> > --- a/examples/performance-thread/l3fwd-thread/main.c
> > +++ b/examples/performance-thread/l3fwd-thread/main.c
> > @@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> > dev_info->max_mtu);
> > conf->rxmode.mtu = max_pkt_len - overhead_len;
> >
> > - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> > + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> > conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> > - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> >
> > return 0;
> > }
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index da381b41c0c5..a9c207124153 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
> > return -1;
> > }
> > mergeable = !!ret;
> > - if (ret) {
> > - vmdq_conf_default.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > + if (ret)
> > vmdq_conf_default.rxmode.mtu = MAX_MTU;
> > - }
> > break;
> >
> > case OPT_STATS_NUM:
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index ce0ed509d28f..c2b624aba1a0 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -118,7 +118,6 @@ static const struct {
> > RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > - RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > @@ -1485,13 +1484,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> > goto rollback;
> > }
> >
> > - if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> > - if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> > - dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> > - /* Use default value */
> > - dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> > - }
> > -
> > dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> >
> > /*
> > @@ -3639,7 +3631,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> > int ret;
> > struct rte_eth_dev_info dev_info;
> > struct rte_eth_dev *dev;
> > - int is_jumbo_frame_capable = 0;
> >
> > RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > dev = &rte_eth_devices[port_id];
> > @@ -3667,27 +3658,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> > frame_size = mtu + overhead_len;
> > if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> > return -EINVAL;
> > -
> > - if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> > - is_jumbo_frame_capable = 1;
> > }
> >
> > - if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> > - return -EINVAL;
> > -
> > ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> > - if (ret == 0) {
> > + if (ret == 0)
> > dev->data->mtu = mtu;
> >
> > - /* switch to jumbo mode if needed */
> > - if (mtu > RTE_ETHER_MTU)
> > - dev->data->dev_conf.rxmode.offloads |=
> > - DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - else
> > - dev->data->dev_conf.rxmode.offloads &=
> > - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> > - }
> > -
> > return eth_err(port_id, ret);
> > }
> >
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index 9fba2bd73c84..4d0f956a4b28 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1389,7 +1389,6 @@ struct rte_eth_conf {
> > #define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
> > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > -#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > /**
> > * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > --
> > 2.31.1
> >
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length
2021-10-01 15:07 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
@ 2021-10-05 16:46 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 16:46 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Andrew Rybchenko, Thomas Monjalon, Jerin Jacob, Xiaoyun Li,
Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh, dev
On 10/1/2021 4:07 PM, Stephen Hemminger wrote:
> On Fri, 1 Oct 2021 15:36:18 +0100
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, and this overhead may be different from
>> device to device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>
> One other issue which DPDK inherits from Linux and BSD is that
> MTU (Maximum Transmission Unit) is overloaded to mean MRU (Maximum Receive Unit).
>
> On Linux, network devices are allowed to receive packets of any size they
> want. MTU is used as a hint about "you need to accept at least MTU size
> packets on receive". So MRU >= MTU.
>
> In practice, and documentation, MRU and MTU are used synonymously.
>
Yes, MTU is used to refer to both MTU & MRU; the same config value (MTU) is used
to configure both.
I don't know if there is a need to configure them separately; if there is, we
can address it in another patch.
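As a minimal sketch of the resulting application-side usage after this change
(the port id, queue counts and the 9000-byte MTU below are illustrative values
only, not taken from the patch), both paths end up in the same
'(struct rte_eth_dev)->data->mtu':

#include <rte_ethdev.h>

static int
configure_port_mtu(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};
	int ret;

	/* Path 1: request the MTU at configure time via rxmode.mtu. */
	conf.rxmode.mtu = 9000;
	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	/* Path 2: change it later; the result lands in the same variable. */
	return rte_eth_dev_set_mtu(port_id, 9000);
}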
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag
2021-10-04 7:55 ` Somnath Kotur
@ 2021-10-05 16:48 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 16:48 UTC (permalink / raw)
To: Somnath Kotur; +Cc: dev
On 10/4/2021 8:55 AM, Somnath Kotur wrote:
> On Mon, Oct 4, 2021 at 10:42 AM Somnath Kotur
> <somnath.kotur@broadcom.com> wrote:
>> On Fri, Oct 1, 2021 at 8:07 PM Ferruh Yigit<ferruh.yigit@intel.com> wrote:
>>> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>>>
>>> Instead of drivers announce this capability, application can deduct the
>>> capability by checking reported 'dev_info.max_mtu' or
>>> 'dev_info.max_rx_pktlen'.
>>>
>>> And instead of application explicitly set this flag to enable jumbo
>> application setting this flag explicitly sounds better?
ack
>>> frames, this can be deducted by driver by comparing requested 'mtu' to
>> typo, think you meant 'deduced' ?:)
yep.
Thanks Somnath, I am sending a new version with above fixes.
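A rough sketch of the 'dev_info' based check mentioned above (the helper name
and the wanted MTU are hypothetical; only rte_ethdev.h is assumed):

#include <rte_ethdev.h>

/* Return 1 if 'wanted_mtu' fits the port's reported MTU range, 0 if it
 * does not, or a negative value on error.
 */
static int
port_mtu_supported(uint16_t port_id, uint16_t wanted_mtu)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Jumbo capability is implied by the reported MTU range. */
	return wanted_mtu >= dev_info.min_mtu &&
	       wanted_mtu <= dev_info.max_mtu;
}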
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (5 preceding siblings ...)
2021-10-01 15:07 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (7 more replies)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
` (2 subsequent siblings)
9 siblings, 8 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: Ferruh Yigit, dev
There is confusion about setting the max Rx packet length; this patch aims
to clarify it.
'rte_eth_dev_configure()' API accepts max Rx packet size via
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored into '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but they work in a disconnected way; they
store the set values in different variables, which makes it hard to figure
out which one to use, and having two different methods for a related
functionality is confusing for the users.
Other issues causing confusion are:
* maximum transmission unit (MTU) is the payload of the Ethernet frame,
while 'max_rx_pkt_len' is the size of the Ethernet frame. The difference
is the Ethernet frame overhead, and this overhead may differ from
device to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs already
discard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.
As a solution, both APIs get the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is updated to 'mtu', and it is always valid independent
of jumbo frames.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the
result should be stored to '(struct rte_eth_dev)->data->mtu'. After that
point both application and PMD use the MTU from this variable.
When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.
Additional clarification is done on scattered Rx configuration, in
relation to MTU and Rx buffer size.
MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs
use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx, as sketched below. If scattered Rx is not supported
by the device, an MTU bigger than the Rx buffer size should fail.
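A rough sketch of that per-PMD check (the function name, the overhead macro
and the error handling are illustrative and not taken from any specific
driver; the driver-side ethdev_driver.h header is assumed):

#include <errno.h>
#include <rte_ether.h>
#include <ethdev_driver.h>

#define EXAMPLE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)

static int
example_check_rx_buf(struct rte_eth_dev *dev, uint16_t rx_buf_size)
{
	uint32_t frame_size = dev->data->mtu + EXAMPLE_ETH_OVERHEAD;

	if (frame_size <= rx_buf_size)
		return 0; /* whole frame fits in one Rx buffer */

	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) == 0)
		return -EINVAL; /* MTU too big and device cannot chain mbufs */

	dev->data->scattered_rx = 1; /* enable scattered Rx for large frames */
	return 0;
}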
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Min Hu (Connor) <humin29@huawei.com>
v2:
* Converted to explicit checks for zero/non-zero
* fixed hns3 checks
* fixed some sample app rxmode.mtu value
* fixed some sample app max-pkt-len argument and updated doc for it
v3:
* rebased
v4:
* fix typos in commit logs
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 49 +++----
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 4 +-
app/test-pmd/testpmd.c | 103 ++++++++------
app/test-pmd/testpmd.h | 2 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 --
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 7 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 7 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 31 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_common.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 9 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 12 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 12 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 92 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
121 files changed, 801 insertions(+), 1071 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a9efd027c376..a677451073ae 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
fprintf(stderr, "Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- fprintf(stderr,
- "max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- fprintf(stderr,
- "rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
-
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ fprintf(stderr,
+ "max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- fprintf(stderr, "Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr,
+ "rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96ee..db3eeffa0093 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321f..27eb4bc667df 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -870,7 +870,9 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ rx_mode.mtu = (uint32_t) n -
+ (RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN);
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ecd..8c23cfe7c3da 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -446,13 +446,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1481,11 +1475,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1496,7 +1503,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
fprintf(stderr,
"Updating jumbo frame offload failed for port %u\n",
@@ -1528,14 +1535,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -3451,44 +3464,45 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * If 'max_rx_pktlen' is zero, it is set to the current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not the MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
+
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3509,19 +3523,18 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = eth_dev_set_mtu_mp(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- fprintf(stderr,
- "Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
+ fprintf(stderr,
+ "Failed to set MTU to %u for port %u\n",
+ new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3e..17562215c733 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..3e9254fe896d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 7355ec305916..9dad612058c6 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index df23a5704dca..831bc564883a 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -545,7 +545,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c976..483cb7da576f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..1f5619ed53fc 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -606,9 +606,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b4f..1063a1fe4bea 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,31 +81,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 812aaa87b05b..6c4c04e935e4 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -162,12 +162,7 @@ Forwarding application is shown below:
:end-before: >8 End of initializing a given port.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
- :language: c
- :start-after: Ethernet ports configured with default settings using struct. 8<
- :end-before: >8 End of configuration of Ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 2d5cd5f1c0ba..56af5cd5b383 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 03e9a85aa68c..0a3e0d44ecea 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 0495314c87d5..8817eaadbfc3 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 9b09838f6448..7d1bf6eaae8c 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index f7bcd7ed2a1d..6d0de6440105 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -106,12 +106,7 @@ Forwarding application is shown below:
:end-before: >8 End of main functional part of port initialization.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. literalinclude:: ../../../examples/skeleton/basicfwd.c
- :language: c
- :start-after: Configuration of ethernet ports. 8<
- :end-before: >8 End of configuration of ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..0feacc822433 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not send
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..76aeec077f2b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85fa..8c6f20b75aed 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b34d..412acff42f65 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8629193d5049..8d0677cd89d9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..8cf61f12a8d6 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249df1..adbdb87baab9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..758a14e0ad2d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e36d..4c114bf90fc7 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..e9a30d393bd7 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..3a9d5031b262 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR,
"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..cdb9783b5372 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..6a81ceb62ba7 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly support MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
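
When scatter is off, enic achieves the "drop oversized packets" behaviour by
posting Rx buffers no larger than the maximum frame derived from the MTU, as
the comments above describe. A sketch of the shape of that clamping (not the
driver's code, names illustrative):

  static uint16_t pick_rq_buf_len(uint32_t max_rx_pktlen,
                                  uint16_t mbuf_data_room,
                                  int scatter_enabled)
  {
          uint16_t rq_buf_len = mbuf_data_room;

          /* Without scatter, cap the posted buffer at the max frame so
           * anything larger is truncated by HW and dropped in the Rx path. */
          if (!scatter_enabled && max_rx_pktlen < rq_buf_len)
                  rq_buf_len = (uint16_t)max_rx_pktlen;
          return rq_buf_len;
  }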
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..5e4b361ca6c0 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d450..2d8271cb6095 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
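
Several drivers in this patch (enetc, hinic, i40e, iavf, ice, ...) now derive
the jumbo frame offload flag directly from the MTU instead of from a cached
frame length. The pattern, as a standalone sketch with illustrative names:

  #include <stdint.h>

  #define ETHER_MTU 1500u   /* RTE_ETHER_MTU */

  static uint64_t update_jumbo_flag(uint64_t offloads, uint16_t mtu,
                                    uint64_t jumbo_flag)
  {
          if (mtu > ETHER_MTU)
                  offloads |= jumbo_flag;
          else
                  offloads &= ~jumbo_flag;
          return offloads;
  }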
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..4ead227f9122 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..0b5db486f8d6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e3957f..a260212f73f1 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
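
With 'max_rx_pkt_len' gone, hns3 (and most of the drivers touched below) decide
on scattered Rx from the MTU plus the per-driver L2 overhead. A minimal sketch
of that check, with hypothetical names:

  static int need_scattered_rx(uint16_t mtu, uint32_t l2_overhead,
                               uint16_t rx_buf_len, int scatter_requested)
  {
          /* Scatter is needed when asked for explicitly, or when a full
           * frame no longer fits into a single Rx buffer. */
          return scatter_requested ||
                 ((uint32_t)mtu + l2_overhead > rx_buf_len);
  }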
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd97d93dd746..ab571a921f9e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11775,14 +11775,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d5847ac6b546..1d27cf2b0a01 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
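
The Intel drivers (i40e above, iavf and ice below) size 'max_pkt_len' as the
smaller of what the chained Rx buffers can hold and what the MTU allows. A
sketch of that calculation, using illustrative parameter names:

  #include <stdint.h>

  static uint32_t calc_max_pkt_len(uint16_t rx_buf_len, uint16_t chain_len,
                                   uint16_t mtu, uint32_t l2_overhead)
  {
          uint32_t by_bufs = (uint32_t)rx_buf_len * chain_len;
          uint32_t by_mtu  = (uint32_t)mtu + l2_overhead;

          /* Never more than the chained buffers hold, and never more than
           * the configured MTU allows. */
          return by_bufs < by_mtu ? by_bufs : by_mtu;
  }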
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e152..0eabce275d92 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
max_pkt_len = RTE_MIN((uint32_t)
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4e4cdbcd7d71..c3c7ad88f250 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9ab7704ff003..8ee1335ac6cf 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e6930..f9ef6ce57277 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..b26723064b07 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering VLAN so tag needs to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
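
Note that the relocated IGC_ETH_OVERHEAD above now budgets for two VLAN tags
(QinQ), so for the default MTU the resulting frame size works out as, roughly:

  /* Assuming the usual sizes: 14-byte header, 4-byte CRC, 4-byte VLAN tag. */
  frame_size = 1500 + 14 + 4 + 2 * 4;   /* = 1526 bytes for MTU 1500 */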
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..28d3076439c3 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..97447a10e46a 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..3f5fc66abf71 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..3634c0c8c5f0 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47cd..31e67d86e77b 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..9bcbc445f2d0 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755de..03991711fd6e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..976916f870a5 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..4a5cfd22aa71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
};
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..6f4f351222d3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
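
For mlx5 the relevant maximum depends on whether LRO is enabled on the queue:
aggregated LRO packets are sized by 'max_lro_pkt_size', everything else by the
MTU-derived frame length. A sketch of that selection (names illustrative,
overhead assumed to be plain Ethernet header plus CRC):

  static uint32_t pick_max_rx_pktlen(int lro_on, uint32_t max_lro_pkt_size,
                                     uint16_t mtu)
  {
          /* LRO queues are sized for the aggregated packet, others for a
           * single frame: MTU + Ethernet header (14) + CRC (4). */
          return lro_on ? max_lro_pkt_size : (uint32_t)mtu + 14u + 4u;
  }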
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..5ce71661c84e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
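
mvneta and mvpp2 go the other way in their Rx queue setup: when the mbuf data
room cannot hold a full frame, they shrink the stored MTU rather than a cached
frame length. A sketch of that adjustment, with hypothetical names:

  static uint16_t clamp_mtu_to_mbuf(uint16_t mtu, uint32_t buf_size,
                                    uint32_t headroom, uint32_t pkt_offset)
  {
          uint32_t room = buf_size - headroom - pkt_offset; /* usable bytes */
          uint32_t max_pktlen = (uint32_t)mtu + 14u;        /* + Eth header */

          if (room < max_pktlen)
                  mtu = (uint16_t)(room - 14u); /* fit one frame per mbuf */
          return mtu;
  }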
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..a2031a7a82cc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..787e8d890215 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d2b..cf7804157198 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a4304e0eff44..4b971fd1fe3c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba59..c2263787b4ec 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..1f55c90b419d 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..22f74735db08 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81a3..c8ae95a61306 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..269de9f848dd 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a88770..43dc0ed39b75 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24abe7..5d341a3e23bb 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
hw->max_rx_pkt_len = frame_size;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
return 0;
}
@@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- else
- hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
+ hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM))
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index adbd40808396..68e3c13730ad 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f06..25ca459be57b 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index d0f40a1fb4bc..8c4a8feec0c2 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 5ed0dc73ec60..e26be8edf28f 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ab8c6d6a0dad..476b147bdfcc 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 65c1d85cf2fb..8a43f6ac0f92 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,14 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-/* Ethernet ports configured with default settings using struct. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of Ethernet ports. */
-
/* Creation of flow classifier object. 8< */
struct flow_classifier {
struct rte_flow_classifier *cls;
@@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be561..fdc66368dce9 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..12062a785dc6 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -918,9 +919,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -963,8 +964,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..9ba02e687adb 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..e5c7d46d2caa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* mbufs stored int the gragment table. 8< */
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1054,9 +1056,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb8228b..d032a47d1c3b 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..b3993685ec92 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -715,9 +716,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..c10814c6a94f 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 9b3e324efb23..d9cf00c9dfc7 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf76d..f9438176cbb1 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 19f32809aa9d..9040be5ed9b6 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..7abb612ee6a4 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2080,6 +2081,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..b431b9ff5f3c 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -722,6 +701,43 @@ graph_main_loop(void *conf)
}
/* >8 End of main processing loop. */
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -807,6 +823,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..e58561327c48 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
/* Power library initialized in the main routine. 8< */
int
main(int argc, char **argv)
@@ -2622,6 +2634,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..cb9bc7ad6002 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..b6cddc8c7b51 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..4f20dfc4be06 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..3b6c6c297f43 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..c32d2e12e633 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index ab6fa7d56c5d..6845c396b8d9 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae9bbee8d820..fd7207aee758 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,14 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-/* Configuration of ethernet ports. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of ethernet ports. */
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e36a..da381b41c0c5 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -44,6 +44,7 @@
#define BURST_RX_RETRIES 4 /* Number of retries on RX. */
#define JUMBO_FRAME_MAX_SIZE 0x2600
+#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
/* State of virtio device. */
#define DEVICE_MAC_LEARNING 0
@@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu = MAX_MTU;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e59fb7d3478b..e19d79a40802 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca924221..4d0584af52e3 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
+ uint32_t max_rx_pktlen;
uint16_t overhead_len;
int diag;
int ret;
@@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674cc..9fba2bd73c84 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
* [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-08 8:39 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (6 subsequent siblings)
7 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: Ferruh Yigit, dev
Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload for it.
When the jumbo frame offload is not enabled by the application but an MTU
bigger than RTE_ETHER_MTU is requested, there are two options: either fail
or enable the jumbo frame offload implicitly.
Enabling the jumbo frame offload implicitly is what many drivers already
do, since setting a big MTU value already implies it, and this increases
usability.
This patch moves that logic from the drivers to the library, both to
reduce duplicated code in the drivers and to make the behaviour more
visible.
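To illustrate the effect from the application side, here is a minimal
sketch (not part of the patch; the helper name, port id and MTU value are
illustrative only): the application no longer toggles
DEV_RX_OFFLOAD_JUMBO_FRAME itself, it simply requests the MTU and the
library sets or clears the flag once the driver callback succeeds.

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Hypothetical helper, only to illustrate the new behaviour. */
static int
request_jumbo_mtu(uint16_t port_id, uint16_t mtu)
{
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        /* The device must be able to carry the requested MTU at all. */
        if (mtu > dev_info.max_mtu)
                return -EINVAL;

        /*
         * On success the library stores the MTU in dev->data->mtu and,
         * when mtu > RTE_ETHER_MTU, sets DEV_RX_OFFLOAD_JUMBO_FRAME in
         * dev_conf.rxmode.offloads on behalf of the application.
         */
        return rte_eth_dev_set_mtu(port_id, mtu);
}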
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_common.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
27 files changed, 29 insertions(+), 166 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8c6f20b75aed..07ee19938930 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index adbdb87baab9..57b09f16ba44 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 758a14e0ad2d..df44bb204f65 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 4c114bf90fc7..a061d0529dd1 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 2d8271cb6095..4b30dfa222a8 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 4ead227f9122..e1d465de8234 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 0b5db486f8d6..3438b3650de6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ab571a921f9e..9283adb19304 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11775,11 +11775,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 0eabce275d92..844d26d87ba6 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8ee1335ac6cf..3038a9714517 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 31e67d86e77b..574a7bffc9cb 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 976916f870a5..3a516c52d199 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a2031a7a82cc..850ec7655f82 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index cf7804157198..293306c7be2a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4b971fd1fe3c..6886a4e5efb4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 1f55c90b419d..2ee80e2dc41f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index c8ae95a61306..b501fee5332c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 269de9f848dd..35b98097c3a4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d0584af52e3..1740bab98a83 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (5 subsequent siblings)
7 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj, Jiawen Wu,
Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev
Move the requested MTU value check to the API to prevent duplicated
code in the drivers.
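The check that now lives in the library can be sketched roughly as below
(assumed names and structure, based on the ethdev hunk at the end of this
patch rather than copied from it): the Ethernet overhead is derived from
the reported max_rx_pktlen/max_mtu pair and the resulting frame size is
validated against the device limit.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Illustrative sketch of the generic MTU range check. */
static int
mtu_range_check(const struct rte_eth_dev_info *dev_info, uint16_t mtu)
{
        uint16_t overhead_len;
        uint32_t frame_size;

        /* Derive the real Ethernet overhead from the reported limits. */
        if (dev_info->max_mtu != UINT16_MAX &&
            dev_info->max_rx_pktlen > dev_info->max_mtu)
                overhead_len = dev_info->max_rx_pktlen - dev_info->max_mtu;
        else
                overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        frame_size = (uint32_t)mtu + overhead_len;
        if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info->max_rx_pktlen)
                return -EINVAL;

        return 0;
}

With this check in the library, the per-driver mtu_set callbacks no longer
need to repeat the same bounds check, as the driver hunks below show.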
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_common.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
22 files changed, 25 insertions(+), 155 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2960834b4539..c36cd7b1d2f0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 07ee19938930..dc33b961320a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3025,7 +3025,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 0c9cc2f5bb3f..70b879fed100 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 57b09f16ba44..3172e3b2de87 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index df44bb204f65..c28f03641bbc 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a061d0529dd1..3164fde5b939 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4363,9 +4363,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4374,15 +4372,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index fbcbbb6c0533..a7372c1787c7 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4b30dfa222a8..79987bec273c 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1530,17 +1530,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9283adb19304..2824592aa62e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11757,25 +11757,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 844d26d87ba6..2d43c666fdbb 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3038a9714517..703178c6d40c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcbc26b8186e..e279ae1fff1d 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index e8a33f04bd69..377b96c0236a 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3a516c52d199..9d1d811a2e37 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 850ec7655f82..b1ce35b334da 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index fb65be2c2dc3..b2355fa695bc 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 293306c7be2a..206da6f7cfda 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (dev->configured && otx2_ethdev_is_ptp_en(dev))
frame_size += NIX_TIMESYNC_RX_OFFSET;
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6886a4e5efb4..84e23ff03418 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b501fee5332c..44c6b1c72354 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 35b98097c3a4..c6fcb1871981 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3463,18 +3463,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1740bab98a83..ce0ed509d28f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3652,6 +3652,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3659,6 +3662,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-08 8:38 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks Ferruh Yigit
` (4 subsequent siblings)
7 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable
jumbo frames, the driver can deduce it by comparing the requested 'mtu'
to 'RTE_ETHER_MTU'.
This additional configuration is removed for simplification.
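As an application-side illustration of the deduction described above (a
hedged sketch; the helper name is hypothetical and not part of the patch):

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Deduce jumbo frame support from the reported limits instead of the
 * removed DEV_RX_OFFLOAD_JUMBO_FRAME capability bit.
 */
static bool
port_supports_jumbo(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return false;

        /* Any MTU above the standard Ethernet MTU implies jumbo frames. */
        return dev_info.max_mtu > RTE_ETHER_MTU;
}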
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 25 +---------
app/test-pmd/testpmd.c | 48 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 5 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 1 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_common.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 5 +-
lib/ethdev/rte_ethdev.c | 26 +---------
lib/ethdev/rte_ethdev.h | 1 -
75 files changed, 47 insertions(+), 259 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a677451073ae..117945c2c61e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1923,7 +1923,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index db3eeffa0093..e890fadc716c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- fprintf(stderr,
- "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8c23cfe7c3da..d2a2a9ac6cda 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1503,12 +1503,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- fprintf(stderr,
- "Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3463,24 +3457,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3489,40 +3477,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- fprintf(stderr,
- "Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 17562215c733..eed9d031fd9a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..8f10c6c78a1f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 483cb7da576f..9580445828bf 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index c36cd7b1d2f0..0bc9e5eeeb10 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5121d05da65f..6743cf92b0e6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -595,7 +595,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index dc33b961320a..e9d04f354a39 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 412acff42f65..2f3a1759419f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 10e05e6b5edd..fa8c48f1eeb0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -75,9 +75,8 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 70b879fed100..1374f32b6826 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if ((&rxq->fl) != NULL)
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3172e3b2de87..defc072072af 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c28f03641bbc..dc25eefb33b0 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..1ae78fe71f02 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..e061f80a906a 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e9a30d393bd7..dda4d2101adb 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3a9d5031b262..6d1026d31951 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1918,7 +1918,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index a7372c1787c7..6457677d300a 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d031..c5777772a09e 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..47c5efe9ea77 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 5e4b361ca6c0..093021246286 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 79987bec273c..4005414aeb71 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index e1d465de8234..dbd4c54b18c6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2691,7 +2691,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 3438b3650de6..eee65ac77399 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2824592aa62e..6a64221778fa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3760,7 +3760,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1d27cf2b0a01..69c282baa723 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2911,7 +2911,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 2d43c666fdbb..2c4103ac7ef9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c3c7ad88f250..16f642566e91 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -72,7 +72,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -683,7 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f9137..d28fedc96e1a 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 703178c6d40c..17d30b735693 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f9ef6ce57277..cc7908d32584 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
@@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 28d3076439c3..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 574a7bffc9cb..3205c37c3b82 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6234,7 +6234,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6256,14 +6255,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 03991711fd6e..c223ef37c79f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3033,7 +3033,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5095,7 +5094,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 6f4f351222d3..0cc3bccc0825 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 5ce71661c84e..ef987b7de1b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index b1ce35b334da..a0bb5b9640c2 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..47ee126ed7fd 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -148,7 +148,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 84e23ff03418..06c3ccf20716 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..62b215f62cd6 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5d341a3e23bb..a05e73cd8b60 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2556,7 +2556,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 2f40ae907dcd..0210f9140b48 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 12062a785dc6..7c0cb093eda3 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index e5c7d46d2caa..af67db49f7fb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index d032a47d1c3b..4a741bfdde4d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index b3993685ec92..63bbd7e64ceb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c10814c6a94f..0fd945e7e0b2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 7abb612ee6a4..f6dfb156ac56 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index b431b9ff5f3c..a185a0512826 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index e58561327c48..12b4dce77ce1 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index cb9bc7ad6002..22d35749410b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index b6cddc8c7b51..8fc3a7c675a2 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index da381b41c0c5..a9c207124153 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
return -1;
}
mergeable = !!ret;
- if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (ret)
vmdq_conf_default.rxmode.mtu = MAX_MTU;
- }
break;
case OPT_STATS_NUM:
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ce0ed509d28f..c2b624aba1a0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1485,13 +1484,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3639,7 +3631,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3667,27 +3658,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9fba2bd73c84..4d0f956a4b28 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1389,7 +1389,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
* [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (2 preceding siblings ...)
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
` (3 subsequent siblings)
7 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko; +Cc: Ferruh Yigit, dev, Huisong Li
Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' set the MTU but
perform slightly different checks: for example, one checks the minimum MTU
against RTE_ETHER_MIN_MTU while the other checks it against
RTE_ETHER_MIN_LEN.
Move the checks into a common function to unify them. This also has the
benefit of producing common error logs.
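As a quick reviewer-side illustration (this sketch is not part of the patch;
the function name and the 20000 value are made up, only the public ethdev
calls are real), both entry points now reject an out-of-range MTU through the
same validation:

#include <stdio.h>
#include <rte_ethdev.h>

static void
mtu_check_demo(uint16_t port_id)
{
	/* 20000 is assumed to exceed the device max MTU for this port. */
	struct rte_eth_conf conf = { .rxmode = { .mtu = 20000 } };

	/* Rejected inside rte_eth_dev_configure() by the common helper. */
	int ret1 = rte_eth_dev_configure(port_id, 1, 1, &conf);

	/* Rejected by the same helper inside rte_eth_dev_set_mtu(). */
	int ret2 = rte_eth_dev_set_mtu(port_id, 20000);

	/* Both return -EINVAL and log the same style of error message. */
	printf("configure: %d, set_mtu: %d\n", ret1, ret2);
}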
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 54 insertions(+), 30 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c2b624aba1a0..0a6e952722ae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4d0f956a4b28..50e124ff631f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
* [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (3 preceding siblings ...)
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-05 17:16 ` Ferruh Yigit
2021-10-05 22:07 ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
` (2 subsequent siblings)
7 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-05 17:16 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index af67db49f7fb..2ff5ea3e7bc5 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -516,7 +516,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (4 preceding siblings ...)
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-05 22:07 ` Ajit Khaparde
2021-10-06 6:08 ` Somnath Kotur
2021-10-08 8:36 ` Xu, Rosen
2021-10-10 6:30 ` Matan Azrad
7 siblings, 1 reply; 112+ messages in thread
From: Ajit Khaparde @ 2021-10-05 22:07 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Somnath Kotur,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Keith Wiles, Jiawen Wu, Jian Wang,
Maxime Coquelin, Chenbo Xia, Nicolas Chautru, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon, dpdk-dev
On Tue, Oct 5, 2021 at 10:31 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
That's a lot of detail in the log. Thanks
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-05 22:07 ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
@ 2021-10-06 6:08 ` Somnath Kotur
0 siblings, 0 replies; 112+ messages in thread
From: Somnath Kotur @ 2021-10-06 6:08 UTC (permalink / raw)
To: Ajit Khaparde
Cc: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Keith Wiles, Jiawen Wu, Jian Wang,
Maxime Coquelin, Chenbo Xia, Nicolas Chautru, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon, dpdk-dev
On Wed, Oct 6, 2021 at 3:38 AM Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
>
> On Tue, Oct 5, 2021 at 10:31 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > There is a confusion on setting max Rx packet length, this patch aims to
> > clarify it.
> >
> > 'rte_eth_dev_configure()' API accepts max Rx packet size via
> > 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> > rte_eth_conf'.
> >
> > Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> > stored into '(struct rte_eth_dev)->data->mtu'.
> >
> > These two APIs are related but they work in a disconnected way, they
> > store the set values in different variables which makes hard to figure
> > out which one to use, also having two different method for a related
> > functionality is confusing for the users.
> >
> > Other issues causing confusion is:
> > * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> > 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> > Ethernet frame overhead, and this overhead may be different from
> > device to device based on what device supports, like VLAN and QinQ.
> > * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> > which adds additional confusion and some APIs and PMDs already
> > discards this documented behavior.
> > * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> > field, this adds configuration complexity for application.
> >
> > As solution, both APIs gets MTU as parameter, and both saves the result
> > in same variable '(struct rte_eth_dev)->data->mtu'. For this
> > 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> > from jumbo frame.
> >
> > For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> > request and it should be used only within configure function and result
> > should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> > both application and PMD uses MTU from this variable.
> >
> > When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> > default 'RTE_ETHER_MTU' value is used.
> >
> > Additional clarification done on scattered Rx configuration, in
> > relation to MTU and Rx buffer size.
> > MTU is used to configure the device for physical Rx/Tx size limitation,
> > Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> > size as Rx buffer size.
> > PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> > or not. If scattered Rx is not supported by device, MTU bigger than Rx
> > buffer size should fail.
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> > Cc: Min Hu (Connor) <humin29@huawei.com>
> >
> > v2:
> > * Converted to explicit checks for zero/non-zero
> > * fixed hns3 checks
> > * fixed some sample app rxmode.mtu value
> > * fixed some sample app max-pkt-len argument and updated doc for it
> >
> > v3:
> > * rebased
> >
> > v4:
> > * fix typos in commit logs
>
> That's a lot of detail in the log. Thanks
>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
* [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (6 preceding siblings ...)
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (6 more replies)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
9 siblings, 7 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: Ferruh Yigit, dev
There is confusion about setting the max Rx packet length; this patch
aims to clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but work in a disconnected way; they store
the set values in different variables, which makes it hard to figure out
which one to use, and having two different methods for related
functionality is confusing for users.
Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
  frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame.
  The difference is the Ethernet frame overhead, which may differ from
  device to device based on what the device supports, like VLAN and QinQ
  (see the small example after this list).
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds additional confusion, and some APIs and PMDs
  already disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.
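As an illustrative sketch only, using the generic overhead constants (a
real device may add VLAN/QinQ tag sizes on top of these):

    /* frame size on the wire vs. MTU (L2 payload) */
    uint32_t overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; /* 14 + 4 */
    uint32_t frame_size = RTE_ETHER_MTU + overhead;            /* 1518 */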
As a solution, both APIs get the MTU as parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this
'max_rx_pkt_len' is replaced by 'mtu', which is always valid,
independent of jumbo frame.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function, and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.
When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
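A minimal application-side sketch of the new usage; 'port_id' and the
queue counts are placeholders, and, per the note above, leaving
'rxmode.mtu' at zero falls back to 'RTE_ETHER_MTU':

    struct rte_eth_conf conf = { 0 };
    int ret;

    conf.rxmode.mtu = 9000; /* request a jumbo MTU at configure time */
    ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret == 0) /* the MTU can still be changed later at runtime */
        ret = rte_eth_dev_set_mtu(port_id, 1500);

Both paths end up storing the value in '(struct rte_eth_dev)->data->mtu'.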
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation, while the Rx buffer is where received packets are stored;
many PMDs use the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to
enable scattered Rx. If scattered Rx is not supported by the device, an
MTU bigger than the Rx buffer size should fail.
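A rough sketch of the per-queue check many PMDs end up with after this
change; 'mp' and 'offloads' are placeholders, and the overhead
derivation assumes the device reports both 'max_rx_pktlen' and
'max_mtu':

    uint32_t overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
    uint32_t frame_size = dev->data->mtu + overhead;
    uint32_t buf_size = rte_pktmbuf_data_room_size(mp) -
                        RTE_PKTMBUF_HEADROOM;

    if (frame_size > buf_size) {
        if ((offloads & DEV_RX_OFFLOAD_SCATTER) == 0)
            return -EINVAL; /* frame cannot fit in a single Rx buffer */
        dev->data->scattered_rx = 1;
    }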
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
Cc: Min Hu (Connor) <humin29@huawei.com>
v2:
* Converted to explicit checks for zero/non-zero
* fixed hns3 checks
* fixed some sample app rxmode.mtu value
* fixed some sample app max-pkt-len argument and updated doc for it
v3:
* rebased
v4:
* fix typos in commit logs
v5:
* fix testpmd '--max-pkt-len=###' parameter for DTS jumbo frame test
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 49 +++----
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 2 +-
app/test-pmd/testpmd.c | 115 ++++++++++------
app/test-pmd/testpmd.h | 4 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 7 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 7 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 35 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_common.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 9 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 12 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 12 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 92 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
121 files changed, 815 insertions(+), 1073 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a9efd027c376..a677451073ae 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
fprintf(stderr, "Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- fprintf(stderr,
- "max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- fprintf(stderr,
- "rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
-
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ fprintf(stderr,
+ "max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- fprintf(stderr, "Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr,
+ "rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96ee..db3eeffa0093 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321f..dec5373b346d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -870,7 +870,7 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ max_rx_pkt_len = n;
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ecd..8c11ab23dd14 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -214,6 +214,11 @@ uint16_t stats_period; /**< Period to show statistics (disabled by default) */
*/
uint8_t f_quit;
+/*
+ * Max Rx frame size, set by '--max-pkt-len' parameter.
+ */
+uint16_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -446,13 +451,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1481,11 +1480,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1496,7 +1508,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
fprintf(stderr,
"Updating jumbo frame offload failed for port %u\n",
@@ -1516,6 +1528,12 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (eth_link_speed)
port->dev_conf.link_speeds = eth_link_speed;
+ if (max_rx_pkt_len) {
+ port->dev_conf.rxmode.mtu = max_rx_pkt_len -
+ get_eth_overhead(&port->dev_info);
+ max_rx_pkt_len = 0;
+ }
+
/* set flag to initialize port/queue */
port->need_reconfig = 1;
port->need_reconfig_queues = 1;
@@ -1528,14 +1546,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -2552,6 +2576,7 @@ start_port(portid_t pid)
pi);
return -1;
}
+
/* configure port */
diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
nb_txq + nb_hairpinq,
@@ -3451,44 +3476,45 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
+
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3509,19 +3535,18 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = eth_dev_set_mtu_mp(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- fprintf(stderr,
- "Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
+ fprintf(stderr,
+ "Failed to set MTU to %u for port %u\n",
+ new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3e..076c154b2b3a 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -448,6 +448,8 @@ extern uint8_t bitrate_enabled;
extern struct rte_fdir_conf fdir_conf;
+extern uint16_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -1022,7 +1024,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..3e9254fe896d 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 7355ec305916..9dad612058c6 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index df23a5704dca..831bc564883a 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -545,7 +545,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 4fce8cd1c976..483cb7da576f 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..1f5619ed53fc 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -606,9 +606,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a2fe766d4b4f..1063a1fe4bea 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,31 +81,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 812aaa87b05b..6c4c04e935e4 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -162,12 +162,7 @@ Forwarding application is shown below:
:end-before: >8 End of initializing a given port.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
- :language: c
- :start-after: Ethernet ports configured with default settings using struct. 8<
- :end-before: >8 End of configuration of Ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 2d5cd5f1c0ba..56af5cd5b383 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 03e9a85aa68c..0a3e0d44ecea 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 0495314c87d5..8817eaadbfc3 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 9b09838f6448..7d1bf6eaae8c 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index f7bcd7ed2a1d..6d0de6440105 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -106,12 +106,7 @@ Forwarding application is shown below:
:end-before: >8 End of main functional part of port initialization.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. literalinclude:: ../../../examples/skeleton/basicfwd.c
- :language: c
- :start-after: Configuration of ethernet ports. 8<
- :end-before: >8 End of configuration of ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..0feacc822433 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not have sent
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..76aeec077f2b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85fa..8c6f20b75aed 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b34d..412acff42f65 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8629193d5049..8d0677cd89d9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..8cf61f12a8d6 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249df1..adbdb87baab9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 275656fbe47c..97dd8e079a73 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,25 +560,19 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- DPAA2_PMD_INFO("MTU configured for the device: %d",
- dev->data->mtu);
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
 }
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
+ } else {
+ return -1;
 }
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1477,15 +1472,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e36d..4c114bf90fc7 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..e9a30d393bd7 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..3a9d5031b262 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR,
"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..cdb9783b5372 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..6a81ceb62ba7 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly enforce the configured MTU on Rx. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..5e4b361ca6c0 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d450..2d8271cb6095 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..4ead227f9122 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..0b5db486f8d6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e3957f..a260212f73f1 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd97d93dd746..ab571a921f9e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11775,14 +11775,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index d5847ac6b546..1d27cf2b0a01 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e152..0eabce275d92 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
max_pkt_len = RTE_MIN((uint32_t)
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4e4cdbcd7d71..c3c7ad88f250 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9ab7704ff003..8ee1335ac6cf 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 83fb788e6930..f9ef6ce57277 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..b26723064b07 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering dual VLAN, both tags need to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..28d3076439c3 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..97447a10e46a 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..3f5fc66abf71 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..3634c0c8c5f0 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 8b33897ca167..e5ddae219182 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..9bcbc445f2d0 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index bfdfd5e755de..03991711fd6e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..976916f870a5 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..4a5cfd22aa71 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
};
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..6f4f351222d3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..5ce71661c84e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..a2031a7a82cc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..787e8d890215 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d2b..cf7804157198 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a4304e0eff44..4b971fd1fe3c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba59..c2263787b4ec 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
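In the Rx queue setup paths (qede above, txgbe later in this patch) the scattered-Rx decision follows directly: if the largest frame implied by the MTU does not fit in a single mbuf data room, scatter is forced. A rough sketch of that check; the helper name and the explicit 'overhead' parameter are illustrative, not a qede API:

#include <stdbool.h>
#include <stdint.h>
#include <rte_mbuf.h>

static bool
rx_needs_scatter(uint16_t mtu, uint16_t overhead, struct rte_mempool *mp)
{
	uint16_t bufsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
	uint32_t frame_size = (uint32_t)mtu + overhead;

	return frame_size > bufsz;
}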
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..1f55c90b419d 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..22f74735db08 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
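A couple of the mtu_set callbacks above (pfe, tap) simply drop the 'dev->data->mtu = mtu' assignment. Under the scheme described in the commit message, the ethdev layer records the MTU on successful return from the driver callback, so the callback reduces to programming the hardware limit. A sketch of the resulting shape; 'hw_set_max_frame_len()' is a hypothetical placeholder, not a DPDK or driver API:

#include <stdint.h>
#include <rte_ether.h>
#include <ethdev_driver.h>

/* Hypothetical device-specific call, declared only for the sketch. */
int hw_set_max_frame_len(void *hw_priv, uint32_t frame_size);

static int
example_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	/* On success the ethdev layer stores 'mtu' in dev->data->mtu,
	 * so the driver no longer has to do it here.
	 */
	return hw_set_max_frame_len(dev->data->dev_private, frame_size);
}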
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81a3..c8ae95a61306 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..269de9f848dd 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a88770..43dc0ed39b75 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24abe7..5d341a3e23bb 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
hw->max_rx_pkt_len = frame_size;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
return 0;
}
@@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- else
- hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
+ hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM))
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index adbd40808396..68e3c13730ad 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index a63ca70a7f06..25ca459be57b 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index d0f40a1fb4bc..8c4a8feec0c2 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 5ed0dc73ec60..e26be8edf28f 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ab8c6d6a0dad..476b147bdfcc 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 65c1d85cf2fb..8a43f6ac0f92 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,14 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-/* Ethernet ports configured with default settings using struct. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of Ethernet ports. */
-
/* Creation of flow classifier object. 8< */
struct flow_classifier {
struct rte_flow_classifier *cls;
@@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be561..fdc66368dce9 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..12062a785dc6 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -918,9 +919,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -963,8 +964,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
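On the examples side the conversion goes the other way: the application states its limit as an MTU in 'rxmode.mtu' and, as in the ip_fragmentation hunk above, can pass the very same value to rte_eth_dev_set_mtu() instead of subtracting the overhead by hand. A minimal sketch of that flow, reusing the example's JUMBO_FRAME_MAX_SIZE define and standard ethdev calls:

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
port_request_jumbo_mtu(uint16_t portid)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* Request an MTU, not a frame length. */
			.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
				RTE_ETHER_CRC_LEN,
		},
	};
	int ret;

	ret = rte_eth_dev_configure(portid, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* The MTU carried in rxmode.mtu is handed to set_mtu() unchanged. */
	return rte_eth_dev_set_mtu(portid, (uint16_t)conf.rxmode.mtu);
}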
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..9ba02e687adb 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..e5c7d46d2caa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* mbufs stored int the gragment table. 8< */
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
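The mbuf sizing above is a ceiling division of the largest frame, now rebuilt as MTU plus Ethernet header and CRC, by the per-mbuf buffer size. As a rough illustration (assuming JUMBO_FRAME_MAX_SIZE of 0x2600, i.e. 9728 bytes, as in the vhost example later in this patch, and a BUF_SIZE of 2048), each reassembled packet accounts for (9728 + 2048 - 1) / 2048 = 5 mbufs, before the count is doubled for IPv4 and IPv6.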
@@ -1054,9 +1056,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7ad94cb8228b..d032a47d1c3b 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..b3993685ec92 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -715,9 +716,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 2a993a0ca460..62f6e42a9437 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 9b3e324efb23..d9cf00c9dfc7 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 66d1491bf76d..f9438176cbb1 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 19f32809aa9d..9040be5ed9b6 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..7abb612ee6a4 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2080,6 +2081,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
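The eth_dev_get_overhead_len()/config_port_max_pkt_len() pair added above (and duplicated in the l3fwd-style examples that follow) converts the user-facing --max-pkt-len value back into an MTU using the overhead the device reports, falling back to header plus CRC when max_mtu is not usable. Worked through with the fallback overhead: RTE_ETHER_HDR_LEN (14) + RTE_ETHER_CRC_LEN (4) = 18 bytes, so '--max-pkt-len 9000' becomes an MTU of 9000 - 18 = 8982; since 8982 > RTE_ETHER_MTU (1500), the helper also turns on the jumbo Rx offload and multi-segment Tx offload.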
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..b431b9ff5f3c 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -722,6 +701,43 @@ graph_main_loop(void *conf)
}
/* >8 End of main processing loop. */
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -807,6 +823,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..e58561327c48 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
/* Power library initialized in the main routine. 8< */
int
main(int argc, char **argv)
@@ -2622,6 +2634,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..cb9bc7ad6002 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..b6cddc8c7b51 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint16_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint16_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..4f20dfc4be06 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..3b6c6c297f43 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..c32d2e12e633 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index ab6fa7d56c5d..6845c396b8d9 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae9bbee8d820..fd7207aee758 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,14 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-/* Configuration of ethernet ports. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of ethernet ports. */
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d0bf1f31e36a..da381b41c0c5 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -44,6 +44,7 @@
#define BURST_RX_RETRIES 4 /* Number of retries on RX. */
#define JUMBO_FRAME_MAX_SIZE 0x2600
+#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
/* State of virtio device. */
#define DEVICE_MAC_LEARNING 0
@@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu = MAX_MTU;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e59fb7d3478b..e19d79a40802 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca924221..4d0584af52e3 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint16_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
+ uint32_t max_rx_pktlen;
uint16_t overhead_len;
int diag;
int ret;
@@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index afdc53b674cc..9fba2bd73c84 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
* [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-08 17:20 ` Ananyev, Konstantin
2021-10-09 10:58 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (5 subsequent siblings)
6 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: Ferruh Yigit, dev
Setting an MTU larger than RTE_ETHER_MTU requires jumbo frame support, and
the application should enable the jumbo frame offload to get it.
When the jumbo frame offload is not enabled by the application but an MTU
larger than RTE_ETHER_MTU is requested, there are two options: either fail
or enable the jumbo frame offload implicitly.
Many drivers choose to enable the jumbo frame offload implicitly, since
setting a large MTU already implies it, and this improves usability.
This patch moves that logic from the drivers to the library, both to reduce
duplicated code in the drivers and to make the behaviour more visible.
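As a rough illustration (not part of the patch itself), the sketch below
condenses the logic that the rte_ethdev.c hunk at the end of this patch adds
to rte_eth_dev_set_mtu(); the surrounding function body, declarations and
unrelated checks are left out:

	/* Condensed sketch of the new library-level behaviour. */
	int is_jumbo_frame_capable = 0;

	if (*dev->dev_ops->dev_infos_get != NULL) {
		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;
		if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
			return -EINVAL;
		if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
			is_jumbo_frame_capable = 1;
	}

	/* A jumbo MTU on a device without jumbo frame support is an error. */
	if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
		return -EINVAL;

	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
	if (ret == 0) {
		dev->data->mtu = mtu;
		/* Toggle the offload flag on behalf of the driver. */
		if (mtu > RTE_ETHER_MTU)
			dev->data->dev_conf.rxmode.offloads |=
				DEV_RX_OFFLOAD_JUMBO_FRAME;
		else
			dev->data->dev_conf.rxmode.offloads &=
				~DEV_RX_OFFLOAD_JUMBO_FRAME;
	}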
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_common.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
27 files changed, 29 insertions(+), 166 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8c6f20b75aed..07ee19938930 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index adbdb87baab9..57b09f16ba44 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 97dd8e079a73..737b474dd814 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1472,13 +1472,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 4c114bf90fc7..a061d0529dd1 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 2d8271cb6095..4b30dfa222a8 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 4ead227f9122..e1d465de8234 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 0b5db486f8d6..3438b3650de6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ab571a921f9e..9283adb19304 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11775,11 +11775,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 0eabce275d92..844d26d87ba6 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8ee1335ac6cf..3038a9714517 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index e5ddae219182..c337430f2df8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 976916f870a5..3a516c52d199 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a2031a7a82cc..850ec7655f82 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index cf7804157198..293306c7be2a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4b971fd1fe3c..6886a4e5efb4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 1f55c90b419d..2ee80e2dc41f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index c8ae95a61306..b501fee5332c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 269de9f848dd..35b98097c3a4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4d0584af52e3..1740bab98a83 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-08 17:19 ` Ananyev, Konstantin
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (4 subsequent siblings)
6 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj, Jiawen Wu,
Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev
Move the requested MTU value check into the API, to avoid duplicating the
same check in every driver.
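As a quick reference (not part of the patch itself), the check that moves
into rte_eth_dev_set_mtu() is condensed below from the lib/ethdev hunk at
the end of this patch; declarations and the surrounding dev_infos_get guard
are omitted:

	/* Validate the requested MTU once in the library. */
	overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
						dev_info.max_mtu);
	frame_size = mtu + overhead_len;
	if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
		return -EINVAL;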
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_common.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
22 files changed, 25 insertions(+), 155 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2960834b4539..c36cd7b1d2f0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 07ee19938930..dc33b961320a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3025,7 +3025,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 0c9cc2f5bb3f..70b879fed100 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 57b09f16ba44..3172e3b2de87 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 737b474dd814..d8f4de65ce6d 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1468,10 +1468,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a061d0529dd1..3164fde5b939 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4363,9 +4363,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4374,15 +4372,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index fbcbbb6c0533..a7372c1787c7 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4b30dfa222a8..79987bec273c 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1530,17 +1530,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 9283adb19304..2824592aa62e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11757,25 +11757,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 844d26d87ba6..2d43c666fdbb 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3038a9714517..703178c6d40c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index dcbc26b8186e..e279ae1fff1d 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index e8a33f04bd69..377b96c0236a 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3a516c52d199..9d1d811a2e37 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 850ec7655f82..b1ce35b334da 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index fb65be2c2dc3..b2355fa695bc 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 293306c7be2a..206da6f7cfda 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (dev->configured && otx2_ethdev_is_ptp_en(dev))
frame_size += NIX_TIMESYNC_RX_OFFSET;
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 6886a4e5efb4..84e23ff03418 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index b501fee5332c..44c6b1c72354 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 35b98097c3a4..c6fcb1871981 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3463,18 +3463,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1740bab98a83..ce0ed509d28f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3652,6 +3652,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3659,6 +3662,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-08 17:11 ` Ananyev, Konstantin
2021-10-10 5:46 ` Matan Azrad
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
` (3 subsequent siblings)
6 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
it from the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' against
'RTE_ETHER_MTU'.
This removes an extra configuration step and simplifies usage.
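For illustration only, a hypothetical application snippet under the new
model; 'port_id' and the 9000-byte MTU are assumptions for the example, not
values taken from this patch:

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	uint16_t jumbo_mtu = 9000; /* hypothetical jumbo MTU */
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Jumbo capability is deduced from the reported maximum MTU. */
	if (jumbo_mtu > dev_info.max_mtu)
		jumbo_mtu = dev_info.max_mtu;

	/* No DEV_RX_OFFLOAD_JUMBO_FRAME flag any more: the requested MTU
	 * alone tells the driver whether jumbo frames are needed.
	 */
	port_conf.rxmode.mtu = jumbo_mtu;
	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);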
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 25 +---------
app/test-pmd/testpmd.c | 48 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 5 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 1 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_common.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 5 +-
lib/ethdev/rte_ethdev.c | 26 +---------
lib/ethdev/rte_ethdev.h | 1 -
75 files changed, 47 insertions(+), 259 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a677451073ae..117945c2c61e 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1923,7 +1923,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index db3eeffa0093..e890fadc716c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- fprintf(stderr,
- "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 8c11ab23dd14..deb279b7ea5d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1508,12 +1508,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- fprintf(stderr,
- "Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3475,24 +3469,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3501,40 +3489,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- fprintf(stderr,
- "Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 076c154b2b3a..b4fbeb61e838 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1024,7 +1024,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..8f10c6c78a1f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 483cb7da576f..9580445828bf 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index c36cd7b1d2f0..0bc9e5eeeb10 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5121d05da65f..6743cf92b0e6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -595,7 +595,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index dc33b961320a..e9d04f354a39 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 412acff42f65..2f3a1759419f 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 10e05e6b5edd..fa8c48f1eeb0 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -75,9 +75,8 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 70b879fed100..1374f32b6826 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if ((&rxq->fl) != NULL)
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3172e3b2de87..defc072072af 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index d8f4de65ce6d..bce7ddd9c5fa 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..1ae78fe71f02 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..e061f80a906a 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e9a30d393bd7..dda4d2101adb 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3a9d5031b262..6d1026d31951 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1918,7 +1918,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index a7372c1787c7..6457677d300a 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d031..c5777772a09e 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..47c5efe9ea77 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 5e4b361ca6c0..093021246286 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 79987bec273c..4005414aeb71 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index e1d465de8234..dbd4c54b18c6 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2691,7 +2691,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 3438b3650de6..eee65ac77399 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 2824592aa62e..6a64221778fa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3760,7 +3760,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 1d27cf2b0a01..69c282baa723 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2911,7 +2911,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 2d43c666fdbb..2c4103ac7ef9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index c3c7ad88f250..16f642566e91 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -72,7 +72,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -683,7 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f9137..d28fedc96e1a 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 703178c6d40c..17d30b735693 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f9ef6ce57277..cc7908d32584 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
@@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 28d3076439c3..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c337430f2df8..096a7a5b2439 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6234,7 +6234,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6256,14 +6255,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 03991711fd6e..c223ef37c79f 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3033,7 +3033,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5095,7 +5094,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 6f4f351222d3..0cc3bccc0825 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 5ce71661c84e..ef987b7de1b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index b1ce35b334da..a0bb5b9640c2 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..47ee126ed7fd 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -148,7 +148,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 84e23ff03418..06c3ccf20716 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..62b215f62cd6 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 5d341a3e23bb..a05e73cd8b60 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2556,7 +2556,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 2f40ae907dcd..0210f9140b48 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 12062a785dc6..7c0cb093eda3 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index e5c7d46d2caa..af67db49f7fb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index d032a47d1c3b..4a741bfdde4d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index b3993685ec92..63bbd7e64ceb 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 62f6e42a9437..1790ec024072 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 7abb612ee6a4..f6dfb156ac56 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index b431b9ff5f3c..a185a0512826 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index e58561327c48..12b4dce77ce1 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index cb9bc7ad6002..22d35749410b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index b6cddc8c7b51..8fc3a7c675a2 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index da381b41c0c5..a9c207124153 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
return -1;
}
mergeable = !!ret;
- if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (ret)
vmdq_conf_default.rxmode.mtu = MAX_MTU;
- }
break;
case OPT_STATS_NUM:
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ce0ed509d28f..c2b624aba1a0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1485,13 +1484,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3639,7 +3631,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3667,27 +3658,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9fba2bd73c84..4d0f956a4b28 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1389,7 +1389,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
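A minimal sketch of the pattern this patch converges on (illustration only,
not part of the diff; the helper name is made up): with the JUMBO_FRAME
offload gone, "jumbo" handling is decided purely from the MTU stored in
'dev->data->mtu', which applications can read back through the existing
'rte_eth_dev_get_mtu()' API.

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Illustrative only: jumbo mode is now derived from the configured MTU. */
    static inline int
    port_uses_jumbo(uint16_t port_id)
    {
            uint16_t mtu;

            if (rte_eth_dev_get_mtu(port_id, &mtu) != 0)
                    return 0; /* unknown port: treat as non-jumbo */

            /* Replaces the removed DEV_RX_OFFLOAD_JUMBO_FRAME check. */
            return mtu > RTE_ETHER_MTU;
    }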
* [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
` (2 preceding siblings ...)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-08 16:51 ` Ananyev, Konstantin
2021-10-09 11:43 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
` (2 subsequent siblings)
6 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko; +Cc: Ferruh Yigit, dev, Huisong Li
Both 'rte_eth_dev_configure()' and 'rte_eth_dev_set_mtu()' set the MTU, but
they perform slightly different checks; for example, one checks the minimum
MTU against 'RTE_ETHER_MIN_MTU' and the other against 'RTE_ETHER_MIN_LEN'.
Move the checks into a common function to unify them. This also has the
benefit of producing common error logs.
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
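Below the cut, a hedged application-side sketch (not part of the patch; the
function name and parameters are illustrative): after this change both
configuration paths are validated by the same logic, so an MTU outside the
device range fails consistently on either path, with the reason logged by
ethdev.

    #include <rte_ethdev.h>

    static int
    configure_port_mtu(uint16_t port_id, uint16_t mtu,
                       uint16_t nb_rxq, uint16_t nb_txq)
    {
            /* Path 1: request the MTU at configure time. */
            struct rte_eth_conf conf = {
                    .rxmode = { .mtu = mtu, },
            };
            int ret;

            ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
            if (ret < 0)
                    return ret; /* e.g. -EINVAL if the MTU is out of range */

            /* Path 2: change it later; checked against the same limits. */
            return rte_eth_dev_set_mtu(port_id, mtu);
    }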
lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 54 insertions(+), 30 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c2b624aba1a0..0a6e952722ae 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 4d0f956a4b28..50e124ff631f 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
* [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
` (3 preceding siblings ...)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-07 16:56 ` Ferruh Yigit
2021-10-08 16:53 ` Ananyev, Konstantin
2021-10-08 15:57 ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
2021-10-09 10:56 ` lihuisong (C)
6 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-07 16:56 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove the unused 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index af67db49f7fb..2ff5ea3e7bc5 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -516,7 +516,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (5 preceding siblings ...)
2021-10-05 22:07 ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
@ 2021-10-08 8:36 ` Xu, Rosen
2021-10-10 6:30 ` Matan Azrad
7 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-10-08 8:36 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Iremonger, Bernard, Ananyev, Konstantin,
Kiran Kumar K, Nithin Dabilpuram, Hunt, David, Mcnamara, John,
Richardson, Bruce, Igor Russkikh, Steven Webster, Peters, Matt,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Wang, Haiyue, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Daley, John,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Andrew Boyer, Shijith Thotton, Srisivasubramanian Srinivasan,
Zyta Szpak, Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Wiles, Keith, Jiawen Wu, Jian Wang,
Maxime Coquelin, Xia, Chenbo, Chautru, Nicolas, Van Haaren,
Harry, Dumitrescu, Cristian, Nicolau, Radu, Akhil Goyal,
Kantecki, Tomasz, Doherty, Declan, Pavan Nikhilesh, Rybalchenko,
Kirill, Singh, Jasvinder, Thomas Monjalon
Cc: dev
Hi,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, October 06, 2021 1:17
> To: Jerin Jacob <jerinj@marvell.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Chas Williams <chas3@att.com>; Min Hu (Connor) <humin29@huawei.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> <sachin.saxena@oss.nxp.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang,
> Xiao W <xiao.w.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> Viacheslav Ovsiienko <viacheslavo@nvidia.com>; Harman Kalra
> <hkalra@marvell.com>; Maciej Czekaj <mczekaj@marvell.com>; Ray Kinsella
> <mdr@ashroe.eu>; Iremonger, Bernard <bernard.iremonger@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Kiran Kumar K
> <kirankumark@marvell.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Hunt, David <david.hunt@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Igor Russkikh <irusskikh@marvell.com>;
> Steven Webster <steven.webster@windriver.com>; Peters, Matt
> <matt.peters@windriver.com>; Somalapuram Amaranath
> <asomalap@amd.com>; Rasesh Mody <rmody@marvell.com>; Shahed
> Shaikh <shshaikh@marvell.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Sunil Kumar Kori <skori@marvell.com>;
> Satha Rao <skoteshwar@marvell.com>; Rahul Lakkireddy
> <rahul.lakkireddy@chelsio.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> Marcin Wojtas <mw@semihalf.com>; Michal Krawczyk <mk@semihalf.com>;
> Shai Brandes <shaibran@amazon.com>; Evgeny Schemeilin
> <evgenys@amazon.com>; Igor Chauskin <igorch@amazon.com>; Gagandeep
> Singh <g.singh@nxp.com>; Daley, John <johndale@cisco.com>; Hyong Youb
> Kim <hyonkim@cisco.com>; Ziyang Xuan <xuanziyang2@huawei.com>;
> Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>; Guoyang Zhou
> <zhouguoyang@huawei.com>; Yisen Zhuang <yisen.zhuang@huawei.com>;
> Lijun Ou <oulijun@huawei.com>; Xing, Beilei <beilei.xing@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> Andrew Boyer <aboyer@pensando.io>; Xu, Rosen <rosen.xu@intel.com>;
> Shijith Thotton <sthotton@marvell.com>; Srisivasubramanian Srinivasan
> <srinivasan@marvell.com>; Zyta Szpak <zr@semihalf.com>; Liron Himi
> <lironh@marvell.com>; Heinrich Kuhn <heinrich.kuhn@corigine.com>;
> Devendra Singh Rawat <dsinghrawat@marvell.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Wiles, Keith <keith.wiles@intel.com>;
> Jiawen Wu <jiawenwu@trustnetic.com>; Jian Wang
> <jianwang@trustnetic.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; Xia, Chenbo <chenbo.xia@intel.com>;
> Chautru, Nicolas <nicolas.chautru@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>;
> Akhil Goyal <gakhil@marvell.com>; Kantecki, Tomasz
> <tomasz.kantecki@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Pavan Nikhilesh <pbhagavatula@marvell.com>; Rybalchenko, Kirill
> <kirill.rybalchenko@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org
> Subject: [PATCH v4 1/6] ethdev: fix max Rx packet length
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
> ---
> app/test-eventdev/test_perf_common.c | 1 -
> app/test-eventdev/test_pipeline_common.c | 5 +-
> app/test-pmd/cmdline.c | 49 +++----
> app/test-pmd/config.c | 22 ++-
> app/test-pmd/parameters.c | 4 +-
> app/test-pmd/testpmd.c | 103 ++++++++------
> app/test-pmd/testpmd.h | 2 +-
> app/test/test_link_bonding.c | 1 -
> app/test/test_link_bonding_mode4.c | 1 -
> app/test/test_link_bonding_rssconf.c | 2 -
> app/test/test_pmd_perf.c | 1 -
> doc/guides/nics/dpaa.rst | 2 +-
> doc/guides/nics/dpaa2.rst | 2 +-
> doc/guides/nics/features.rst | 2 +-
> doc/guides/nics/fm10k.rst | 2 +-
> doc/guides/nics/mlx5.rst | 4 +-
> doc/guides/nics/octeontx.rst | 2 +-
> doc/guides/nics/thunderx.rst | 2 +-
> doc/guides/rel_notes/deprecation.rst | 25 ----
> doc/guides/sample_app_ug/flow_classify.rst | 7 +-
> doc/guides/sample_app_ug/l3_forward.rst | 6 +-
> .../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
> doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
> .../sample_app_ug/l3_forward_power_man.rst | 4 +-
> .../sample_app_ug/performance_thread.rst | 4 +-
> doc/guides/sample_app_ug/skeleton.rst | 7 +-
> drivers/net/atlantic/atl_ethdev.c | 3 -
> drivers/net/avp/avp_ethdev.c | 17 +--
> drivers/net/axgbe/axgbe_ethdev.c | 7 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 21 +--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
> drivers/net/cnxk/cnxk_ethdev.c | 9 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
> drivers/net/cxgbe/cxgbe_main.c | 3 +-
> drivers/net/cxgbe/sge.c | 3 +-
> drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
> drivers/net/dpaa2/dpaa2_ethdev.c | 31 ++---
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 18 +--
> drivers/net/e1000/igb_rxtx.c | 16 +--
> drivers/net/ena/ena_ethdev.c | 27 ++--
> drivers/net/enetc/enetc_ethdev.c | 24 +---
> drivers/net/enic/enic_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 42 +++---
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
> drivers/net/hns3/hns3_ethdev.c | 42 +-----
> drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
> drivers/net/hns3/hns3_rxtx.c | 10 +-
> drivers/net/i40e/i40e_ethdev.c | 10 +-
> drivers/net/i40e/i40e_rxtx.c | 4 +-
> drivers/net/iavf/iavf_ethdev.c | 9 +-
> drivers/net/ice/ice_dcf_ethdev.c | 5 +-
> drivers/net/ice/ice_ethdev.c | 14 +-
> drivers/net/ice/ice_rxtx.c | 12 +-
> drivers/net/igc/igc_ethdev.c | 51 ++-----
> drivers/net/igc/igc_ethdev.h | 7 +
> drivers/net/igc/igc_txrx.c | 22 +--
> drivers/net/ionic/ionic_ethdev.c | 12 +-
> drivers/net/ionic/ionic_rxtx.c | 6 +-
> drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
> drivers/net/ixgbe/ixgbe_pf.c | 6 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
> drivers/net/liquidio/lio_ethdev.c | 20 +--
> drivers/net/mlx4/mlx4_rxq.c | 17 +--
> drivers/net/mlx5/mlx5_rxq.c | 25 ++--
> drivers/net/mvneta/mvneta_ethdev.c | 7 -
> drivers/net/mvneta/mvneta_rxtx.c | 13 +-
> drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
> drivers/net/nfp/nfp_common.c | 9 +-
> drivers/net/octeontx/octeontx_ethdev.c | 12 +-
> drivers/net/octeontx2/otx2_ethdev.c | 2 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
> drivers/net/pfe/pfe_ethdev.c | 7 +-
> drivers/net/qede/qede_ethdev.c | 16 +--
> drivers/net/qede/qede_rxtx.c | 8 +-
> drivers/net/sfc/sfc_ethdev.c | 4 +-
> drivers/net/sfc/sfc_port.c | 6 +-
> drivers/net/tap/rte_eth_tap.c | 7 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +-
> drivers/net/txgbe/txgbe_ethdev.c | 7 +-
> drivers/net/txgbe/txgbe_ethdev.h | 4 +
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
> drivers/net/txgbe/txgbe_rxtx.c | 19 +--
> drivers/net/virtio/virtio_ethdev.c | 9 +-
> examples/bbdev_app/main.c | 1 -
> examples/bond/main.c | 1 -
> examples/distributor/main.c | 1 -
> .../pipeline_worker_generic.c | 1 -
> .../eventdev_pipeline/pipeline_worker_tx.c | 1 -
> examples/flow_classify/flow_classify.c | 12 +-
> examples/ioat/ioatfwd.c | 1 -
> examples/ip_fragmentation/main.c | 12 +-
> examples/ip_pipeline/link.c | 2 +-
> examples/ip_reassembly/main.c | 12 +-
> examples/ipsec-secgw/ipsec-secgw.c | 7 +-
> examples/ipv4_multicast/main.c | 9 +-
> examples/kni/main.c | 6 +-
> examples/l2fwd-cat/l2fwd-cat.c | 8 +-
> examples/l2fwd-crypto/main.c | 1 -
> examples/l2fwd-event/l2fwd_common.c | 1 -
> examples/l3fwd-acl/main.c | 129 +++++++++---------
> examples/l3fwd-graph/main.c | 83 +++++++----
> examples/l3fwd-power/main.c | 90 +++++++-----
> examples/l3fwd/main.c | 84 +++++++-----
> .../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
> .../performance-thread/l3fwd-thread/test.sh | 24 ++--
> examples/pipeline/obj.c | 2 +-
> examples/ptpclient/ptpclient.c | 10 +-
> examples/qos_meter/main.c | 1 -
> examples/qos_sched/init.c | 1 -
> examples/rxtx_callbacks/main.c | 10 +-
> examples/skeleton/basicfwd.c | 12 +-
> examples/vhost/main.c | 4 +-
> examples/vm_power_manager/main.c | 11 +-
> lib/ethdev/rte_ethdev.c | 92 +++++++------
> lib/ethdev/rte_ethdev.h | 2 +-
> lib/ethdev/rte_ethdev_trace.h | 2 +-
> 121 files changed, 801 insertions(+), 1071 deletions(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-
> eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-
> eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index a9efd027c376..a677451073ae 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void
> *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len") != 0) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> fprintf(stderr, "Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - fprintf(stderr,
> - "max-pkt-len can not be less
> than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port-
> >dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port-
> >dev_info);
> - if (ret != 0) {
> - fprintf(stderr,
> - "rte_eth_dev_info_get() failed for
> port %u\n",
> - pid);
> - return;
> - }
> -
> - max_rx_pkt_len_backup = port-
> >dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + fprintf(stderr,
> + "max-pkt-len can not be less than %d\n",
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len =
> max_rx_pkt_len_backup;
> - } else {
> - fprintf(stderr, "Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + fprintf(stderr,
> + "rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 9c66329e96ee..db3eeffa0093 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> int diag;
> struct rte_port *rte_port = &ports[port_id];
> struct rte_eth_dev_info dev_info;
> - uint16_t eth_overhead;
> int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag != 0) {
> fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when
> the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - if (mtu > RTE_ETHER_MTU) {
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (mtu > RTE_ETHER_MTU)
> rte_port->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> - } else
> + else
> rte_port->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 3f94a82e321f..27eb4bc667df 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -870,7 +870,9 @@ launch_args_parse(int argc, char** argv)
> if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
> n = atoi(optarg);
> if (n >= RTE_ETHER_MIN_LEN)
> - rx_mode.max_rx_pkt_len = (uint32_t)
> n;
> + rx_mode.mtu = (uint32_t) n -
> + (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> else
> rte_exit(EXIT_FAILURE,
> "Invalid max-pkt-len=%d -
> should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 97ae52e17ecd..8c23cfe7c3da 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -446,13 +446,7 @@ lcoreid_t latencystats_lcore_id = -1;
> /*
> * Ethernet device configuration.
> */
> -struct rte_eth_rxmode rx_mode = {
> - /* Default maximum frame length.
> - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> - * in init_config().
> - */
> - .max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
>
> struct rte_eth_txmode tx_mode = {
> .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1481,11 +1475,24 @@ check_nb_hairpinq(queueid_t hairpinq)
> return 0;
> }
>
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t eth_overhead;
> +
> + if (dev_info->max_mtu != UINT16_MAX &&
> + dev_info->max_rx_pktlen > dev_info->max_mtu)
> + eth_overhead = dev_info->max_rx_pktlen - dev_info-
> >max_mtu;
> + else
> + eth_overhead = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return eth_overhead;
> +}
> +
> static void
> init_config_port_offloads(portid_t pid, uint32_t socket_id)
> {
> struct rte_port *port = &ports[pid];
> - uint16_t data_size;
> int ret;
> int i;
>
> @@ -1496,7 +1503,7 @@ init_config_port_offloads(portid_t pid, uint32_t
> socket_id)
> if (ret != 0)
> rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> fprintf(stderr,
> "Updating jumbo frame offload failed for port %u\n",
> @@ -1528,14 +1535,20 @@ init_config_port_offloads(portid_t pid, uint32_t
> socket_id)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> -
> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
> mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size +
> RTE_PKTMBUF_HEADROOM;
> - TESTPMD_LOG(WARNING,
> - "Configured mbuf size of the first
> segment %hu\n",
> - mbuf_data_size[0]);
> + uint32_t eth_overhead = get_eth_overhead(&port-
> >dev_info);
> + uint16_t mtu;
> +
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + uint16_t data_size = (mtu + eth_overhead) /
> + port-
> >dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint16_t buffer_size = data_size +
> RTE_PKTMBUF_HEADROOM;
> +
> + if (buffer_size > mbuf_data_size[0]) {
> + mbuf_data_size[0] = buffer_size;
> + TESTPMD_LOG(WARNING,
> + "Configured mbuf size of the first
> segment %hu\n",
> + mbuf_data_size[0]);
> + }
> }
> }
> }
> @@ -3451,44 +3464,45 @@ rxtx_port_config(struct rte_port *port)
>
> /*
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME
> offload,
> - * MTU is also aligned if JUMBO_FRAME offload is not set.
> + * MTU is also aligned.
> *
> * port->dev_info should be set before calling this function.
> *
> + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> + *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> uint64_t rx_offloads;
> - int ret;
> + uint16_t mtu, new_mtu;
> bool on;
>
> - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> - if (port->dev_info.max_mtu != UINT16_MAX &&
> - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> - eth_overhead = port->dev_info.max_rx_pktlen -
> - port->dev_info.max_mtu;
> - else
> - eth_overhead = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> + eth_overhead = get_eth_overhead(&port->dev_info);
>
> - rx_offloads = port->dev_conf.rxmode.offloads;
> + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> + printf("Failed to get MTU for port %u\n", portid);
> + return -1;
> + }
> +
> + if (max_rx_pktlen == 0)
> + max_rx_pktlen = mtu + eth_overhead;
>
> - /* Default config value is 0 to use PMD specific overhead */
> - if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU
> + eth_overhead;
> + rx_offloads = port->dev_conf.rxmode.offloads;
> + new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU +
> eth_overhead) {
> + if (new_mtu <= RTE_ETHER_MTU) {
> rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> on = false;
> } else {
> if ((port->dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> fprintf(stderr,
> "Frame size (%u) is not supported by
> port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len,
> - portid);
> + max_rx_pktlen, portid);
> return -1;
> }
> rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3509,19 +3523,18 @@ update_jumbo_frame_offload(portid_t portid)
> }
> }
>
> - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> - * if unset do it here
> - */
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - ret = eth_dev_set_mtu_mp(portid,
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead);
> - if (ret)
> - fprintf(stderr,
> - "Failed to set MTU to %u for port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead,
> - portid);
> + if (mtu == new_mtu)
> + return 0;
> +
> + if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
> + fprintf(stderr,
> + "Failed to set MTU to %u for port %u\n",
> + new_mtu, portid);
> + return -1;
> }
>
> + port->dev_conf.rxmode.mtu = new_mtu;
> +
> return 0;
> }
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 5863b2f43f3e..17562215c733 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id,
> __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> .split_hdr_size = 0,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c
> b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c
> b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
> static struct rte_eth_conf rss_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr
> ports_eth_addr[RTE_MAX_ETHPORTS];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 7355ec305916..9dad612058c6 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA SoC family support a maximum of a 10240 jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index df23a5704dca..831bc564883a 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -545,7 +545,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 4fce8cd1c976..483cb7da576f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
> Supports Rx jumbo frames.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.max_rx_pkt_len``.
> + ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The FM10000 family of NICS support a maximum of a 15K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
> up to 15364 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index bae73f42d882..1f5619ed53fc 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -606,9 +606,9 @@ Driver options
> and each stride receives one packet. MPRQ can improve throughput for
> small-packet traffic.
>
> - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> + When MPRQ is enabled, MTU can be larger than the size of
> user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled.
> PMD will
> - configure large stride size enough to accommodate max_rx_pkt_len as long
> as
> + configure large stride size enough to accommodate MTU as long as
> device allows. Note that this can waste system memory compared to
> enabling Rx
> scatter and multi-segment packet.
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame.
> The value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
> up to 32k bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
> up to 9200 bytes can still reach the host interface.
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index a2fe766d4b4f..1063a1fe4bea 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,31 +81,6 @@ Deprecation Notices
> In 19.11 PMDs will still update the field even when the offload is not
> enabled.
>
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will
> be
> - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> - The new ``mtu`` field will be used to configure the initial device MTU via
> - ``rte_eth_dev_configure()`` API.
> - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> - The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to
> store
> - the configured ``mtu`` value,
> - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> - be used to store the user configuration request.
> - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME``
> enabled,
> - ``mtu`` field will be always valid.
> - When ``mtu`` config is not provided by the application, default
> ``RTE_ETHER_MTU``
> - value will be used.
> - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set
> successfully,
> - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> - An application may need to configure device for a specific Rx packet size,
> like for
> - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received
> packet size
> - can't be bigger than Rx buffer size.
> - To cover these cases an application needs to know the device packet
> overhead to be
> - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> - the device packet overhead can be calculated as:
> - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct
> rte_eth_dev_info).max_mtu``
> -
> * ethdev: ``rx_descriptor_done`` dev_ops and
> ``rte_eth_rx_descriptor_done``
> will be removed in 21.11.
> Existing ``rte_eth_rx_descriptor_status`` and
> ``rte_eth_tx_descriptor_status``
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst
> b/doc/guides/sample_app_ug/flow_classify.rst
> index 812aaa87b05b..6c4c04e935e4 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -162,12 +162,7 @@ Forwarding application is shown below:
> :end-before: >8 End of initializing a given port.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
> - :language: c
> - :start-after: Ethernet ports configured with default settings using struct. 8<
> - :end-before: >8 End of configuration of Ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/doc/guides/sample_app_ug/l3_forward.rst
> b/doc/guides/sample_app_ug/l3_forward.rst
> index 2d5cd5f1c0ba..56af5cd5b383 100644
> --- a/doc/guides/sample_app_ug/l3_forward.rst
> +++ b/doc/guides/sample_app_ug/l3_forward.rst
> @@ -65,7 +65,7 @@ The application has a number of command line
> options::
> [--lookup LOOKUP_METHOD]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--hash-entry-num]
> [--ipv6]
> @@ -95,9 +95,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet
> destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo,
> maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> index 2cf6e4556f14..486247ac2e4f 100644
> --- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> @@ -236,7 +236,7 @@ The application has a number of command line
> options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --
> rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--
> no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> + ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --
> rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-
> dest=X,MM:MM:MM:MM:MM:MM]
>
>
> where,
> @@ -255,8 +255,6 @@ where,
> * --alg=<val>: optional, ACL classify method to use, one of:
> ``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst
> b/doc/guides/sample_app_ug/l3_forward_graph.rst
> index 03e9a85aa68c..0a3e0d44ecea 100644
> --- a/doc/guides/sample_app_ug/l3_forward_graph.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
> @@ -48,7 +48,7 @@ The application has a number of command line options
> similar to l3fwd::
> [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--per-port-pool]
>
> @@ -63,9 +63,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet
> destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo,
> maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst
> b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> index 0495314c87d5..8817eaadbfc3 100644
> --- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> @@ -88,7 +88,7 @@ The application has a number of command line options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK
> [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-
> pkt-len PKTLEN]] [--no-numa]
> + ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK
> [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [-
> -no-numa]
>
> where,
>
> @@ -99,8 +99,6 @@ where,
>
> * --config (port,queue,lcore)[,(port,queue,lcore)]: determines which
> queues from which ports are mapped to which cores.
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/performance_thread.rst
> b/doc/guides/sample_app_ug/performance_thread.rst
> index 9b09838f6448..7d1bf6eaae8c 100644
> --- a/doc/guides/sample_app_ug/performance_thread.rst
> +++ b/doc/guides/sample_app_ug/performance_thread.rst
> @@ -59,7 +59,7 @@ The application has a number of command line
> options::
> -p PORTMASK [-P]
> --rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
> --tx(lcore,thread)[,(lcore,thread)]
> - [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
> + [--max-pkt-len PKTLEN] [--no-numa]
> [--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
> [--parse-ptype]
>
> @@ -80,8 +80,6 @@ Where:
> the lcore the thread runs on, and the id of RX thread with which it is
> associated. The parameters are explained below.
>
> -* ``--enable-jumbo``: optional, enables jumbo frames.
> -
> * ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa``: optional, disables numa awareness.
> diff --git a/doc/guides/sample_app_ug/skeleton.rst
> b/doc/guides/sample_app_ug/skeleton.rst
> index f7bcd7ed2a1d..6d0de6440105 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -106,12 +106,7 @@ Forwarding application is shown below:
> :end-before: >8 End of main functional part of port initialization.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. literalinclude:: ../../../examples/skeleton/basicfwd.c
> - :language: c
> - :start-after: Configuration of ethernet ports. 8<
> - :end-before: >8 End of configuration of ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return 0;
> }
>
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..0feacc822433 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
> avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
> struct avp_dev *avp)
> {
> - unsigned int max_rx_pkt_len;
> + unsigned int max_rx_pktlen;
>
> - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
>
> - if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> - (max_rx_pkt_len > avp->host_mbuf_size)) {
> + if (max_rx_pktlen > avp->guest_mbuf_size ||
> + max_rx_pktlen > avp->host_mbuf_size) {
> /*
> * If the guest MTU is greater than either the host or guest
> * buffers then chained mbufs have to be enabled in the TX
> * direction. It is assumed that the application will not need
> - * to send packets larger than their max_rx_pkt_len (MRU).
> + * to send packets larger than their MTU.
> */
> return 1;
> }
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
>
> PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u)
> mbuf_size=(%u,%u)\n",
> avp->max_rx_pkt_len,
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN,
> avp->host_mbuf_size,
> avp->guest_mbuf_size);
>
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts, uint16_t nb_pkts)
> * function; send it truncated to avoid the
> performance
> * hit of having to manage returning the already
> * allocated buffer to the free list. This should not
> - * happen since the application should have set the
> - * max_rx_pkt_len based on its MTU and it should be
> + * happen since the application should not send
> + * packets larger than its MTU and it should be
> * policing its own packet sizes.
> */
> txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> struct axgbe_port *pdata = dev->data->dev_private;
> int ret;
> struct rte_eth_dev_data *dev_data = dev->data;
> - uint16_t max_pkt_len = dev_data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + uint16_t max_pkt_len;
>
> dev->dev_ops = &axgbe_eth_dev_ops;
>
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>
> rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
> rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> if ((dev_data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) ||
> max_pkt_len > pdata->rx_buf_size)
> dev_data->scattered_rx = 1;
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (frame_size > AXGBE_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> val = 0;
> }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
> bnx2x_dev_configure(struct rte_eth_dev *dev)
> {
> struct bnx2x_softc *sc = dev->data->dev_private;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
>
> PMD_INIT_FUNC_TRACE(sc);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - dev->data->mtu = sc->mtu;
> - }
> + sc->mtu = dev->data->dev_conf.rxmode.mtu;
>
> if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
> PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater
> than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index aa7e7fdc85fa..8c6f20b75aed 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct
> rte_eth_dev *eth_dev)
> rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE *
> - BNXT_NUM_VLANS;
> - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> - }
> + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
> return 0;
>
> resource_error:
> @@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev
> *eth_dev)
> */
> static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> uint16_t buf_size;
> int i;
>
> @@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev
> *eth_dev)
>
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq-
> >mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len >
> buf_size)
> + if (eth_dev->data->mtu + overhead > buf_size)
> return 1;
> }
> return 0;
> @@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev,
> __rte_unused uint16_t queue_id,
>
> int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> uint32_t rc = 0;
> @@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> if (!eth_dev->data->nb_rx_queues)
> return rc;
>
> - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN +
> - VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> + new_pkt_size = new_mtu + overhead;
>
> /*
> * Disallow any MTU change that would require scattered receive
> support
> @@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
>
> /* Is there a change in mtu setting? */
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len ==
> new_pkt_size)
> + if (eth_dev->data->mtu == new_mtu)
> return rc;
>
> for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
> }
>
> - if (!rc)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_pkt_size;
> -
> if (bnxt_hwrm_config_host_mtu(bp))
> PMD_DRV_LOG(WARNING, "Failed to configure host
> MTU\n");
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 54987d96b34d..412acff42f65 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_VLAN_FILTER;
>
> - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - bonded_eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + slave_eth_dev->data->dev_conf.rxmode.mtu =
> + bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
> index 8629193d5049..8d0677cd89d9 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp
> *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD >
> buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> }
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct cnxk_eth_rxq_sp *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
> /* Setup scatter mode if needed by jumbo */
> nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len -
> CNXK_NIX_L2_OVERHEAD +
> - CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> - rc = cnxk_nix_mtu_set(eth_dev, mtu);
> + rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> plt_err("Failed to set default MTU size, rc=%d", rc);
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> goto exit;
> }
>
> - frame_size += RTE_ETHER_CRC_LEN;
> -
> - if (frame_size > RTE_ETHER_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> return err;
>
> /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> + if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> /* set to jumbo mode if needed */
> - if (new_mtu > CXGBE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
>
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1,
> -1,
> -1, -1, true);
> - if (!err)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_mtu;
> -
> return err;
> }
>
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> const struct rte_eth_rxconf *rx_conf __rte_unused,
> struct rte_mempool *mp)
> {
> - unsigned int pkt_len = eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> rxq->fl.size = temp_nb_desc;
>
> /* Set to jumbo mode if necessary */
> - if (pkt_len > CXGBE_ETH_MAX_LEN)
> + if (eth_dev->data->mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c
> b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
> unsigned int mtu;
> int ret;
>
> - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> + mtu = pi->eth_dev->data->mtu;
>
> conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> u32 wr_mid;
> u64 cntrl, *end;
> bool v6;
> - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> + u32 max_pkt_len;
>
> /* Reject xmit if queue is stopped */
> if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> return 0;
> }
>
> + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
> (unlikely(m->pkt_len > max_pkt_len)))
> goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 36d8f9249df1..adbdb87baab9 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > DPAA_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> struct fman_if *fif = dev->process_private;
> struct __fman_if *__fif;
> struct rte_intr_handle *intr_handle;
> + uint32_t max_rx_pktlen;
> int speed, duplex;
> int ret;
>
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - DPAA_PMD_DEBUG("enabling jumbo");
> -
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - DPAA_MAX_RX_PKT_LEN)
> - max_len = dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> - else {
> - DPAA_PMD_INFO("enabling jumbo override conf
> max len=%d "
> - "supported is %d",
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - DPAA_MAX_RX_PKT_LEN);
> - max_len = DPAA_MAX_RX_PKT_LEN;
> - }
> -
> - fman_if_set_maxfrm(dev->process_private, max_len);
> - dev->data->mtu = max_len
> - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> + DPAA_PMD_INFO("enabling jumbo override conf max
> len=%d "
> + "supported is %d",
> + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
> }
>
> + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
> if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
> DPAA_PMD_DEBUG("enabling scatter mode");
> fman_if_set_sg(dev->process_private, 1);
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> u32 flags = 0;
> int ret;
> u32 buffsz = rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> return -EINVAL;
> }
>
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN +
> + VLAN_TAG_SIZE;
> /* Max packet can fit in single buffer */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + if (max_rx_pktlen <= buffsz) {
> ;
> } else if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - buffsz * DPAA_SGT_MAX_ENTRIES) {
> - DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> + DPAA_PMD_ERR("Maximum Rx packet size %d too
> big to fit "
> "MaxSGlist %d",
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - buffsz * DPAA_SGT_MAX_ENTRIES);
> + max_rx_pktlen, buffsz *
> DPAA_SGT_MAX_ENTRIES);
> rte_errno = EOVERFLOW;
> return -rte_errno;
> }
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> DPAA_PMD_WARN("The requested maximum Rx packet size
> (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz - RTE_PKTMBUF_HEADROOM);
> + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
> }
>
> dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>
> dpaa_intf->valid = 1;
> DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf-
> >name,
> - fman_if_get_sg_enable(fif),
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + fman_if_get_sg_enable(fif), max_rx_pktlen);
> /* checking if push mode only, no error check for now */
> if (!rxq->is_static &&
> dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index c12169578e22..758a14e0ad2d 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> int tx_l3_csum_offload = false;
> int tx_l4_csum_offload = false;
> int ret, tc_index;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (eth_conf->rxmode.max_rx_pkt_len <=
> DPAA2_MAX_RX_PKT_LEN) {
> - ret = dpni_set_max_frame_length(dpni,
> CMD_PRI_LOW,
> - priv->token, eth_conf-
> >rxmode.max_rx_pkt_len
> - - RTE_ETHER_CRC_LEN);
> - if (ret) {
> - DPAA2_PMD_ERR(
> - "Unable to set mtu. check config");
> - return ret;
> - }
> - dev->data->mtu =
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
> -
> - VLAN_TAG_SIZE;
> - } else {
> - return -1;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> + if (ret != 0) {
> + DPAA2_PMD_ERR("Unable to set mtu. check config");
> + return ret;
> }
> + } else {
> + return -1;
> }
>
> if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (frame_size > DPAA2_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index d80fad01e36d..4c114bf90fc7 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu +
> E1000_ETH_OVERHEAD);
> }
>
> static void
> @@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE);
> + E1000_WRITE_REG(hw, E1000_RLPML,
> + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
> }
>
> static int
> @@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
>
> return 0;
> }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..e9a30d393bd7 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len = dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> -
> rctl |= E1000_RCTL_LPE;
>
> /*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> /* setup MTU */
> - e1000_rlpml_set_vf(hw,
> - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE));
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index 4cebf60a68a7..3a9d5031b262 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev
> *dev,
> return rc;
> }
>
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> - uint32_t max_frame_len = adapter->max_mtu;
> -
> - if (adapter->edev_data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - max_frame_len =
> - adapter->edev_data-
> >dev_conf.rxmode.max_rx_pkt_len;
> -
> - return max_frame_len;
> -}
> -
> static int ena_check_valid_conf(struct ena_adapter *adapter)
> {
> - uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> + uint32_t mtu = adapter->edev_data->mtu;
>
> - if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_INIT_LOG(ERR,
> "Unsupported MTU of %d. Max MTU: %d, min
> MTU: %d\n",
> - max_frame_len, adapter->max_mtu,
> ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return ENA_COM_UNSUPPORTED;
> }
>
> @@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> ena_dev = &adapter->ena_dev;
> ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
>
> - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_DRV_LOG(ERR,
> "Invalid MTU setting. New MTU: %d, max MTU: %d,
> min MTU: %d\n",
> - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return -EINVAL;
> }
>
> @@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
>
> dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> - dev_info->max_rx_pktlen = adapter->max_mtu;
> + dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + dev_info->min_mtu = ENA_MIN_MTU;
> + dev_info->max_mtu = adapter->max_mtu;
> dev_info->max_mac_addrs = 1;
>
> dev_info->max_rx_queues = adapter->max_num_io_queues;
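
With min_mtu and max_mtu now reported, an application can validate its
requested MTU against the range the port advertises before calling
rte_eth_dev_set_mtu(). A hedged example of that pattern (error handling
trimmed to the essentials):

	#include <rte_ethdev.h>

	/* Clamp 'mtu' to the device's advertised range, then apply it. */
	static int sketch_set_mtu_clamped(uint16_t port_id, uint16_t mtu)
	{
		struct rte_eth_dev_info info;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &info);
		if (ret != 0)
			return ret;
		if (mtu < info.min_mtu)
			mtu = info.min_mtu;
		if (mtu > info.max_mtu)
			mtu = info.max_mtu;
		return rte_eth_dev_set_mtu(port_id, mtu);
	}
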
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > ENETC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads &=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 *
> ENETC_MAC_MAXFRM_SIZE);
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /*setting the MTU*/
> enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> ENETC_SET_MAXFRM(frame_size) |
> ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
> struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
> uint64_t rx_offloads = eth_conf->rxmode.offloads;
> uint32_t checksum = L3_CKSUM | L4_CKSUM;
> + uint32_t max_len;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> - ENETC_SET_MAXFRM(max_len));
> - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> - ENETC_MAC_MAXFRM_SIZE);
> - enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> - 2 * ENETC_MAC_MAXFRM_SIZE);
> - dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
> - RTE_ETHER_CRC_LEN;
> - }
> + max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> + enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> + enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
> int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev
> *eth_dev,
> * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
> * a hint to the driver to size receive buffers accordingly so that
> * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> - * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
> + * the user decide the buffer size via rxmode.mtu, basically
> * ignoring vNIC mtu.
> */
> device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct
> vnic_rq *rq)
> struct rq_enet_desc *rqd = rq->ring.descs;
> unsigned i;
> dma_addr_t dma_addr;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint16_t rq_buf_len;
>
> if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic,
> struct vnic_rq *rq)
>
> /*
> * If *not* using scatter and the mbuf size is greater than the
> - * requested max packet size (max_rx_pkt_len), then reduce the
> - * posted buffer size to max_rx_pkt_len. HW still receives packets
> - * larger than max_rx_pkt_len, but they will be truncated, which we
> + * requested max packet size (mtu + eth overhead), then reduce the
> + * posted buffer size to max packet size. HW still receives packets
> + * larger than max packet size, but they will be truncated, which we
> * drop in the rx handler. Not ideal, but better than returning
> * large packets when the user is not expecting them.
> */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) -
> RTE_PKTMBUF_HEADROOM;
> - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> - rq_buf_len = max_rx_pkt_len;
> + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> + rq_buf_len = max_rx_pktlen;
> for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
> mb = rte_mbuf_raw_alloc(rq->mp);
> if (mb == NULL) {
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> unsigned int mbuf_size, mbufs_per_pkt;
> unsigned int nb_sop_desc, nb_data_desc;
> uint16_t min_sop, max_sop, min_data, max_data;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
>
> /*
> * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
>
> mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM);
> - /* max_rx_pkt_len includes the ethernet header and CRC. */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + /* max_rx_pktlen includes the ethernet header and CRC. */
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>
> if (enic->rte_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> dev_info(enic, "Rq %u Scatter rx mode enabled\n",
> queue_idx);
> /* ceil((max pkt len)/mbuf_size) */
> - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
> + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
> } else {
> dev_info(enic, "Scatter rx mode disabled\n");
> mbufs_per_pkt = 1;
> - if (max_rx_pkt_len > mbuf_size) {
> + if (max_rx_pktlen > mbuf_size) {
> dev_warning(enic, "The maximum Rx packet size (%u)
> is"
> " larger than the mbuf size (%u), and"
> " scatter is disabled. Larger packets will"
> " be truncated.\n",
> - max_rx_pkt_len, mbuf_size);
> + max_rx_pktlen, mbuf_size);
> }
> }
>
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> rq_sop->data_queue_enable = 1;
> rq_data->in_use = 1;
> /*
> - * HW does not directly support rxmode.max_rx_pkt_len. HW always
> + * HW does not directly support MTU. HW always
> * receives packet sizes up to the "max" MTU.
> * If not using scatter, we can achieve the effect of dropping
> * larger packets by reducing the size of posted buffers.
> * See enic_alloc_rx_queue_mbufs().
> */
> - if (max_rx_pkt_len <
> - enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> - dev_warning(enic, "rxmode.max_rx_pkt_len is
> ignored"
> - " when scatter rx mode is in use.\n");
> + if (enic->rte_dev->data->mtu < enic->max_mtu) {
> + dev_warning(enic,
> + "mtu is ignored when scatter rx mode is in
> use.\n");
> }
> } else {
> dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> if (mbufs_per_pkt > 1) {
> dev_info(enic, "For max packet size %u and mbuf size %u
> valid"
> " rx descriptor range is %u to %u\n",
> - max_rx_pkt_len, mbuf_size, min_sop + min_data,
> + max_rx_pktlen, mbuf_size, min_sop + min_data,
> max_sop + max_data);
> }
> dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
> "MTU (%u) is greater than value configured in NIC (%u)\n",
> new_mtu, config_mtu);
>
> - /* Update the MTU and maximum packet length */
> - eth_dev->data->mtu = new_mtu;
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - enic_mtu_to_max_rx_pktlen(new_mtu);
> -
> /*
> * If the device has not started (enic_enable), nothing to do.
> * Later, enic_enable() will set up RQs reflecting the new maximum
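
When scatter is enabled, enic sizes the queue by how many mbufs a
maximum-size frame needs, which is a plain ceiling division. Illustrative
sketch only, not the driver's code:

	#include <stdint.h>

	/* ceil(max_rx_pktlen / mbuf_size): mbufs needed per largest frame */
	static inline uint32_t sketch_mbufs_per_pkt(uint32_t max_rx_pktlen,
						    uint32_t mbuf_size)
	{
		return (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
	}
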
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
> FM10K_SRRCTL_LOOPBACK_SUPPRESS);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> + if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> 2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
> rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
> uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index c01e2ec1d450..2d8271cb6095 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev
> *dev)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_RSS_HASH;
>
> /* mtu size is 256~9600 */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - HINIC_MAX_JUMBO_FRAME_SIZE) {
> + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + HINIC_MIN_FRAME_SIZE ||
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + HINIC_MAX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR,
> - "Max rx pkt len out of range, get max_rx_pkt_len:%d,
> "
> + "Packet length out of range, get packet length:%d, "
> "expect between %d and %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
> HINIC_MIN_FRAME_SIZE,
> HINIC_MAX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
>
> - nic_dev->mtu_size =
> - HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
>
> /* rss template */
> err = hinic_config_mq_mode(dev, TRUE);
> @@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct
> rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev =
> HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - uint32_t frame_size;
> int ret = 0;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d,
> max_pkt_len: %d",
> @@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu)
> return ret;
> }
>
> - /* update max frame size */
> - frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> - if (frame_size > HINIC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index 7d37004972bf..4ead227f9122 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
> return 0;
> }
>
> -static int
> -hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
> -{
> - struct hns3_adapter *hns = dev->data->dev_private;
> - struct hns3_hw *hw = &hns->hw;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> - int ret;
> -
> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> - return 0;
> -
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be greater
> than %u "
> - "and no more than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - return -EINVAL;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3_dev_mtu_set(dev, mtu);
> - if (ret)
> - return ret;
> - dev->data->mtu = mtu;
> -
> - return 0;
> -}
> -
> static int
> hns3_setup_dcb(struct rte_eth_dev *dev)
> {
> @@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - ret = hns3_refresh_mtu(dev, conf);
> - if (ret)
> + ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> goto cfg_err;
>
> ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
> @@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
> + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index 8d9b7979c806..0b5db486f8d6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> uint16_t nb_rx_q = dev->data->nb_rx_queues;
> uint16_t nb_tx_q = dev->data->nb_tx_queues;
> struct rte_eth_rss_conf rss_conf;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> bool gro_en;
> int ret;
>
> @@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be
> greater "
> - "than %u and less than %u when jumbo
> frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - ret = -EINVAL;
> - goto cfg_err;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3vf_dev_mtu_set(dev, mtu);
> - if (ret)
> - goto cfg_err;
> - dev->data->mtu = mtu;
> - }
> + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> + goto cfg_err;
>
> ret = hns3vf_dev_configure_vlan(dev);
> if (ret)
> @@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index 481872e3957f..a260212f73f1 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw
> *hw, uint16_t buf_size,
> uint16_t nb_desc)
> {
> struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
> eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
> uint16_t min_vec_bds;
>
> /*
> * HNS3 hardware network engine set scattered as default. If the driver
> * is not work in scattered mode and the pkts greater than buf_size
> - * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
> + * but smaller than frame size will be distributed to multiple BDs.
> * Driver cannot handle this situation.
> */
> - if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
> - hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
> + if (!hw->data->scattered_rx && frame_size > buf_size) {
> + hns3_err(hw, "frame size is not allowed to be set greater "
> "than rx_buf_len if scattered is off.");
> return -EINVAL;
> }
> @@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
> }
>
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
> dev->data->scattered_rx = true;
> }
>
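
The scatter decision in these Rx-init paths reduces to comparing the
MTU-derived frame size against the data room of a single mbuf. A small
sketch of that check, with hypothetical parameter names:

	#include <stdbool.h>
	#include <stdint.h>

	/* Scatter is needed when one mbuf cannot hold a full frame. */
	static inline bool sketch_need_scatter(uint16_t mtu, uint32_t l2_overhead,
					       uint32_t mbuf_data_room, uint32_t headroom)
	{
		uint32_t frame_size = (uint32_t)mtu + l2_overhead;
		uint32_t buf_size = mbuf_data_room - headroom;

		return frame_size > buf_size;
	}
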
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index bd97d93dd746..ab571a921f9e 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11775,14 +11775,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index d5847ac6b546..1d27cf2b0a01 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> }
>
> rxq->max_pkt_len =
> - RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> - rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> + RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> + data->mtu + I40E_ETH_OVERHEAD);
> if (data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 5a5a7f59e152..0eabce275d92 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct
> iavf_rx_queue *rxq)
> struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> struct rte_eth_dev_data *dev_data = dev->data;
> uint16_t buf_size, max_pkt_len;
> + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
>
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
>
> /* Calculate the maximum packet length allowed */
> max_pkt_len = RTE_MIN((uint32_t)
> rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> @@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
>
> adapter->stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
> num_queue_pairs = vf->num_queue_pairs;
> @@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IAVF_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 4e4cdbcd7d71..c3c7ad88f250 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct
> ice_rx_queue *rxq)
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> - max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + dev->data->mtu + ICE_ETH_OVERHEAD);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 9ab7704ff003..8ee1335ac6cf 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
> pf->adapter_stopped = false;
>
> /* Set the max frame size to default value*/
> - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> + max_frame_size = pf->dev_data->mtu ?
> + pf->dev_data->mtu + ICE_ETH_OVERHEAD :
> ICE_FRAME_SIZE_MAX;
>
> /* Set the max frame size to HW*/
> @@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EBUSY;
> }
>
> - if (frame_size > ICE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return 0;
> }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 83fb788e6930..f9ef6ce57277 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> struct ice_adapter *ad = rxq->vsi->adapter;
> + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
>
> /* Set buffer size as the head split is disabled. */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM);
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> - rxq->max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len =
> + RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + frame_size);
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> return -EINVAL;
> }
>
> - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> - RTE_PKTMBUF_HEADROOM);
> -
> /* Check if scattered RX needs to be used. */
> - if (rxq->max_pkt_len > buf_size)
> + if (frame_size > buf_size)
> dev_data->scattered_rx = 1;
>
> rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
>
> #define IGC_INTEL_VENDOR_ID 0x8086
>
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> - RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
> -
> #define IGC_FC_PAUSE_TIME 0x0680
> #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
> #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> /* switch to jumbo mode if needed */
> if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= IGC_RCTL_LPE;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl &= ~IGC_RCTL_LPE;
> }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> return 0;
> }
> @@ -2486,6 +2473,7 @@ static int
> igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, min
> is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> + frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext &
> ~IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> @@ -2519,6 +2498,7 @@ static int
> igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, max
> is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> + frame_size, MAX_RX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext |
> IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
> #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
> #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE *
> IGC_HKEY_MAX_INDEX)
>
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
> +
> /*
> * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
> * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
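
Since IGC_ETH_OVERHEAD counts two VLAN tags, the same MTU maps to a slightly
larger frame limit here than on single-tag devices. A quick check of the
arithmetic, using the standard field sizes purely for illustration:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		const uint32_t hdr = 14, crc = 4, vlan = 4;  /* standard field sizes */
		uint32_t mtu = 1500;

		/* single-tag overhead vs. the QinQ allowance defined above */
		assert(mtu + hdr + crc + vlan == 1522);
		assert(mtu + hdr + crc + 2 * vlan == 1526);
		return 0;
	}
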
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..28d3076439c3 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> struct igc_rx_queue *rxq;
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint32_t rctl;
> uint32_t rxcsum;
> uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> rctl |= IGC_RCTL_LPE;
> -
> - /*
> - * Set maximum packet length by default, and might be updated
> - * together with enabling/disabling dual VLAN.
> - */
> - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> - } else {
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> +
> + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> + /*
> + * Set maximum packet length by default, and might be updated
> + * together with enabling/disabling dual VLAN.
> + */
> + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> + if (max_rx_pktlen > buf_size)
> dev->data->scattered_rx = 1;
> } else {
> /*
> diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
> ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> - uint32_t max_frame_size;
> int err;
>
> IONIC_PRINT_CALL();
>
> /*
> * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> - * is done by the the API.
> + * is done by the API.
> */
>
> - /*
> - * Max frame size is MTU + Ethernet header + VLAN + QinQ
> - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> - */
> - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
> - return -EINVAL;
> -
> err = ionic_lif_change_mtu(lif, mtu);
> if (err)
> return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
> struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
> struct rte_mbuf *rxm, *rxm_seg;
> uint32_t max_frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint64_t pkt_flags = 0;
> uint32_t pkt_type;
> struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
> int __rte_cold
> ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t
> rx_queue_id)
> {
> - uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
> struct ionic_rx_qcq *rxq;
> int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts,
> {
> struct ionic_rx_qcq *rxq = rx_queue;
> uint32_t frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> struct ionic_rx_service service_cb_arg;
>
> service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev
> *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IPN3KE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 47693c0c47cd..31e67d86e77b 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct ixgbe_hw *hw;
> struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
> - struct rte_eth_dev_data *dev_data = dev->data;
> int ret;
>
> ret = ixgbe_dev_info_get(dev, &dev_info);
> @@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> /* If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> */
> - if (dev_data->dev_started && !dev_data->scattered_rx &&
> - (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + if (dev->data->dev_started && !dev->data->scattered_rx &&
> + frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > IXGBE_ETH_MAX_LEN) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU) {
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
>
> return 0;
> @@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev
> *dev,
> * set as 0x4.
> */
> if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_JUMBO_FRAME);
> + (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
> else
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_DEFAULT);
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
>
> /* Set RTTBCNRC of queue X */
> IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
> @@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> + if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> return -EINVAL;
>
> /* If device is started, refuse mtu that requires the support of
> @@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> */
> if (dev_data->dev_started && !dev_data->scattered_rx &&
> (max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (ixgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> index fbf2b17d160f..9bcbc445f2d0 100644
> --- a/drivers/net/ixgbe/ixgbe_pf.c
> +++ b/drivers/net/ixgbe/ixgbe_pf.c
> @@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf,
> uint32_t *msgbuf)
> * if PF has jumbo frames enabled which means legacy
> * VFs are disabled.
> */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> break;
> /* fall through */
> default:
> @@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf,
> uint32_t *msgbuf)
> * legacy VFs.
> */
> if (max_frame > IXGBE_ETH_MAX_LEN ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + dev->data->mtu > RTE_ETHER_MTU)
> return -1;
> break;
> }
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index bfdfd5e755de..03991711fd6e 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> uint16_t buf_size;
> uint16_t i;
> struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> int rc;
>
> PMD_INIT_FUNC_TRACE();
> @@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (rx_conf->max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
> } else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> @@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> IXGBE_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> + if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> struct ixgbe_hw *hw;
> struct ixgbe_rx_queue *rxq;
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> uint64_t bus_addr;
> uint32_t srrctl, psrtype = 0;
> uint16_t buf_size;
> @@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
> * VF packets received can work in all cases.
> */
> - if (ixgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
> return -EINVAL;
> }
>
> @@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> + (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter
> mode");
> dev->data->scattered_rx = 1;
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index b72060a4499b..976916f870a5 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> {
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
> - uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> struct lio_dev_ctrl_cmd ctrl_cmd;
> struct lio_ctrl_pkt ctrl_pkt;
>
> @@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> return -1;
> }
>
> - if (frame_len > LIO_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
> - eth_dev->data->mtu = mtu;
> -
> return 0;
> }
>
> @@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
> static int
> lio_dev_start(struct rte_eth_dev *eth_dev)
> {
> - uint16_t mtu;
> - uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> int ret = 0;
> @@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
> goto dev_mtu_set_error;
> }
>
> - mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
> - if (mtu < RTE_ETHER_MIN_MTU)
> - mtu = RTE_ETHER_MIN_MTU;
> -
> - if (eth_dev->data->mtu != mtu) {
> - ret = lio_dev_mtu_set(eth_dev, mtu);
> - if (ret)
> - goto dev_mtu_set_error;
> - }
> + ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> + if (ret != 0)
> + goto dev_mtu_set_error;
>
> return 0;
>
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 978cbb8201ea..4a5cfd22aa71 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> int ret;
> uint32_t crc_present;
> uint64_t offloads;
> + uint32_t max_rx_pktlen;
>
> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> };
> /* Enable scattered packets support for this queue if necessary. */
> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
> ;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> - uint32_t size =
> - RTE_PKTMBUF_HEADROOM +
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
> uint32_t sges_n;
>
> /*
> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> /* Make sure sges_n did not overflow. */
> size = mb_len * (1 << rxq->sges_n);
> size -= RTE_PKTMBUF_HEADROOM;
> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
> + if (size < max_rx_pktlen) {
> rte_errno = EOVERFLOW;
> ERROR("%p: too many SGEs (%u) needed to handle"
> " requested maximum packet size %u",
> (void *)dev,
> - 1 << sges_n,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + 1 << sges_n, max_rx_pktlen);
> goto error;
> }
> } else {
> WARN("%p: the requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - (void *)dev,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + (void *)dev, max_rx_pktlen,
> mb_len - RTE_PKTMBUF_HEADROOM);
> }
> DEBUG("%p: maximum number of segments per packet: %u",
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index abd8ce798986..6f4f351222d3 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> uint64_t offloads = conf->offloads |
> dev->data->dev_conf.rxmode.offloads;
> unsigned int lro_on_queue = !!(offloads &
> DEV_RX_OFFLOAD_TCP_LRO);
> - unsigned int max_rx_pkt_len = lro_on_queue ?
> + unsigned int max_rx_pktlen = lro_on_queue ?
> dev->data->dev_conf.rxmode.max_lro_pkt_size :
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
> + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
>
> RTE_PKTMBUF_HEADROOM;
> unsigned int max_lro_size = 0;
> unsigned int first_mb_free_size = mb_len -
> RTE_PKTMBUF_HEADROOM;
> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * needed to handle max size packets, replace zero length
> * with the buffer length from the pool.
> */
> - tail_len = max_rx_pkt_len;
> + tail_len = max_rx_pktlen;
> do {
> struct mlx5_eth_rxseg *hw_seg =
> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to
> handle"
> " requested maximum packet size %u, the
> maximum"
> " supported are %u", dev->data->port_id,
> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
> + tmpl->rxq.rxseg_n, max_rx_pktlen,
> MLX5_MAX_RXQ_NSEG);
> rte_errno = ENOTSUP;
> goto error;
> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
> " configured and no enough mbuf space(%u) to
> contain "
> "the maximum RX packet length(%u) with head-
> room(%u)",
> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
> RTE_PKTMBUF_HEADROOM);
> rte_errno = ENOSPC;
> goto error;
> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * following conditions are met:
> * - MPRQ is enabled.
> * - The number of descs is more than the number of strides.
> - * - max_rx_pkt_len plus overhead is less than the max size
> + * - max_rx_pktlen plus overhead is less than the max size
> * of a stride or mprq_stride_size is specified by a user.
> * Need to make sure that there are enough strides to encap
> * the maximum packet size in case mprq_stride_size is set.
> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
> tmpl->rxq.mprq_max_memcpy_len =
> RTE_MIN(first_mb_free_size,
> config->mprq.max_memcpy_len);
> - max_lro_size = RTE_MIN(max_rx_pkt_len,
> + max_lro_size = RTE_MIN(max_rx_pktlen,
> (1u << tmpl->rxq.strd_num_n) *
> (1u << tmpl->rxq.strd_sz_n));
> DRV_LOG(DEBUG,
> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> dev->data->port_id, idx,
> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
> } else if (tmpl->rxq.rxseg_n == 1) {
> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
> tmpl->rxq.sges_n = 0;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> unsigned int sges_n;
>
> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to
> handle"
> " requested maximum packet size %u, the
> maximum"
> " supported are %u", dev->data->port_id,
> - 1 << sges_n, max_rx_pkt_len,
> + 1 << sges_n, max_rx_pktlen,
> 1u << MLX5_MAX_LOG_RQ_SEGS);
> rte_errno = ENOTSUP;
> goto error;
> }
> tmpl->rxq.sges_n = sges_n;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> }
> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
> DRV_LOG(WARNING,
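
In the mlx5 hunk the Rx length budget now comes either from the LRO
aggregate size or from the MTU-derived frame size, depending on whether LRO
is enabled on the queue. A compact sketch of that selection (field names are
illustrative):

	#include <stdint.h>

	struct sketch_rxq_cfg {
		int lro_on_queue;           /* queue has DEV_RX_OFFLOAD_TCP_LRO */
		uint32_t max_lro_pkt_size;  /* rxmode.max_lro_pkt_size */
		uint16_t mtu;               /* dev->data->mtu */
	};

	static inline uint32_t sketch_max_rx_pktlen(const struct sketch_rxq_cfg *c)
	{
		const uint32_t hdr = 14, crc = 4;  /* Ethernet header + CRC */

		return c->lro_on_queue ? c->max_lro_pkt_size
				       : (uint32_t)c->mtu + hdr + crc;
	}
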
> diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
> index a3ee15020466..520c6fdb1d31 100644
> --- a/drivers/net/mvneta/mvneta_ethdev.c
> +++ b/drivers/net/mvneta/mvneta_ethdev.c
> @@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_NETA_ETH_HDRS_LEN;
> -
> if (dev->data->dev_conf.txmode.offloads &
> DEV_TX_OFFLOAD_MULTI_SEGS)
> priv->multiseg = 1;
>
> @@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> /* It is OK. New MTU will be set later on mvneta_dev_start */
> return 0;
> diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
> index dfa7ecc09039..2cd4fb31348b 100644
> --- a/drivers/net/mvneta/mvneta_rxtx.c
> +++ b/drivers/net/mvneta/mvneta_rxtx.c
> @@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> struct mvneta_priv *priv = dev->data->dev_private;
> struct mvneta_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> MVNETA_PKT_EFFEC_OFFS;
>
> - if (frame_size < max_rx_pkt_len) {
> + if (frame_size < max_rx_pktlen) {
> MVNETA_LOG(ERR,
> "Mbuf size must be increased to %u bytes to hold up
> "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
> index 078aefbb8da4..5ce71661c84e 100644
> --- a/drivers/net/mvpp2/mrvl_ethdev.c
> +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> @@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_PP2_ETH_HDRS_LEN;
> - if (dev->data->mtu > priv->max_mtu) {
> - MRVL_LOG(ERR, "inherit MTU %u from
> max_rx_pkt_len %u is larger than max_mtu %u\n",
> - dev->data->mtu,
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - priv->max_mtu);
> - return -EINVAL;
> - }
> + if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
> + MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
> + dev->data->dev_conf.rxmode.mtu,
> + priv->max_mtu);
> + return -EINVAL;
> }
>
> if (dev->data->dev_conf.txmode.offloads &
> DEV_TX_OFFLOAD_MULTI_SEGS)
> @@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> return 0;
>
> @@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> struct mrvl_priv *priv = dev->data->dev_private;
> struct mrvl_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
> int ret, tc, inq;
> uint64_t offloads;
>
> @@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> return -EFAULT;
> }
>
> - frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> - MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
> - if (frame_size < max_rx_pkt_len) {
> + frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
> + if (frame_size < max_rx_pktlen) {
> MRVL_LOG(WARNING,
> "Mbuf size must be increased to %u bytes to hold up
> "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MRVL_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index 1b4bc33593fb..a2031a7a82cc 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> }
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->mtu = rxmode->max_rx_pkt_len;
> + hw->mtu = dev->data->mtu;
>
> if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> @@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> /* switch to jumbo mode if needed */
> - if ((uint32_t)mtu > RTE_ETHER_MTU)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
> -
> /* writing to configuration space */
> - nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
> + nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> hw->mtu = mtu;
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index 9f4c0503b4d4..69c3bda12df8 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > OCCTX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> octeontx_log_info("Received pkt beyond maxlen %d will be
> dropped",
> frame_size);
>
> @@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq
> *rxq)
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> /* Setup scatter mode if needed by jumbo */
> - if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (data->mtu > buffsz) {
> nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
> nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
> @@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq
> *rxq)
> evdev_priv->rx_offload_flags = nic->rx_offload_flags;
> evdev_priv->tx_offload_flags = nic->tx_offload_flags;
>
> - /* Setup MTU based on max_rx_pkt_len */
> - nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
> + /* Setup MTU */
> + nic->mtu = data->mtu;
>
> return 0;
> }
> @@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
> octeontx_recheck_rx_offloads(rxq);
> }
>
> - /* Setting up the mtu based on max_rx_pkt_len */
> + /* Setting up the mtu */
> ret = octeontx_dev_mtu_set(dev, nic->mtu);
> if (ret) {
> octeontx_log_err("Failed to set default MTU size %d", ret);
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index 75d4cabf2e7c..787e8d890215 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct
> otx2_eth_rxq *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->pool);
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 552e6bd43d2b..cf7804157198 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > NIX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return rc;
> }
>
> @@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct otx2_eth_rxq *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = data->rx_queues[0];
> @@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> /* Setup scatter mode if needed by jumbo */
> otx2_nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
> -
> - rc = otx2_nix_mtu_set(eth_dev, mtu);
> + rc = otx2_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> otx2_err("Failed to set default MTU size %d", rc);
>
> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
> index feec4d10a26e..2619bd2f2a19 100644
> --- a/drivers/net/pfe/pfe_ethdev.c
> +++ b/drivers/net/pfe/pfe_ethdev.c
> @@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
> static int
> pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - int ret;
> struct pfe_eth_priv_s *priv = dev->data->dev_private;
> uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>
> /*TODO Support VLAN*/
> - ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> - if (!ret)
> - dev->data->mtu = mtu;
> -
> - return ret;
> + return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> }
>
> /* pfe_eth_enet_addr_byte_mac
> diff --git a/drivers/net/qede/qede_ethdev.c
> b/drivers/net/qede/qede_ethdev.c
> index a4304e0eff44..4b971fd1fe3c 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev
> *eth_dev)
> return -ENOMEM;
> }
>
> - /* If jumbo enabled adjust MTU */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
> -
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
> eth_dev->data->scattered_rx = 1;
>
> @@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_dev_info dev_info = {0};
> struct qede_fastpath *fp;
> - uint32_t max_rx_pkt_len;
> uint32_t frame_size;
> uint16_t bufsz;
> bool restart = false;
> @@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> DP_ERR(edev, "Error during getting ethernet device info\n");
> return rc;
> }
> - max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
> - frame_size = max_rx_pkt_len;
> +
> + frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
> DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
> mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
> @@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (frame_size > QEDE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->dev_started = 1;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
> -
> return 0;
> }
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 35cde561ba59..c2263787b4ec 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t qid,
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> struct qede_rx_queue *rxq;
> - uint16_t max_rx_pkt_len;
> + uint16_t max_rx_pktlen;
> uint16_t bufsz;
> int rc;
>
> @@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t qid,
> dev->data->rx_queues[qid] = NULL;
> }
>
> - max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> /* Fix up RX buffer size */
> bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM;
> /* cache align the mbuf size to simplfy rx_buf_size calculation */
> bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
> if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
> - (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
> + (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
> if (!dev->data->scattered_rx) {
> DP_INFO(edev, "Forcing scatter-gather mode\n");
> dev->data->scattered_rx = 1;
> }
> }
>
> - rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
> + rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
> if (rc < 0)
> return rc;
>
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 2db0d000c3ad..1f55c90b419d 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> /*
> * The driver does not use it, but other PMDs update jumbo frame
> - * flag and max_rx_pkt_len when MTU is set.
> + * flag when MTU is set.
> */
> if (mtu > RTE_ETHER_MTU) {
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
> index adb2b2cb8175..22f74735db08 100644
> --- a/drivers/net/sfc/sfc_port.c
> +++ b/drivers/net/sfc/sfc_port.c
> @@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
> {
> const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
> struct sfc_port *port = &sa->port;
> - const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
>
> sfc_log_init(sa, "entry");
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - port->pdu = rxmode->max_rx_pkt_len;
> - else
> - port->pdu = EFX_MAC_PDU(dev_data->mtu);
> + port->pdu = EFX_MAC_PDU(dev_data->mtu);
>
> return 0;
> }
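
Several of the driver hunks in this patch (octeontx, octeontx2, qede, nicvf, txgbe) reduce their scatter handling to the same comparison: the frame size implied by dev->data->mtu plus the L2 overhead against the mbuf data room. A condensed sketch of that check, with hypothetical names:

	#include <stdbool.h>
	#include <rte_mbuf.h>
	#include <rte_mempool.h>

	/* Scattered Rx is needed when one mbuf data buffer cannot hold a
	 * full frame of the configured MTU plus L2 overhead. */
	static bool
	rx_scatter_needed(uint16_t mtu, uint16_t overhead_len, struct rte_mempool *mp)
	{
		uint16_t buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;

		return (uint32_t)mtu + overhead_len > buffsz;
	}
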
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index c515de3bf71d..0a8d29277aeb 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> {
> struct pmd_internals *pmd = dev->data->dev_private;
> struct ifreq ifr = { .ifr_mtu = mtu };
> - int err = 0;
>
> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> - if (!err)
> - dev->data->mtu = mtu;
> -
> - return err;
> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> }
>
> static int
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
> b/drivers/net/thunderx/nicvf_ethdev.c
> index 561a98fc81a3..c8ae95a61306 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t
> mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (frame_size > NIC_HW_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t
> mtu)
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> - /* Update max_rx_pkt_len */
> - rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
> nic->mtu = mtu;
>
> for (i = 0; i < nic->sqs_count; i++)
> @@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
> }
>
> /* Setup scatter mode if needed by jumbo */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE > buffsz)
> + if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
> dev->data->scattered_rx = 1;
> if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
> dev->data->scattered_rx = 1;
>
> - /* Setup MTU based on max_rx_pkt_len or default */
> - mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
> - dev->data->dev_conf.rxmode.max_rx_pkt_len
> - - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
> + /* Setup MTU */
> + mtu = dev->data->mtu;
>
> if (nicvf_dev_set_mtu(dev, mtu)) {
> PMD_INIT_LOG(ERR, "Failed to set default mtu size");
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c
> b/drivers/net/txgbe/txgbe_ethdev.c
> index 006399468841..269de9f848dd 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EINVAL;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> diff --git a/drivers/net/txgbe/txgbe_ethdev.h
> b/drivers/net/txgbe/txgbe_ethdev.h
> index 3021933965c8..44cfcd76bca4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.h
> +++ b/drivers/net/txgbe/txgbe_ethdev.h
> @@ -55,6 +55,10 @@
> #define TXGBE_5TUPLE_MAX_PRI 7
> #define TXGBE_5TUPLE_MIN_PRI 1
>
> +
> +/* The overhead from MTU to max frame size. */
> +#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> +
> #define TXGBE_RSS_OFFLOAD_ALL ( \
> ETH_RSS_IPV4 | \
> ETH_RSS_NONFRAG_IPV4_TCP | \
> diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c
> b/drivers/net/txgbe/txgbe_ethdev_vf.c
> index 896da8a88770..43dc0ed39b75 100644
> --- a/drivers/net/txgbe/txgbe_ethdev_vf.c
> +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
> @@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (txgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index 1a261287d1bd..c6cd3803c434 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure jumbo frame support, if any.
> */
> - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
> - } else {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
> - }
> + wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> + TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
>
> /*
> * If loopback mode is configured, set LPBK bit.
> @@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> + if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> + 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * VF packets received can work in all cases.
> */
> if (txgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + dev->data->mtu + TXGBE_ETH_OVERHEAD);
> return -EINVAL;
> }
>
> @@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> + (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter
> mode");
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index b60eeb24abe7..5d341a3e23bb 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> hw->max_rx_pkt_len = frame_size;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
>
> return 0;
> }
> @@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
> + if (rxmode->mtu > hw->max_mtu)
> req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
> - else
> - hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
> + hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
>
> if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM))
> diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
> index adbd40808396..68e3c13730ad 100644
> --- a/examples/bbdev_app/main.c
> +++ b/examples/bbdev_app/main.c
> @@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/bond/main.c b/examples/bond/main.c
> index a63ca70a7f06..25ca459be57b 100644
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/examples/distributor/main.c b/examples/distributor/main.c
> index d0f40a1fb4bc..8c4a8feec0c2 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c
> b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 5ed0dc73ec60..e26be8edf28f 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c
> b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index ab8c6d6a0dad..476b147bdfcc 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/flow_classify/flow_classify.c
> b/examples/flow_classify/flow_classify.c
> index 65c1d85cf2fb..8a43f6ac0f92 100644
> --- a/examples/flow_classify/flow_classify.c
> +++ b/examples/flow_classify/flow_classify.c
> @@ -59,14 +59,6 @@ static struct{
> } parm_config;
> const char cb_port_delim[] = ":";
>
> -/* Ethernet ports configured with default settings using struct. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of Ethernet ports. */
> -
> /* Creation of flow classifier object. 8< */
> struct flow_classifier {
> struct rte_flow_classifier *cls;
> @@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
> static inline int
> port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> struct rte_ether_addr addr;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> @@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index b3977a8be561..fdc66368dce9 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool
> *mbuf_pool, uint16_t nb_queues)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/ip_fragmentation/main.c
> b/examples/ip_fragmentation/main.c
> index f24536972084..12062a785dc6 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -146,7 +146,8 @@ struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> @@ -918,9 +919,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> @@ -963,8 +964,7 @@ main(int argc, char **argv)
> }
>
> /* set the mtu to the maximum received packet size */
> - ret = rte_eth_dev_set_mtu(portid,
> - local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
> + ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
> if (ret < 0) {
> printf("\n");
> rte_exit(EXIT_FAILURE, "Set MTU failed: "
> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
> index 16bcffe356bc..9ba02e687adb 100644
> --- a/examples/ip_pipeline/link.c
> +++ b/examples/ip_pipeline/link.c
> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ip_reassembly/main.c
> b/examples/ip_reassembly/main.c
> index 8645ac790be4..e5c7d46d2caa 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -162,7 +162,8 @@ static struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_JUMBO_FRAME),
> @@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t
> lcore, uint32_t queue)
>
> /* mbufs stored int the gragment table. 8< */
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) *
> MAX_FRAG_NUM;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
> + + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += nb_rxd + nb_txd;
>
> @@ -1054,9 +1056,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> secgw/ipsec-secgw.c
> index 7ad94cb8228b..d032a47d1c3b 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
> static void
> port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> {
> - uint32_t frame_size;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_txconf *txconf;
> uint16_t nb_tx_queue, nb_rx_queue;
> @@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t
> req_rx_offloads, uint64_t req_tx_offloads)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> - frame_size = MTU_TO_FRAMELEN(mtu_size);
> - if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
> + if (mtu_size > RTE_ETHER_MTU)
> local_port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - local_port_conf.rxmode.max_rx_pkt_len = frame_size;
> + local_port_conf.rxmode.mtu = mtu_size;
>
> if (multi_seg_required()) {
> local_port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_SCATTER;
> diff --git a/examples/ipv4_multicast/main.c
> b/examples/ipv4_multicast/main.c
> index cc527d7f6b38..b3993685ec92 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -110,7 +110,8 @@ static struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> },
> @@ -715,9 +716,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/kni/main.c b/examples/kni/main.c
> index beabb3c848aa..c10814c6a94f 100644
> --- a/examples/kni/main.c
> +++ b/examples/kni/main.c
> @@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int
> new_mtu)
>
> memcpy(&conf, &port_conf, sizeof(conf));
> /* Set new MTU */
> - if (new_mtu > RTE_ETHER_MAX_LEN)
> + if (new_mtu > RTE_ETHER_MTU)
> conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* mtu + length of header + length of FCS = max pkt length */
> - conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
> - KNI_ENET_FCS_SIZE;
> + conf.rxmode.mtu = new_mtu;
> ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> if (ret < 0) {
> RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
> diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
> index 9b3e324efb23..d9cf00c9dfc7 100644
> --- a/examples/l2fwd-cat/l2fwd-cat.c
> +++ b/examples/l2fwd-cat/l2fwd-cat.c
> @@ -19,10 +19,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -};
> -
> /* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
>
> /*
> @@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool
> *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> /* Configure the Ethernet device. */
> retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> if (retval != 0)
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 66d1491bf76d..f9438176cbb1 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -217,7 +217,6 @@ struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-
> event/l2fwd_common.c
> index 19f32809aa9d..9040be5ed9b6 100644
> --- a/examples/l2fwd-event/l2fwd_common.c
> +++ b/examples/l2fwd-event/l2fwd_common.c
> @@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
> uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
> struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index a1f457b564b6..7abb612ee6a4 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -125,7 +125,6 @@ static uint16_t nb_lcore_params =
> sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> /* ethernet addresses of ports */
> @@ -201,8 +202,8 @@ enum {
> OPT_CONFIG_NUM = 256,
> #define OPT_NONUMA "no-numa"
> OPT_NONUMA_NUM,
> -#define OPT_ENBJMO "enable-jumbo"
> - OPT_ENBJMO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_RULE_IPV4 "rule_ipv4"
> OPT_RULE_IPV4_NUM,
> #define OPT_RULE_IPV6 "rule_ipv6"
> @@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
>
> usage_acl_alg(alg, sizeof(alg));
> printf("%s [EAL options] -- -p PORTMASK -P"
> - "--"OPT_RULE_IPV4"=FILE"
> - "--"OPT_RULE_IPV6"=FILE"
> + " --"OPT_RULE_IPV4"=FILE"
> + " --"OPT_RULE_IPV6"=FILE"
> " [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
> - " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
> + " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to
> configure\n"
> - " -P : enable promiscuous mode\n"
> - " --"OPT_CONFIG": (port,queue,lcore): "
> - "rx queues configuration\n"
> + " -P: enable promiscuous mode\n"
> + " --"OPT_CONFIG" (port,queue,lcore): rx queues
> configuration\n"
> " --"OPT_NONUMA": optional, disable numa awareness\n"
> - " --"OPT_ENBJMO": enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> - " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
> - "file. "
> + " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in
> decimal (64-9600)\n"
> + " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file.
> "
> "Each rule occupy one line. "
> "2 kinds of rules are supported. "
> "One is ACL entry at while line leads with character '%c', "
> - "another is route entry at while line leads with "
> - "character '%c'.\n"
> - " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
> - "entries file.\n"
> + "another is route entry at while line leads with character
> '%c'.\n"
> + " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries
> file.\n"
> " --"OPT_ALG": ACL classify method to use, one of: %s\n",
> prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
> }
> @@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
> int option_index;
> char *prgname = argv[0];
> static struct option lgopts[] = {
> - {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> - {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> - {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
> - {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> - {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> - {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> - {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> - {NULL, 0, 0, 0 }
> + {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> + {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> + {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> + {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> + {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> + {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> + {NULL, 0, 0, 0 }
> };
>
> argvopt = argv;
> @@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case OPT_ENBJMO_NUM:
> - {
> - struct option lenopts = {
> - "max-pkt-len",
> - required_argument,
> - 0,
> - 0
> - };
> -
> - printf("jumbo frame is enabled\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, then use the
> - * default value RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret > MAX_JUMBO_PKT_LEN)) {
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length "
> - "to %u\n",
> - (unsigned int)
> - port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> case OPT_RULE_IPV4_NUM:
> parm_config.rule_ipv4_name = optarg;
> break;
> @@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -2080,6 +2081,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
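
To make the conversion above concrete: config_port_max_pkt_len() (repeated in the other l3fwd-style examples below) turns the user-facing --max-pkt-len value back into an MTU using the per-port overhead reported by the device. With illustrative capability values, a port reporting max_rx_pktlen = 9728 and max_mtu = 9710 yields overhead_len = 9728 - 9710 = 18, the same as RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN = 14 + 4; --max-pkt-len 9000 then gives rxmode.mtu = 9000 - 18 = 8982, which is above RTE_ETHER_MTU (1500), so the jumbo and multi-segment offloads are enabled as well.
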
> diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> index a0de8ca9b42d..b431b9ff5f3c 100644
> --- a/examples/l3fwd-graph/main.c
> +++ b/examples/l3fwd-graph/main.c
> @@ -112,7 +112,6 @@ static uint16_t nb_lcore_params =
> RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> @@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool
> *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
>
> static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
> @@ -259,7 +260,7 @@ print_usage(const char *prgname)
> " [-P]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--per-port-pool]\n\n"
>
> @@ -268,9 +269,7 @@ print_usage(const char *prgname)
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet
> destination for "
> "port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --per-port-pool: Use separate buffer pool per port\n\n",
> prgname);
> @@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask
> */
> #define CMD_LINE_OPT_CONFIG "config"
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> enum {
> /* Long options mapped to a short option */
> @@ -416,7 +415,7 @@ enum {
> CMD_LINE_OPT_CONFIG_NUM,
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> };
>
> @@ -424,7 +423,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> {NULL, 0, 0, 0},
> };
> @@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr, "Invalid maximum "
> - "packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> }
>
> @@ -722,6 +701,43 @@ graph_main_loop(void *conf)
> }
> /* >8 End of main processing loop. */
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -807,6 +823,13 @@ main(int argc, char **argv)
> nb_rx_queue, n_tx_queue);
>
> rte_eth_dev_info_get(portid, &dev_info);
> +
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index aa7b8db44ae8..e58561327c48 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -251,7 +251,6 @@ uint16_t nb_lcore_params =
> RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
> }
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
>
>
> @@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
> " [--config (port,queue,lcore)[,(port,queue,lcore]]"
> " [--high-perf-cores CORELIST"
> " [--perf-config
> (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to
> configure\n"
> - " -P : enable promiscuous mode\n"
> + " -P: enable promiscuous mode\n"
> " --config (port,queue,lcore): rx queues configuration\n"
> " --high-perf-cores CORELIST: list of high performance
> cores\n"
> " --perf-config: similar as config, cores specified as indices"
> " for bins containing high or regular performance cores\n"
> " --no-numa: optional, disable numa awareness\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --parse-ptype: parse packet type by software\n"
> " --legacy: use legacy interrupt-based scaling\n"
> " --empty-poll: enable empty poll detection"
> @@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
> #define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
> #define CMD_LINE_OPT_TELEMETRY "telemetry"
> #define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
>
> /* Parse the argument given in the command line of the application */
> static int
> @@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
> {"perf-config", 1, 0, 0},
> {"high-perf-cores", 1, 0, 0},
> {"no-numa", 0, 0, 0},
> - {"enable-jumbo", 0, 0, 0},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
> {CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
> {CMD_LINE_OPT_LEGACY, 0, 0, 0},
> @@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
> }
>
> if (!strncmp(lgopts[option_index].name,
> - "enable-jumbo", 12)) {
> - struct option lenopts =
> - {"max-pkt-len", required_argument, \
> - 0, 0};
> -
> - printf("jumbo frame is enabled \n");
> - port_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> -
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /**
> - * if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (0 == getopt_long(argc, argvopt, "",
> - &lenopts, &option_index)) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret >
> MAX_JUMBO_PKT_LEN)){
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame "
> - "max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + CMD_LINE_OPT_MAX_PKT_LEN,
> + sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> }
>
> if (!strncmp(lgopts[option_index].name,
> @@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> /* Power library initialized in the main routine. 8< */
> int
> main(int argc, char **argv)
> @@ -2622,6 +2634,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 00ac267af1dd..cb9bc7ad6002 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -121,7 +121,6 @@ static uint16_t nb_lcore_params =
> sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool
> *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
> static uint8_t lkp_per_socket[NB_SOCKETS];
>
> @@ -326,7 +327,7 @@ print_usage(const char *prgname)
> " [--lookup]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--hash-entry-num]"
> " [--ipv6]"
> @@ -344,9 +345,7 @@ print_usage(const char *prgname)
> " Accepted: em (Exact Match), lpm (Longest Prefix
> Match), fib (Forwarding Information Base)\n"
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet
> destination for port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --hash-entry-num: Specify the hash entry number in
> hexadecimal to be setup\n"
> " --ipv6: Set if running ipv6 packets\n"
> @@ -566,7 +565,7 @@ static const char short_options[] =
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> #define CMD_LINE_OPT_IPV6 "ipv6"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
> #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> @@ -584,7 +583,7 @@ enum {
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> CMD_LINE_OPT_IPV6_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
> CMD_LINE_OPT_PARSE_PTYPE_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> @@ -599,7 +598,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
> {CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> @@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
> ipv6 = 1;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {
> - "max-pkt-len", required_argument, 0, 0
> - };
> -
> - port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr,
> - "invalid maximum packet
> length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
>
> case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t
> queueid)
> return 0;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> static void
> l3fwd_poll_resource_setup(void)
> {
> @@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/main.c
> b/examples/performance-thread/l3fwd-thread/main.c
> index 2f593abf263d..b6cddc8c7b51 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params =
> RTE_DIM(tx_thread_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> @@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
> printf("%s [EAL options] -- -p PORTMASK -P"
> " [--rx
> (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
> " [--tx (lcore,thread)[,(lcore,thread]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]"
> " [--parse-ptype]\n\n"
> " -p PORTMASK: hexadecimal bitmask of ports to
> configure\n"
> " -P : enable promiscuous mode\n"
> @@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: optional,
> ethernet destination for port X\n"
> " --no-numa: optional, disable numa awareness\n"
> " --ipv6: optional, specify it if running ipv6 packets\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --hash-entry-num: specify the hash entry number in
> hexadecimal to be setup\n"
> " --no-lthreads: turn off lthread model\n"
> " --parse-ptype: set to use software to analyze packet
> type\n\n",
> @@ -2877,8 +2877,8 @@ enum {
> OPT_NO_NUMA_NUM,
> #define OPT_IPV6 "ipv6"
> OPT_IPV6_NUM,
> -#define OPT_ENABLE_JUMBO "enable-jumbo"
> - OPT_ENABLE_JUMBO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_HASH_ENTRY_NUM "hash-entry-num"
> OPT_HASH_ENTRY_NUM_NUM,
> #define OPT_NO_LTHREADS "no-lthreads"
> @@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
> {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> {OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
> {OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
> - {OPT_ENABLE_JUMBO, 0, NULL,
> OPT_ENABLE_JUMBO_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> {OPT_HASH_ENTRY_NUM, 1, NULL,
> OPT_HASH_ENTRY_NUM_NUM },
> {OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
> {OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
> @@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
> parse_ptype_on = 1;
> break;
>
> - case OPT_ENABLE_JUMBO_NUM:
> - {
> - struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - printf("jumbo frame is enabled - disabling simple TX
> path\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /* if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> -
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN))
> {
> - printf("invalid packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length to %u\n",
> - (unsigned
> int)port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> case OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -3577,6 +3589,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/test.sh
> b/examples/performance-thread/l3fwd-thread/test.sh
> index f0b6e271a5f3..3dd33407ea41 100755
> --- a/examples/performance-thread/l3fwd-thread/test.sh
> +++ b/examples/performance-thread/l3fwd-thread/test.sh
> @@ -11,7 +11,7 @@ case "$1" in
> echo "1.1 1 L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -23,7 +23,7 @@ case "$1" in
> echo "1.2 1 L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -34,7 +34,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)"
> \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -45,7 +45,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --
> rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --
> tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -61,7 +61,7 @@ case "$1" in
> echo "2.1 N L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -73,7 +73,7 @@ case "$1" in
> echo "2.2 N L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -84,7 +84,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p
> 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)"
> \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -95,7 +95,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P
> -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --
> rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --
> tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -111,7 +111,7 @@ case "$1" in
> echo "3.1 N L-threads per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(0,0)" \
> --stat-lcore 1
> @@ -121,7 +121,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,1)" \
> --tx="(0,0)(0,1)" \
> --stat-lcore 1
> @@ -131,7 +131,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=8)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)"
> \
> --tx="(0,0)(0,1)(0,2)(0,3)" \
> --stat-lcore 1
> @@ -141,7 +141,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=16)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500
> \
> + --max-pkt-len 1500 \
> --
> rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
> --tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)"
> \
> --stat-lcore 1
> diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
> index 467cda5a6dac..4f20dfc4be06 100644
> --- a/examples/pipeline/obj.c
> +++ b/examples/pipeline/obj.c
> @@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> index 4f32ade7fbf7..3b6c6c297f43 100644
> --- a/examples/ptpclient/ptpclient.c
> +++ b/examples/ptpclient/ptpclient.c
> @@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
> uint8_t ptp_enabled_port_nb;
> static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static const struct rte_ether_addr ether_multicast = {
> .addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
> };
> @@ -178,7 +172,7 @@ static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> struct rte_eth_dev_info dev_info;
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1;
> const uint16_t tx_rings = 1;
> int retval;
> @@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index 7ffccc8369dc..c32d2e12e633 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> index 1abe003fc6ae..1367569c65db 100644
> --- a/examples/qos_sched/init.c
> +++ b/examples/qos_sched/init.c
> @@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
> index ab6fa7d56c5d..6845c396b8d9 100644
> --- a/examples/rxtx_callbacks/main.c
> +++ b/examples/rxtx_callbacks/main.c
> @@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
> static const char usage[] =
> "%s EAL_ARGS -- [-t]\n";
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static struct {
> uint64_t total_cycles;
> uint64_t total_queue_cycles;
> @@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
> index ae9bbee8d820..fd7207aee758 100644
> --- a/examples/skeleton/basicfwd.c
> +++ b/examples/skeleton/basicfwd.c
> @@ -17,14 +17,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -/* Configuration of ethernet ports. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of ethernet ports. */
> -
> /* basicfwd.c: Basic DPDK skeleton forwarding example. */
>
> /*
> @@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index d0bf1f31e36a..da381b41c0c5 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -44,6 +44,7 @@
> #define BURST_RX_RETRIES 4 /* Number of retries on RX. */
>
> #define JUMBO_FRAME_MAX_SIZE 0x2600
> +#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
>
> /* State of virtio device. */
> #define DEVICE_MAC_LEARNING 0
> @@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
> if (ret) {
> vmdq_conf_default.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - vmdq_conf_default.rxmode.max_rx_pkt_len
> - = JUMBO_FRAME_MAX_SIZE;
> + vmdq_conf_default.rxmode.mtu = MAX_MTU;
> }
> break;
>
> diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
> index e59fb7d3478b..e19d79a40802 100644
> --- a/examples/vm_power_manager/main.c
> +++ b/examples/vm_power_manager/main.c
> @@ -51,17 +51,10 @@
> static uint32_t enabled_port_mask;
> static volatile bool force_quit;
>
> -/****************/
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca924221..4d0584af52e3 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid
> value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid
> value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
> @@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> + /* Get the real Ethernet overhead length */
> if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + uint16_t overhead_len;
> + uint32_t max_rx_pktlen;
> + int ret;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->mtu + overhead_len;
> if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - int ret = eth_dev_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> + ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> return ret;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index afdc53b674cc..9fba2bd73c84 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS.
> */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
> index 0036bda7465c..1491c815c312 100644
> --- a/lib/ethdev/rte_ethdev_trace.h
> +++ b/lib/ethdev/rte_ethdev_trace.h
> @@ -28,7 +28,7 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_u16(nb_tx_q);
> rte_trace_point_emit_u32(dev_conf->link_speeds);
> rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
> - rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
> + rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
> rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
> rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
> rte_trace_point_emit_u64(dev_conf->txmode.offloads);
> --
> 2.31.1
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
^ permalink raw reply [flat|nested] 112+ messages in thread
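The lib/ethdev hunk quoted above replaces the jumbo-frame-conditional 'max_rx_pkt_len' handling with an MTU-based check. Below is a minimal standalone sketch of that logic, not the library code itself; the first helper plays the role of 'eth_dev_get_overhead_len()' from the diff, the function names and the second helper are illustrative only, and only the DPDK ether header is assumed for the constants:

    #include <stdint.h>
    #include <rte_ether.h>

    /* Overhead between MTU and frame size: taken from the device limits when
     * both are reported, otherwise plain Ethernet header + CRC (14 + 4). */
    static uint16_t
    frame_overhead(uint32_t max_rx_pktlen, uint16_t max_mtu)
    {
    	if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
    		return max_rx_pktlen - max_mtu;
    	return RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
    }

    /* Configure-time validation: a zero MTU falls back to RTE_ETHER_MTU (1500)
     * and the resulting frame size must fit within the device limits. */
    static int
    validate_mtu(uint32_t mtu, uint32_t dev_max_rx_pktlen, uint16_t dev_max_mtu)
    {
    	uint32_t frame_size;

    	if (mtu == 0)
    		mtu = RTE_ETHER_MTU;
    	frame_size = mtu + frame_overhead(dev_max_rx_pktlen, dev_max_mtu);
    	if (frame_size > dev_max_rx_pktlen || frame_size < RTE_ETHER_MIN_LEN)
    		return -1;
    	return 0;
    }

With the default overhead, an unset MTU yields a 1518-byte frame limit (1500 + 14 + 4), while a device reporting max_rx_pktlen 9018 and max_mtu 9000 is treated as having an 18-byte overhead.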
* Re: [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-08 8:38 ` Xu, Rosen
0 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-10-08 8:38 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Daley, John, Hyong Youb Kim,
Gaetan Rivet, Zhang, Qi Z, Wang, Xiao W, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Xing, Beilei,
Wu, Jingjing, Yang, Qiming, Andrew Boyer, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Yong Wang, Ananyev, Konstantin, Nicolau, Radu,
Akhil Goyal, Hunt, David, Mcnamara, John, Thomas Monjalon
Cc: dev
Hi,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, October 06, 2021 1:17
> Subject: [PATCH v4 4/6] ethdev: remove jumbo offload flag
>
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announcing this capability, the application can deduce
> it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.
>
> And instead of the application setting this flag explicitly to enable jumbo
> frames, the driver can deduce it by comparing the requested 'mtu' to
> 'RTE_ETHER_MTU'.
>
> Removing this additional configuration for simplification.
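As an illustration of the resulting usage (a sketch, not part of the patch; the port id, queue counts and 9000-byte MTU below are placeholders), the application only requests an MTU and reads the result back, with no separate jumbo offload flag involved:

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_jumbo(uint16_t port_id)
    {
    	struct rte_eth_conf conf;
    	uint16_t mtu;

    	memset(&conf, 0, sizeof(conf));
    	conf.rxmode.mtu = 9000;		/* payload size, not frame size */

    	if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0)
    		return -1;

    	/* Drivers deduce jumbo handling internally from the configured MTU
    	 * being larger than RTE_ETHER_MTU; the application can verify it. */
    	if (rte_eth_dev_get_mtu(port_id, &mtu) == 0 && mtu > RTE_ETHER_MTU)
    		return 0;	/* jumbo frames active */

    	return -1;
    }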
>
> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
> app/test-eventdev/test_pipeline_common.c | 2 -
> app/test-pmd/cmdline.c | 2 +-
> app/test-pmd/config.c | 25 +---------
> app/test-pmd/testpmd.c | 48 +------------------
> app/test-pmd/testpmd.h | 2 +-
> doc/guides/howto/debug_troubleshoot.rst | 2 -
> doc/guides/nics/bnxt.rst | 1 -
> doc/guides/nics/features.rst | 3 +-
> drivers/net/atlantic/atl_ethdev.c | 1 -
> drivers/net/axgbe/axgbe_ethdev.c | 1 -
> drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
> drivers/net/bnxt/bnxt.h | 1 -
> drivers/net/bnxt/bnxt_ethdev.c | 10 +---
> drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
> drivers/net/cnxk/cnxk_ethdev.h | 5 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
> drivers/net/cxgbe/cxgbe.h | 1 -
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
> drivers/net/cxgbe/sge.c | 5 +-
> drivers/net/dpaa/dpaa_ethdev.c | 2 -
> drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
> drivers/net/e1000/e1000_ethdev.h | 4 +-
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/em_rxtx.c | 19 +++-----
> drivers/net/e1000/igb_rxtx.c | 3 +-
> drivers/net/ena/ena_ethdev.c | 1 -
> drivers/net/enetc/enetc_ethdev.c | 3 +-
> drivers/net/enic/enic_res.c | 1 -
> drivers/net/failsafe/failsafe_ops.c | 2 -
> drivers/net/fm10k/fm10k_ethdev.c | 1 -
> drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
> drivers/net/hns3/hns3_ethdev.c | 1 -
> drivers/net/hns3/hns3_ethdev_vf.c | 1 -
> drivers/net/i40e/i40e_ethdev.c | 1 -
> drivers/net/i40e/i40e_rxtx.c | 2 +-
> drivers/net/iavf/iavf_ethdev.c | 3 +-
> drivers/net/ice/ice_dcf_ethdev.c | 3 +-
> drivers/net/ice/ice_dcf_vf_representor.c | 1 -
> drivers/net/ice/ice_ethdev.c | 1 -
> drivers/net/ice/ice_rxtx.c | 3 +-
> drivers/net/igc/igc_ethdev.h | 1 -
> drivers/net/igc/igc_txrx.c | 2 +-
> drivers/net/ionic/ionic_ethdev.c | 1 -
> drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
> drivers/net/ixgbe/ixgbe_pf.c | 9 +---
> drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
> drivers/net/mlx4/mlx4_rxq.c | 1 -
> drivers/net/mlx5/mlx5_rxq.c | 1 -
> drivers/net/mvneta/mvneta_ethdev.h | 3 +-
> drivers/net/mvpp2/mrvl_ethdev.c | 1 -
> drivers/net/nfp/nfp_common.c | 6 +--
> drivers/net/octeontx/octeontx_ethdev.h | 1 -
> drivers/net/octeontx2/otx2_ethdev.h | 1 -
> drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
> drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
> drivers/net/qede/qede_ethdev.c | 1 -
> drivers/net/sfc/sfc_rx.c | 2 -
> drivers/net/thunderx/nicvf_ethdev.h | 1 -
> drivers/net/txgbe/txgbe_rxtx.c | 1 -
> drivers/net/virtio/virtio_ethdev.c | 1 -
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
> examples/ip_fragmentation/main.c | 3 +-
> examples/ip_reassembly/main.c | 3 +-
> examples/ipsec-secgw/ipsec-secgw.c | 2 -
> examples/ipv4_multicast/main.c | 1 -
> examples/kni/main.c | 5 --
> examples/l3fwd-acl/main.c | 4 +-
> examples/l3fwd-graph/main.c | 4 +-
> examples/l3fwd-power/main.c | 4 +-
> examples/l3fwd/main.c | 4 +-
> .../performance-thread/l3fwd-thread/main.c | 4 +-
> examples/vhost/main.c | 5 +-
> lib/ethdev/rte_ethdev.c | 26 +---------
> lib/ethdev/rte_ethdev.h | 1 -
> 75 files changed, 47 insertions(+), 259 deletions(-)
>
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index 5fcea74b4d43..2775e72c580d 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
>
> port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN;
> - if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> - port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> RTE_ETH_FOREACH_DEV(i) {
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index a677451073ae..117945c2c61e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1923,7 +1923,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
> return;
> }
>
> - update_jumbo_frame_offload(port_id, res->value);
> + update_mtu_from_frame_size(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index db3eeffa0093..e890fadc716c 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
> void
> port_mtu_set(portid_t port_id, uint16_t mtu)
> {
> + struct rte_port *port = &ports[port_id];
> int diag;
> - struct rte_port *rte_port = &ports[port_id];
> - struct rte_eth_dev_info dev_info;
> - int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> return;
>
> - ret = eth_dev_info_get_print_err(port_id, &dev_info);
> - if (ret != 0)
> - return;
> -
> - if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
> - fprintf(stderr,
> - "Set MTU failed. MTU:%u is not in valid range,
> min:%u - max:%u\n",
> - mtu, dev_info.min_mtu, dev_info.max_mtu);
> - return;
> - }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> if (diag != 0) {
> fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
> return;
> }
>
> - rte_port->dev_conf.rxmode.mtu = mtu;
> -
> - if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (mtu > RTE_ETHER_MTU)
> - rte_port->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - rte_port->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> + port->dev_conf.rxmode.mtu = mtu;
> }
>
> /* Generic flow management functions. */
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 8c23cfe7c3da..d2a2a9ac6cda 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -1503,12 +1503,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
> if (ret != 0)
> rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid, 0);
> - if (ret != 0)
> - fprintf(stderr,
> - "Updating jumbo frame offload failed for port %u\n",
> - pid);
> -
> if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
> port->dev_conf.txmode.offloads &=
> ~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> @@ -3463,24 +3457,18 @@ rxtx_port_config(struct rte_port *port)
> }
>
> /*
> - * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> - * MTU is also aligned.
> + * Helper function to set MTU from frame size
> *
> * port->dev_info should be set before calling this function.
> *
> - * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> - * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> - *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> +update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> - uint64_t rx_offloads;
> uint16_t mtu, new_mtu;
> - bool on;
>
> eth_overhead = get_eth_overhead(&port->dev_info);
>
> @@ -3489,40 +3477,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> return -1;
> }
>
> - if (max_rx_pktlen == 0)
> - max_rx_pktlen = mtu + eth_overhead;
> -
> - rx_offloads = port->dev_conf.rxmode.offloads;
> new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (new_mtu <= RTE_ETHER_MTU) {
> - rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - on = false;
> - } else {
> - if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - fprintf(stderr,
> - "Frame size (%u) is not supported by
> port %u\n",
> - max_rx_pktlen, portid);
> - return -1;
> - }
> - rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - on = true;
> - }
> -
> - if (rx_offloads != port->dev_conf.rxmode.offloads) {
> - uint16_t qid;
> -
> - port->dev_conf.rxmode.offloads = rx_offloads;
> -
> - /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
> - for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
> - if (on)
> - port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> - }
> -
> if (mtu == new_mtu)
> return 0;
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 17562215c733..eed9d031fd9a 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
> +int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
> index 457ac441429a..df69fa8bcc24 100644
> --- a/doc/guides/howto/debug_troubleshoot.rst
> +++ b/doc/guides/howto/debug_troubleshoot.rst
> @@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
> * Identify if port Speed and Duplex is matching to desired values with
> ``rte_eth_link_get``.
>
> - * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
> -
> * Check promiscuous mode if the drops do not occur for unique MAC address
> with ``rte_eth_promiscuous_get``.
>
> diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> index e75f4fa9e3bc..8f10c6c78a1f 100644
> --- a/doc/guides/nics/bnxt.rst
> +++ b/doc/guides/nics/bnxt.rst
> @@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
>
> DEV_RX_OFFLOAD_VLAN_STRIP
> DEV_RX_OFFLOAD_KEEP_CRC
> - DEV_RX_OFFLOAD_JUMBO_FRAME
> DEV_RX_OFFLOAD_IPV4_CKSUM
> DEV_RX_OFFLOAD_UDP_CKSUM
> DEV_RX_OFFLOAD_TCP_CKSUM
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 483cb7da576f..9580445828bf 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -165,8 +165,7 @@ Jumbo frame
>
> Supports Rx jumbo frames.
>
> -* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.mtu``.
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
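The features documentation above now points applications at the MTU path only. For runtime changes the existing API applies unchanged; a rough usage sketch (the wrapper name is illustrative, not from the patch):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Change the MTU of a port; the driver re-derives the maximum Rx frame
     * length (mtu + per-device overhead) internally. */
    static int
    set_port_mtu(uint16_t port_id, uint16_t mtu)
    {
    	int ret = rte_eth_dev_set_mtu(port_id, mtu);

    	if (ret != 0)
    		printf("rte_eth_dev_set_mtu(%u, %u) failed: %d\n",
    		       port_id, mtu, ret);
    	return ret;
    }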
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 3f654c071566..5a198f53fce7 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
> | DEV_RX_OFFLOAD_IPV4_CKSUM \
> | DEV_RX_OFFLOAD_UDP_CKSUM \
> | DEV_RX_OFFLOAD_TCP_CKSUM \
> - | DEV_RX_OFFLOAD_JUMBO_FRAME \
> | DEV_RX_OFFLOAD_MACSEC_STRIP \
> | DEV_RX_OFFLOAD_VLAN_FILTER)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index c36cd7b1d2f0..0bc9e5eeeb10 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev,
> struct rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_KEEP_CRC;
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 009a94e9a8fa..50ff04bb2241 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
> dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
> dev_info->speed_capa = ETH_LINK_SPEED_10G |
> ETH_LINK_SPEED_20G;
> - dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
> dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
> diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
> index 5121d05da65f..6743cf92b0e6 100644
> --- a/drivers/net/bnxt/bnxt.h
> +++ b/drivers/net/bnxt/bnxt.h
> @@ -595,7 +595,6 @@ struct bnxt_rep_info {
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> \
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_KEEP_CRC | \
> DEV_RX_OFFLOAD_VLAN_EXTEND | \
> DEV_RX_OFFLOAD_TCP_LRO | \
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index dc33b961320a..e9d04f354a39 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
> unsigned int i, j;
> int rc;
>
> - if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
> /* THOR does not support ring groups.
> * But we will use the array to save RSS context IDs.
> @@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev
> *eth_dev)
> if (eth_dev->data->dev_conf.rxmode.offloads &
> ~(DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 412acff42f65..2f3a1759419f 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.mtu =
> bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> - if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - slave_eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - slave_eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
> nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev.h
> b/drivers/net/cnxk/cnxk_ethdev.h
> index 10e05e6b5edd..fa8c48f1eeb0 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.h
> +++ b/drivers/net/cnxk/cnxk_ethdev.h
> @@ -75,9 +75,8 @@
> #define CNXK_NIX_RX_OFFLOAD_CAPA \
> (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |
> \
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> - DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |
> \
> - DEV_RX_OFFLOAD_VLAN_STRIP)
> + DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> DEV_RX_OFFLOAD_RSS_HASH | \
> + DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP)
>
> #define RSS_IPV4_ENABLE \
> (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
> ETH_RSS_NONFRAG_IPV4_UDP | \
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 349896f6a1bf..d0924df76152 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev
> *eth_dev, uint16_t queue_id,
> {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
> {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
> {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> {DEV_RX_OFFLOAD_SECURITY, " Security,"},
> diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
> index 7c89a028bf16..37625c5bfb69 100644
> --- a/drivers/net/cxgbe/cxgbe.h
> +++ b/drivers/net/cxgbe/cxgbe.h
> @@ -51,7 +51,6 @@
> DEV_RX_OFFLOAD_IPV4_CKSUM | \
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_RSS_HASH)
>
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 70b879fed100..1374f32b6826 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> if ((&rxq->fl) != NULL)
> rxq->fl.size = temp_nb_desc;
>
> - /* Set to jumbo mode if necessary */
> - if (eth_dev->data->mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
> &rxq->fl, NULL,
> is_pf4(adapter) ?
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index 830f5192474d..21b8fe61c9a7 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct
> adapter *adap, struct sge_fl *q,
> struct rte_mbuf *buf_bulk[n];
> int ret, i;
> struct rte_pktmbuf_pool_private *mbp_priv;
> - u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> /* Use jumbo mtu buffers if mbuf data room size can fit jumbo data.
> */
> mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
> - if (jumbo_en &&
> - ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
> + if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
> buf_size_idx = RX_LARGE_MTU_BUF;
>
> ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk,
> n);
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 3172e3b2de87..defc072072af 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -54,7 +54,6 @@
>
> /* Supported Rx offloads */
> static uint64_t dev_rx_offloads_sup =
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER;
>
> /* Rx offloads which cannot be disabled */
> @@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev
> *dev,
> uint64_t flags;
> const char *output;
> } rx_offload_map[] = {
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo
> frame,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
> {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
> {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index c28f03641bbc..dc25eefb33b0 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_TIMESTAMP;
>
> /* Rx offloads which cannot be disabled */
> @@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev
> *dev,
> {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer
> UDP csum,"},
> {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
> {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
> - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo
> frame,"},
> {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
> {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
> {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
> diff --git a/drivers/net/e1000/e1000_ethdev.h
> b/drivers/net/e1000/e1000_ethdev.h
> index 3b4d9c3ee6f4..1ae78fe71f02 100644
> --- a/drivers/net/e1000/e1000_ethdev.h
> +++ b/drivers/net/e1000/e1000_ethdev.h
> @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq);
> void em_dev_clear_queues(struct rte_eth_dev *dev);
> void em_dev_free_queues(struct rte_eth_dev *dev);
>
> -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
> -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
> +uint64_t em_get_rx_port_offloads_capa(void);
> +uint64_t em_get_rx_queue_offloads_capa(void);
>
> int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> uint16_t nb_rx_desc, unsigned int socket_id,
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index 6ebef55588bc..8a752eef52cf 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> dev_info->max_rx_queues = 1;
> dev_info->max_tx_queues = 1;
>
> - dev_info->rx_queue_offload_capa =
> em_get_rx_queue_offloads_capa(dev);
> - dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
> + dev_info->rx_queue_offload_capa =
> em_get_rx_queue_offloads_capa();
> + dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
> dev_info->rx_queue_offload_capa;
> dev_info->tx_queue_offload_capa =
> em_get_tx_queue_offloads_capa(dev);
> dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
> diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
> index dfd8f2fd0074..e061f80a906a 100644
> --- a/drivers/net/e1000/em_rxtx.c
> +++ b/drivers/net/e1000/em_rxtx.c
> @@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
> }
>
> uint64_t
> -em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> +em_get_rx_port_offloads_capa(void)
> {
> uint64_t rx_offload_capa;
> - uint32_t max_rx_pktlen;
> -
> - max_rx_pktlen = em_get_max_pktlen(dev);
>
> rx_offload_capa =
> DEV_RX_OFFLOAD_VLAN_STRIP |
> @@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct
> rte_eth_dev *dev)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER;
> - if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
> - rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return rx_offload_capa;
> }
>
> uint64_t
> -em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
> +em_get_rx_queue_offloads_capa(void)
> {
> uint64_t rx_queue_offload_capa;
>
> @@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct
> rte_eth_dev *dev)
> * capability be same to per port queue offloading capability
> * for better convenience.
> */
> - rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
> + rx_queue_offload_capa = em_get_rx_port_offloads_capa();
>
> return rx_queue_offload_capa;
> }
> @@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> * to avoid splitting packets that don't fit into
> * one buffer.
> */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
> + if (dev->data->mtu > RTE_ETHER_MTU ||
> rctl_bsize < RTE_ETHER_MAX_LEN) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter
> mode");
> @@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> if ((hw->mac.type == e1000_ich9lan ||
> hw->mac.type == e1000_pch2lan ||
> hw->mac.type == e1000_ich10lan) &&
> - rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + dev->data->mtu > RTE_ETHER_MTU) {
> u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
> E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
> E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
> }
>
> if (hw->mac.type == e1000_pch2lan) {
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
> else
> e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
> @@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> else
> rctl &= ~E1000_RCTL_LPE;
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index e9a30d393bd7..dda4d2101adb 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev
> *dev)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_RSS_HASH;
> @@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> * Configure support of jumbo frames, if any.
> */
> max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev->data->mtu > RTE_ETHER_MTU) {
> rctl |= E1000_RCTL_LPE;
>
> /*
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index 3a9d5031b262..6d1026d31951 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -1918,7 +1918,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM;
>
> - rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
>
> /* Inform framework about available features */
> diff --git a/drivers/net/enetc/enetc_ethdev.c
> b/drivers/net/enetc/enetc_ethdev.c
> index a7372c1787c7..6457677d300a 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev
> __rte_unused,
> (DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME);
> + DEV_RX_OFFLOAD_KEEP_CRC);
>
> return 0;
> }
> diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
> index 0493e096d031..c5777772a09e 100644
> --- a/drivers/net/enic/enic_res.c
> +++ b/drivers/net/enic/enic_res.c
> @@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
> DEV_TX_OFFLOAD_TCP_TSO;
> enic->rx_offload_capa =
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> diff --git a/drivers/net/failsafe/failsafe_ops.c
> b/drivers/net/failsafe/failsafe_ops.c
> index 5ff33e03e034..47c5efe9ea77 100644
> --- a/drivers/net/failsafe/failsafe_ops.c
> +++ b/drivers/net/failsafe/failsafe_ops.c
> @@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_TIMESTAMP |
> DEV_RX_OFFLOAD_SECURITY |
> @@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_TIMESTAMP |
> DEV_RX_OFFLOAD_SECURITY |
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c
> b/drivers/net/fm10k/fm10k_ethdev.c
> index 5e4b361ca6c0..093021246286 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -1779,7 +1779,6 @@ static uint64_t
> fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_HEADER_SPLIT |
> DEV_RX_OFFLOAD_RSS_HASH);
> }
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 79987bec273c..4005414aeb71 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_TCP_LRO |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c
> index e1d465de8234..dbd4c54b18c6 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2691,7 +2691,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev,
> struct rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH |
> DEV_RX_OFFLOAD_TCP_LRO);
> info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index 3438b3650de6..eee65ac77399 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev,
> struct rte_eth_dev_info *info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH |
> DEV_RX_OFFLOAD_TCP_LRO);
> info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 2824592aa62e..6a64221778fa 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -3760,7 +3760,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> dev_info->tx_queue_offload_capa =
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 1d27cf2b0a01..69c282baa723 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2911,7 +2911,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> rxq->max_pkt_len =
> RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq-
> >rx_buf_len,
> data->mtu + I40E_ETH_OVERHEAD);
> - if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (data->mtu > RTE_ETHER_MTU) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must "
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 2d43c666fdbb..2c4103ac7ef9 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct
> iavf_rx_queue *rxq)
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev->data->mtu > RTE_ETHER_MTU) {
> if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
> max_pkt_len > IAVF_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> be "
> @@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> b/drivers/net/ice/ice_dcf_ethdev.c
> index c3c7ad88f250..16f642566e91 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -72,7 +72,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct
> ice_rx_queue *rxq)
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev_data->mtu > RTE_ETHER_MTU) {
> if (max_pkt_len <= ICE_ETH_MAX_LEN ||
> max_pkt_len > ICE_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must
> be "
> @@ -683,7 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_RSS_HASH;
> dev_info->tx_offload_capa =
> diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index b547c42f9137..d28fedc96e1a 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev
> *dev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> DEV_RX_OFFLOAD_RSS_HASH;
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 703178c6d40c..17d30b735693 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
>
> dev_info->rx_offload_capa =
> DEV_RX_OFFLOAD_VLAN_STRIP |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_VLAN_FILTER;
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index f9ef6ce57277..cc7908d32584 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> struct ice_rlan_ctx rx_ctx;
> enum ice_status err;
> uint16_t buf_size;
> - struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> struct ice_adapter *ad = rxq->vsi->adapter;
> @@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq-
> >rx_buf_len,
> frame_size);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev_data->mtu > RTE_ETHER_MTU) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
> PMD_DRV_LOG(ERR, "maximum packet length must "
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index b3473b5b1646..5e6c2ff30157 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -73,7 +73,6 @@ extern "C" {
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> DEV_RX_OFFLOAD_SCTP_CKSUM | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_KEEP_CRC | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_RSS_HASH)
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index 28d3076439c3..30940857eac0 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> else
> rctl &= ~IGC_RCTL_LPE;
> diff --git a/drivers/net/ionic/ionic_ethdev.c
> b/drivers/net/ionic/ionic_ethdev.c
> index 97447a10e46a..795980cb1ca5 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
> DEV_RX_OFFLOAD_IPV4_CKSUM |
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_SCATTER |
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 377b96c0236a..4e5d234e8c7d 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
> DEV_RX_OFFLOAD_VLAN_EXTEND |
> - DEV_RX_OFFLOAD_VLAN_FILTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + DEV_RX_OFFLOAD_VLAN_FILTER;
>
> dev_info->tx_queue_offload_capa =
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> dev_info->tx_offload_capa =
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 574a7bffc9cb..3205c37c3b82 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -6234,7 +6234,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev
> *dev,
> uint16_t queue_idx, uint16_t tx_rate)
> {
> struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> - struct rte_eth_rxmode *rxmode;
> uint32_t rf_dec, rf_int;
> uint32_t bcnrc_val;
> uint16_t link_speed = dev->data->dev_link.link_speed;
> @@ -6256,14 +6255,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev
> *dev,
> bcnrc_val = 0;
> }
>
> - rxmode = &dev->data->dev_conf.rxmode;
> /*
> * Set global transmit compensation time to the MMW_SIZE in
> RTTBCNRM
> * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported,
> otherwise
> * set as 0x4.
> */
> - if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> + if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
> IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> IXGBE_MMW_SIZE_JUMBO_FRAME);
> else
> IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> IXGBE_MMW_SIZE_DEFAULT);
> diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> index 9bcbc445f2d0..6e64f9a0ade2 100644
> --- a/drivers/net/ixgbe/ixgbe_pf.c
> +++ b/drivers/net/ixgbe/ixgbe_pf.c
> @@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t
> vf, uint32_t *msgbuf)
> IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
> if (max_frs < max_frame) {
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
> - if (max_frame > IXGBE_ETH_MAX_LEN) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (max_frame > IXGBE_ETH_MAX_LEN)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> - }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 03991711fd6e..c223ef37c79f 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -3033,7 +3033,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev
> *dev)
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_RSS_HASH;
> @@ -5095,7 +5094,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure jumbo frame support, if any.
> */
> - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (dev->data->mtu > RTE_ETHER_MTU) {
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 4a5cfd22aa71..e73112c44749 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
> {
> uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH;
>
> if (priv->hw_csum)
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index 6f4f351222d3..0cc3bccc0825 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev
> *dev)
> struct mlx5_dev_config *config = &priv->config;
> uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
> DEV_RX_OFFLOAD_TIMESTAMP |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_RSS_HASH);
>
> if (!config->mprq.enabled)
> diff --git a/drivers/net/mvneta/mvneta_ethdev.h
> b/drivers/net/mvneta/mvneta_ethdev.h
> index ef8067790f82..6428f9ff7931 100644
> --- a/drivers/net/mvneta/mvneta_ethdev.h
> +++ b/drivers/net/mvneta/mvneta_ethdev.h
> @@ -54,8 +54,7 @@
> #define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
>
> /** Rx offloads capabilities */
> -#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
> - DEV_RX_OFFLOAD_CHECKSUM)
> +#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
>
> /** Tx offloads capabilities */
> #define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
> diff --git a/drivers/net/mvpp2/mrvl_ethdev.c
> b/drivers/net/mvpp2/mrvl_ethdev.c
> index 5ce71661c84e..ef987b7de1b5 100644
> --- a/drivers/net/mvpp2/mrvl_ethdev.c
> +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> @@ -59,7 +59,6 @@
>
> /** Port Rx offload capabilities */
> #define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_CHECKSUM)
>
> /** Port Tx offloads capabilities */
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index b1ce35b334da..a0bb5b9640c2 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
> }
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->mtu = dev->data->mtu;
> + hw->mtu = dev->data->mtu;
>
> if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> @@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
> .nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
> };
>
> - /* All NFP devices support jumbo frames */
> - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
> dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.h
> b/drivers/net/octeontx/octeontx_ethdev.h
> index b73515de37ca..3a02824e3948 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.h
> +++ b/drivers/net/octeontx/octeontx_ethdev.h
> @@ -60,7 +60,6 @@
>
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_SCATTER | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_VLAN_FILTER)
>
> #define OCTEONTX_TX_OFFLOADS (
> \
> diff --git a/drivers/net/octeontx2/otx2_ethdev.h
> b/drivers/net/octeontx2/otx2_ethdev.h
> index 7871e3d30bda..47ee126ed7fd 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.h
> +++ b/drivers/net/octeontx2/otx2_ethdev.h
> @@ -148,7 +148,6 @@
> DEV_RX_OFFLOAD_SCTP_CKSUM | \
> DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
> DEV_RX_OFFLOAD_SCATTER | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
> DEV_RX_OFFLOAD_VLAN_STRIP | \
> DEV_RX_OFFLOAD_VLAN_FILTER | \
> diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c
> b/drivers/net/octeontx_ep/otx_ep_ethdev.c
> index a243683d61d3..c65041a16ba7 100644
> --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
> +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
> @@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
>
> devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
> devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
> - devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
> - devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
> + devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
> devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
>
> devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
> diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c
> b/drivers/net/octeontx_ep/otx_ep_rxtx.c
> index a7d433547e36..aa4dcd33cc79 100644
> --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
> +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
> @@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device
> *otx_ep,
> droq_pkt->l3_len = hdr_lens.l3_len;
> droq_pkt->l4_len = hdr_lens.l4_len;
>
> - if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
> - !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
> - rte_pktmbuf_free(droq_pkt);
> - goto oq_read_fail;
> - }
> -
> if (droq_pkt->nb_segs > 1 &&
> !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
> rte_pktmbuf_free(droq_pkt);
> diff --git a/drivers/net/qede/qede_ethdev.c
> b/drivers/net/qede/qede_ethdev.c
> index 84e23ff03418..06c3ccf20716 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> DEV_RX_OFFLOAD_TCP_LRO |
> DEV_RX_OFFLOAD_KEEP_CRC |
> DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_VLAN_STRIP |
> DEV_RX_OFFLOAD_RSS_HASH);
> diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
> index 280e8a61f9e0..62b215f62cd6 100644
> --- a/drivers/net/sfc/sfc_rx.c
> +++ b/drivers/net/sfc/sfc_rx.c
> @@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
> {
> uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
>
> - caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return caps & sfc_rx_get_offload_mask(sa);
> }
>
> diff --git a/drivers/net/thunderx/nicvf_ethdev.h
> b/drivers/net/thunderx/nicvf_ethdev.h
> index b8dd905d0bd6..5d38750d6313 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.h
> +++ b/drivers/net/thunderx/nicvf_ethdev.h
> @@ -40,7 +40,6 @@
> #define NICVF_RX_OFFLOAD_CAPA ( \
> DEV_RX_OFFLOAD_CHECKSUM | \
> DEV_RX_OFFLOAD_VLAN_STRIP | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_SCATTER | \
> DEV_RX_OFFLOAD_RSS_HASH)
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index c6cd3803c434..0ce754fb25b0 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev
> *dev)
> DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM |
> DEV_RX_OFFLOAD_KEEP_CRC |
> - DEV_RX_OFFLOAD_JUMBO_FRAME |
> DEV_RX_OFFLOAD_VLAN_FILTER |
> DEV_RX_OFFLOAD_RSS_HASH |
> DEV_RX_OFFLOAD_SCATTER;
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index 5d341a3e23bb..a05e73cd8b60 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -2556,7 +2556,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct
> rte_eth_dev_info *dev_info)
>
> host_features = VIRTIO_OPS(hw)->get_features(hw);
> dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
> - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
> dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
> if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
> diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> index 2f40ae907dcd..0210f9140b48 100644
> --- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
> +++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
> @@ -54,7 +54,6 @@
> DEV_RX_OFFLOAD_UDP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_CKSUM | \
> DEV_RX_OFFLOAD_TCP_LRO | \
> - DEV_RX_OFFLOAD_JUMBO_FRAME | \
> DEV_RX_OFFLOAD_RSS_HASH)
>
> int vmxnet3_segs_dynfield_offset = -1;
> diff --git a/examples/ip_fragmentation/main.c
> b/examples/ip_fragmentation/main.c
> index 12062a785dc6..7c0cb093eda3 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
> RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> - DEV_RX_OFFLOAD_SCATTER |
> - DEV_RX_OFFLOAD_JUMBO_FRAME),
> + DEV_RX_OFFLOAD_SCATTER),
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/ip_reassembly/main.c
> b/examples/ip_reassembly/main.c
> index e5c7d46d2caa..af67db49f7fb 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
> .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> - .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> - DEV_RX_OFFLOAD_JUMBO_FRAME),
> + .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> secgw/ipsec-secgw.c
> index d032a47d1c3b..4a741bfdde4d 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads,
> uint64_t req_tx_offloads)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> - if (mtu_size > RTE_ETHER_MTU)
> - local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> local_port_conf.rxmode.mtu = mtu_size;
>
> if (multi_seg_required()) {
> diff --git a/examples/ipv4_multicast/main.c
> b/examples/ipv4_multicast/main.c
> index b3993685ec92..63bbd7e64ceb 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
> .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> - .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/kni/main.c b/examples/kni/main.c
> index c10814c6a94f..0fd945e7e0b2 100644
> --- a/examples/kni/main.c
> +++ b/examples/kni/main.c
> @@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int
> new_mtu)
> }
>
> memcpy(&conf, &port_conf, sizeof(conf));
> - /* Set new MTU */
> - if (new_mtu > RTE_ETHER_MTU)
> - conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> conf.rxmode.mtu = new_mtu;
> ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index 7abb612ee6a4..f6dfb156ac56 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -2000,10 +2000,8 @@ config_port_max_pkt_len(struct rte_eth_conf
> *conf,
> dev_info->max_mtu);
> conf->rxmode.mtu = max_pkt_len - overhead_len;
>
> - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
>
> return 0;
> }
> diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> index b431b9ff5f3c..a185a0512826 100644
> --- a/examples/l3fwd-graph/main.c
> +++ b/examples/l3fwd-graph/main.c
> @@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> dev_info->max_mtu);
> conf->rxmode.mtu = max_pkt_len - overhead_len;
>
> - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
>
> return 0;
> }
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index e58561327c48..12b4dce77ce1 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf
> *conf,
> dev_info->max_mtu);
> conf->rxmode.mtu = max_pkt_len - overhead_len;
>
> - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
>
> return 0;
> }
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index cb9bc7ad6002..22d35749410b 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
> dev_info->max_mtu);
> conf->rxmode.mtu = max_pkt_len - overhead_len;
>
> - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
>
> return 0;
> }
> diff --git a/examples/performance-thread/l3fwd-thread/main.c
> b/examples/performance-thread/l3fwd-thread/main.c
> index b6cddc8c7b51..8fc3a7c675a2 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf
> *conf,
> dev_info->max_mtu);
> conf->rxmode.mtu = max_pkt_len - overhead_len;
>
> - if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + if (conf->rxmode.mtu > RTE_ETHER_MTU)
> conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> - conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
>
> return 0;
> }
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index da381b41c0c5..a9c207124153 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
> return -1;
> }
> mergeable = !!ret;
> - if (ret) {
> - vmdq_conf_default.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (ret)
> vmdq_conf_default.rxmode.mtu =
> MAX_MTU;
> - }
> break;
>
> case OPT_STATS_NUM:
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index ce0ed509d28f..c2b624aba1a0 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -118,7 +118,6 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> - RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> @@ -1485,13 +1484,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t
> nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
> }
>
> - if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> - dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> - /* Use default value */
> - dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> - }
> -
> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>
> /*
> @@ -3639,7 +3631,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t
> mtu)
> int ret;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_dev *dev;
> - int is_jumbo_frame_capable = 0;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -3667,27 +3658,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t
> mtu)
> frame_size = mtu + overhead_len;
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> return -EINVAL;
> -
> - if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> - is_jumbo_frame_capable = 1;
> }
>
> - if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> - return -EINVAL;
> -
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> - if (ret == 0) {
> + if (ret == 0)
> dev->data->mtu = mtu;
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> -
> return eth_err(port_id, ret);
> }
>
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 9fba2bd73c84..4d0f956a4b28 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1389,7 +1389,6 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> -#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> /**
> * Timestamp is set by the driver in
> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> --
> 2.31.1
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-08 8:39 ` Xu, Rosen
0 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-10-08 8:39 UTC (permalink / raw)
To: Yigit, Ferruh, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Wang, Haiyue, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Zhang, Qi Z, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Rasesh Mody,
Devendra Singh Rawat, Andrew Rybchenko, Maciej Czekaj, Jiawen Wu,
Jian Wang, Thomas Monjalon
Cc: dev
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, October 06, 2021 1:17
> To: Somalapuram Amaranath <asomalap@amd.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar K <kirankumark@marvell.com>;
> Sunil Kumar Kori <skori@marvell.com>; Satha Rao
> <skoteshwar@marvell.com>; Rahul Lakkireddy
> <rahul.lakkireddy@chelsio.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@oss.nxp.com>;
> Wang, Haiyue <haiyue.wang@intel.com>; Gagandeep Singh
> <g.singh@nxp.com>; Ziyang Xuan <xuanziyang2@huawei.com>; Xiaoyun
> Wang <cloud.wangxiaoyun@huawei.com>; Guoyang Zhou
> <zhouguoyang@huawei.com>; Min Hu (Connor) <humin29@huawei.com>;
> Yisen Zhuang <yisen.zhuang@huawei.com>; Lijun Ou
> <oulijun@huawei.com>; Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi
> Z <qi.z.zhang@intel.com>; Xu, Rosen <rosen.xu@intel.com>; Shijith Thotton
> <sthotton@marvell.com>; Srisivasubramanian Srinivasan
> <srinivasan@marvell.com>; Heinrich Kuhn <heinrich.kuhn@corigine.com>;
> Harman Kalra <hkalra@marvell.com>; Jerin Jacob <jerinj@marvell.com>;
> Rasesh Mody <rmody@marvell.com>; Devendra Singh Rawat
> <dsinghrawat@marvell.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Maciej Czekaj <mczekaj@marvell.com>;
> Jiawen Wu <jiawenwu@trustnetic.com>; Jian Wang
> <jianwang@trustnetic.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org
> Subject: [PATCH v4 2/6] ethdev: move jumbo frame offload check to library
>
> Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
> and the application should enable the jumbo frame offload for it.
>
> When jumbo frame offload is not enabled by the application, but an MTU
> bigger than RTE_ETHER_MTU is requested, there are two options: either
> fail or enable jumbo frame offload implicitly.
>
> Enabling jumbo frame offload implicitly is selected by many drivers since
> setting a big MTU value already implies it, and this increases usability.
>
> This patch moves this logic from drivers to the library, both to reduce the
> duplicated code in the drivers and to make behaviour more visible.
>
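For illustration, a minimal application-side sketch of what this change means
in practice (the helper name and MTU value below are arbitrary examples, not
part of the patch):

#include <rte_ethdev.h>
#include <rte_ether.h>

/*
 * With the check in the library, the application only requests an MTU;
 * ethdev itself enables DEV_RX_OFFLOAD_JUMBO_FRAME when the requested MTU
 * exceeds RTE_ETHER_MTU, or rejects the request if the port does not report
 * the jumbo frame capability.
 */
static int
request_jumbo_mtu(uint16_t port_id)
{
        uint16_t jumbo_mtu = 9000;      /* arbitrary example value */

        /* no explicit DEV_RX_OFFLOAD_JUMBO_FRAME handling needed any more */
        return rte_eth_dev_set_mtu(port_id, jumbo_mtu);
}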
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
> drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
> drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
> drivers/net/dpaa/dpaa_ethdev.c | 7 -------
> drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
> drivers/net/e1000/em_ethdev.c | 9 ++-------
> drivers/net/e1000/igb_ethdev.c | 9 ++-------
> drivers/net/enetc/enetc_ethdev.c | 7 -------
> drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
> drivers/net/hns3/hns3_ethdev.c | 8 --------
> drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
> drivers/net/i40e/i40e_ethdev.c | 5 -----
> drivers/net/iavf/iavf_ethdev.c | 7 -------
> drivers/net/ice/ice_ethdev.c | 5 -----
> drivers/net/igc/igc_ethdev.c | 9 ++-------
> drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
> drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
> drivers/net/liquidio/lio_ethdev.c | 7 -------
> drivers/net/nfp/nfp_common.c | 6 ------
> drivers/net/octeontx/octeontx_ethdev.c | 5 -----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 4 ----
> drivers/net/sfc/sfc_ethdev.c | 9 ---------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 6 ------
> lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
> 27 files changed, 29 insertions(+), 166 deletions(-)
>
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 76aeec077f2b..2960834b4539 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> val = 1;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> val = 0;
> - }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> return 0;
> }
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 8c6f20b75aed..07ee19938930 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev
> *eth_dev, uint16_t new_mtu)
> return -EINVAL;
> }
>
> - if (new_mtu > RTE_ETHER_MTU) {
> + if (new_mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
> /* Is there a change in mtu setting? */
> if (eth_dev->data->mtu == new_mtu)
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 695d0d6fd3e2..349896f6a1bf 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> plt_err("Failed to max Rx frame length, rc=%d", rc);
> goto exit;
> }
> -
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 8cf61f12a8d6..0c9cc2f5bb3f 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* set to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1,
> -1,
> -1, -1, true);
> return err;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c index adbdb87baab9..57b09f16ba44
> 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 758a14e0ad2d..df44bb204f65 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c index 6f418a36aa04..1b41dd04df5a
> 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> return 0;
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index 4c114bf90fc7..a061d0529dd1 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> E1000_WRITE_REG(hw, E1000_RLPML, frame_size); diff --git
> a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index cdb9783b5372..fbcbbb6c0533 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads &=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 *
> ENETC_MAC_MAXFRM_SIZE);
>
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 2d8271cb6095..4b30dfa222a8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu)
> return ret;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c index 4ead227f9122..e1d465de8234
> 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct hns3_adapter *hns = dev->data->dev_private;
> uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
> struct hns3_hw *hw = &hns->hw;
> - bool is_jumbo_frame;
> int ret;
>
> if (dev->data->dev_started) {
> @@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return ret;
> }
>
> - if (is_jumbo_frame)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index 0b5db486f8d6..3438b3650de6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rte_spinlock_unlock(&hw->lock);
> return ret;
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> -
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index ab571a921f9e..9283adb19304 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11775,11 +11775,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 0eabce275d92..844d26d87ba6 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index
> 8ee1335ac6cf..3038a9714517 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index
> b26723064b07..dcbc26b8186e 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rctl = IGC_READ_REG(hw, IGC_RCTL);
> -
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> IGC_WRITE_REG(hw, IGC_RLPML, frame_size); diff --git
> a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 3634c0c8c5f0..e8a33f04bd69 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst-
> >i40e_pf_eth,
> mtu);
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 31e67d86e77b..574a7bffc9cb 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> - }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS); diff --git
> a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index 976916f870a5..3a516c52d199 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> return -1;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index a2031a7a82cc..850ec7655f82 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* writing to configuration space */
> nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c
> b/drivers/net/octeontx/octeontx_ethdev.c
> index 69c3bda12df8..fb65be2c2dc3 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> octeontx_log_info("Received pkt beyond maxlen %d will be
> dropped",
> frame_size);
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c
> b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index cf7804157198..293306c7be2a 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return rc;
> }
>
> diff --git a/drivers/net/qede/qede_ethdev.c
> b/drivers/net/qede/qede_ethdev.c index 4b971fd1fe3c..6886a4e5efb4
> 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (!dev->data->dev_started && restart) {
> qede_dev_start(dev);
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c index
> 1f55c90b419d..2ee80e2dc41f 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> }
>
> - /*
> - * The driver does not use it, but other PMDs update jumbo frame
> - * flag when MTU is set.
> - */
> - if (mtu > RTE_ETHER_MTU) {
> - struct rte_eth_rxmode *rxmode = &dev->data-
> >dev_conf.rxmode;
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
> b/drivers/net/thunderx/nicvf_ethdev.c
> index c8ae95a61306..b501fee5332c 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t
> mtu)
> struct nicvf *nic = nicvf_pmd_priv(dev);
> uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
> size_t i;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz *
> NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c
> b/drivers/net/txgbe/txgbe_ethdev.c
> index 269de9f848dd..35b98097c3a4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EINVAL;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> TXGBE_FRAME_SIZE_MAX);
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> 4d0584af52e3..1740bab98a83 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t
> mtu)
> int ret;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_dev *dev;
> + int is_jumbo_frame_capable = 0;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t
> mtu)
>
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
> +
> + if ((dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> + is_jumbo_frame_capable = 1;
> }
>
> + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> + return -EINVAL;
> +
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> - if (!ret)
> + if (ret == 0) {
> dev->data->mtu = mtu;
>
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |=
> + DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &=
> + ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> return eth_err(port_id, ret);
> }
>
> --
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
` (4 preceding siblings ...)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-08 15:57 ` Ananyev, Konstantin
2021-10-11 19:47 ` Ferruh Yigit
2021-10-09 10:56 ` lihuisong (C)
6 siblings, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 15:57 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Iremonger, Bernard, Kiran Kumar K,
Nithin Dabilpuram, Hunt, David, Mcnamara, John, Richardson,
Bruce, Igor Russkikh, Steven Webster, Peters, Matt,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Wang, Haiyue, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Daley, John,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Andrew Boyer, Xu, Rosen, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko, Wiles,
Keith, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia, Chenbo,
Chautru, Nicolas, Van Haaren, Harry, Dumitrescu, Cristian,
Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz, Doherty, Declan,
Pavan Nikhilesh, Rybalchenko, Kirill, Singh, Jasvinder,
Thomas Monjalon
Cc: dev
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes it hard to figure
> out which one to use, also having two different methods for a related
> functionality is confusing for the users.
>
> Other issues causing confusion are:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when the application requests jumbo frames,
> which adds additional confusion, and some APIs and PMDs already
> discard this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
> field, which adds configuration complexity for the application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
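As an illustrative sketch of the configuration flow described above (the queue
counts, helper name and MTU value are arbitrary examples, and error handling is
trimmed):

#include <rte_ethdev.h>
#include <rte_ether.h>

/*
 * Request the MTU through the renamed 'rxmode.mtu' field; leaving it 0 makes
 * the library fall back to RTE_ETHER_MTU. Both rte_eth_dev_configure() and
 * rte_eth_dev_set_mtu() store the result in the same dev->data->mtu, which
 * rte_eth_dev_get_mtu() reports back.
 */
static int
configure_port_mtu(uint16_t port_id)
{
        struct rte_eth_conf conf = { .rxmode = { .mtu = RTE_ETHER_MTU } };
        uint16_t mtu;
        int ret;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret != 0)
                return ret;

        /* the effective MTU can be read back or changed later via set_mtu() */
        return rte_eth_dev_get_mtu(port_id, &mtu);
}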
LGTM in general, one question below.
...
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca924221..4d0584af52e3 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
In theory there could be an overflow here, though I do realize that in practice it is an unlikely situation.
Anyway, why uint16_t, why not uint32_t for all variables here?
Just so we don't have to worry about such things.
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
> @@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> + /* Get the real Ethernet overhead length */
> if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + uint16_t overhead_len;
> + uint32_t max_rx_pktlen;
> + int ret;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->mtu + overhead_len;
> if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - int ret = eth_dev_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> + ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> return ret;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index afdc53b674cc..9fba2bd73c84 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
> index 0036bda7465c..1491c815c312 100644
> --- a/lib/ethdev/rte_ethdev_trace.h
> +++ b/lib/ethdev/rte_ethdev_trace.h
> @@ -28,7 +28,7 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_u16(nb_tx_q);
> rte_trace_point_emit_u32(dev_conf->link_speeds);
> rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
> - rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
> + rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
> rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
> rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
> rte_trace_point_emit_u64(dev_conf->txmode.offloads);
> --
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-08 16:51 ` Ananyev, Konstantin
2021-10-11 19:50 ` Ferruh Yigit
2021-10-09 11:43 ` lihuisong (C)
1 sibling, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 16:51 UTC (permalink / raw)
To: Yigit, Ferruh, Thomas Monjalon, Andrew Rybchenko
Cc: Yigit, Ferruh, dev, Huisong Li
> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' set the MTU but
> have slightly different checks. For example, one checks the min MTU against
> RTE_ETHER_MIN_MTU and the other against RTE_ETHER_MIN_LEN.
>
> The checks are moved into a common function to unify them. This also has
> the benefit of common error logs.
>
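A standalone sketch of the bounds both code paths now share (the helper name is
local to this example and not an ethdev API):

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Return 1 when 'mtu' passes the unified checks: within the device MTU range
 * and, with the Ethernet overhead added, within the valid frame size range. */
static int
mtu_in_valid_range(const struct rte_eth_dev_info *info, uint32_t overhead_len,
                   uint16_t mtu)
{
        uint32_t frame_size = mtu + overhead_len;

        if (mtu < info->min_mtu || mtu > info->max_mtu)
                return 0;
        if (frame_size < RTE_ETHER_MIN_LEN || frame_size > info->max_rx_pktlen)
                return 0;
        return 1;
}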
> Suggested-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
> lib/ethdev/rte_ethdev.h | 2 +-
> 2 files changed, 54 insertions(+), 30 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c2b624aba1a0..0a6e952722ae 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> return overhead_len;
> }
>
> +/* rte_eth_dev_info_get() should be called prior to this function */
> +static int
> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
> + uint16_t mtu)
> +{
> + uint16_t overhead_len;
Again, I would just always use 32-bit arithmetic - safe and easy.
Apart from that:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> + uint32_t frame_size;
> +
> + if (mtu < dev_info->min_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) < device min MTU (%u) for port_id %u\n",
> + mtu, dev_info->min_mtu, port_id);
> + return -EINVAL;
> + }
> + if (mtu > dev_info->max_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) > device max MTU (%u) for port_id %u\n",
> + mtu, dev_info->max_mtu, port_id);
> + return -EINVAL;
> + }
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + frame_size = mtu + overhead_len;
> + if (frame_size < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) < min frame size (%u) for port_id %u\n",
> + frame_size, RTE_ETHER_MIN_LEN, port_id);
> + return -EINVAL;
> + }
> +
> + if (frame_size > dev_info->max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) > device max frame size (%u) for port_id %u\n",
> + frame_size, dev_info->max_rx_pktlen, port_id);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
> }
>
> - /*
> - * Check that the maximum RX packet length is supported by the
> - * configured device.
> - */
> if (dev_conf->rxmode.mtu == 0)
> dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> - max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> - if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> - port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> - port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> +
> + ret = eth_dev_validate_mtu(port_id, &dev_info,
> + dev->data->dev_conf.rxmode.mtu);
> + if (ret != 0)
> goto rollback;
> - }
>
> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>
> @@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> @@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> dev_info->rx_desc_lim = lim;
> dev_info->tx_desc_lim = lim;
> dev_info->device = dev->device;
> - dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> + dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> dev_info->max_mtu = UINT16_MAX;
>
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> @@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> * which relies on dev->dev_ops->dev_infos_get.
> */
> if (*dev->dev_ops->dev_infos_get != NULL) {
> - uint16_t overhead_len;
> - uint32_t frame_size;
> -
> ret = rte_eth_dev_info_get(port_id, &dev_info);
> if (ret != 0)
> return ret;
>
> - if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> - return -EINVAL;
> -
> - overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> - dev_info.max_mtu);
> - frame_size = mtu + overhead_len;
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> + ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
> + if (ret != 0)
> + return ret;
> }
>
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 4d0f956a4b28..50e124ff631f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> * };
> *
> * device = dev->device
> - * min_mtu = RTE_ETHER_MIN_MTU
> + * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
> * max_mtu = UINT16_MAX
> *
> * The following fields will be populated if support for dev_infos_get()
> --
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-08 16:53 ` Ananyev, Konstantin
0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 16:53 UTC (permalink / raw)
To: Yigit, Ferruh; +Cc: dev
> Remove 'max-pkt-len' parameter.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> examples/ip_reassembly/main.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index af67db49f7fb..2ff5ea3e7bc5 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -516,7 +516,6 @@ static void
> print_usage(const char *prgname)
> {
> printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
> - " [--max-pkt-len PKTLEN]"
> " [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> " -q NQ: number of RX queues per lcore\n"
> @@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
> int option_index;
> char *prgname = argv[0];
> static struct option lgopts[] = {
> - {"max-pkt-len", 1, 0, 0},
> {"maxflows", 1, 0, 0},
> {"flowttl", 1, 0, 0},
> {NULL, 0, 0, 0}
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-08 17:11 ` Ananyev, Konstantin
2021-10-09 11:09 ` lihuisong (C)
2021-10-10 5:46 ` Matan Azrad
1 sibling, 1 reply; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 17:11 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Daley, John, Hyong Youb Kim,
Gaetan Rivet, Zhang, Qi Z, Wang, Xiao W, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Xing, Beilei,
Wu, Jingjing, Yang, Qiming, Andrew Boyer, Xu, Rosen, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Yong Wang, Nicolau, Radu, Akhil Goyal, Hunt, David,
Mcnamara, John, Thomas Monjalon
Cc: dev
>
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announcing this capability, the application can deduce
> the capability by checking the reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of the application setting this flag explicitly to enable jumbo
> frames, the driver can deduce it by comparing the requested 'mtu' to
> 'RTE_ETHER_MTU'.
>
> Removing this additional configuration for simplification.
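As a rough sketch of the application-side deduction described above (the
function name is illustrative and not from the patch; assumes <rte_ethdev.h>):

#include <rte_ethdev.h>

/* Sketch only: with the capability flag gone, an application can infer
 * jumbo support from the limits reported by the device. */
static int
example_port_supports_jumbo(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return 0;

        /* a max MTU or max Rx frame length above the standard Ethernet
         * sizes implies jumbo frame support */
        return dev_info.max_mtu > RTE_ETHER_MTU ||
                        dev_info.max_rx_pktlen > RTE_ETHER_MAX_LEN;
}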
>
> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-08 17:19 ` Ananyev, Konstantin
0 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 17:19 UTC (permalink / raw)
To: Yigit, Ferruh, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena,
Wang, Haiyue, Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Xing, Beilei, Wu, Jingjing, Yang, Qiming, Zhang,
Qi Z, Xu, Rosen, Shijith Thotton, Srisivasubramanian Srinivasan,
Heinrich Kuhn, Harman Kalra, Jerin Jacob, Nithin Dabilpuram,
Kiran Kumar K, Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Yigit, Ferruh, dev
>
> Move the requested MTU value check to the API to avoid duplicated
> code.
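The shape of the centralized check is roughly the sketch below, reconstructed
from the bounds checks removed from the set-MTU path elsewhere in this series;
the real library helper also logs which bound was violated and handles devices
that do not report a meaningful 'max_mtu':

#include <errno.h>
#include <rte_ethdev.h>

/* Sketch only: one MTU bounds check shared by rte_eth_dev_configure()
 * and rte_eth_dev_set_mtu() instead of per-driver copies. */
static int
example_validate_mtu(const struct rte_eth_dev_info *dev_info, uint32_t mtu)
{
        uint32_t overhead_len;
        uint32_t frame_size;

        if (mtu < dev_info->min_mtu || mtu > dev_info->max_mtu)
                return -EINVAL;

        /* device frame overhead, derived from the reported limits */
        overhead_len = dev_info->max_rx_pktlen - dev_info->max_mtu;
        frame_size = mtu + overhead_len;
        if (frame_size < RTE_ETHER_MIN_LEN ||
                        frame_size > dev_info->max_rx_pktlen)
                return -EINVAL;

        return 0;
}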
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-08 17:20 ` Ananyev, Konstantin
2021-10-09 10:58 ` lihuisong (C)
1 sibling, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-08 17:20 UTC (permalink / raw)
To: Yigit, Ferruh, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Wang, Haiyue, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Zhang, Qi Z, Xu, Rosen, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: Yigit, Ferruh, dev
>
> Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
> and the application should enable the jumbo frame offload for it.
>
> When the jumbo frame offload is not enabled by the application but an MTU
> bigger than RTE_ETHER_MTU is requested, there are two options: either fail
> or enable the jumbo frame offload implicitly.
>
> Many drivers choose to enable the jumbo frame offload implicitly, since
> setting a big MTU value already implies it, and this increases
> usability.
>
> This patch moves this logic from the drivers to the library, both to reduce
> the duplicated code in the drivers and to make the behaviour more visible.
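The common pattern being centralized looks roughly like the sketch below; it
assumes the ethdev-internal definitions are visible (as they are inside the
library and the drivers), and the function name is illustrative:

#include <rte_ethdev.h>

/* Sketch only: derive the jumbo offload flag from the requested MTU in
 * one place instead of repeating this in every driver's mtu_set(). */
static void
example_sync_jumbo_offload(struct rte_eth_dev *dev, uint16_t mtu)
{
        if (mtu > RTE_ETHER_MTU)
                dev->data->dev_conf.rxmode.offloads |=
                                DEV_RX_OFFLOAD_JUMBO_FRAME;
        else
                dev->data->dev_conf.rxmode.offloads &=
                                ~DEV_RX_OFFLOAD_JUMBO_FRAME;
}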
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> --
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
` (5 preceding siblings ...)
2021-10-08 15:57 ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
@ 2021-10-09 10:56 ` lihuisong (C)
6 siblings, 0 replies; 112+ messages in thread
From: lihuisong (C) @ 2021-10-09 10:56 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: dev
On 2021/10/8 0:56, Ferruh Yigit wrote:
> There is confusion about setting the max Rx packet length; this patch aims
> to clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
> result is stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way; they
> store the set values in different variables, which makes it hard to figure
> out which one to use, and having two different methods for a related
> functionality is confusing for the users.
>
> Other issues causing confusion are:
> * The maximum transmission unit (MTU) is the payload of the Ethernet frame,
> while 'max_rx_pkt_len' is the size of the Ethernet frame. The difference
> is the Ethernet frame overhead, and this overhead may vary from device to
> device based on what the device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when the application requested jumbo
> frames, which adds additional confusion, and some APIs and PMDs already
> disregard this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
> field, which adds configuration complexity for the application.
>
> As a solution, both APIs get the MTU as a parameter, and both save the
> result in the same variable '(struct rte_eth_dev)->data->mtu'. For this,
> 'max_rx_pkt_len' is renamed to 'mtu', and it is always valid independently
> of jumbo frame configuration.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
> request and it should be used only within the configure function; the result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both the application and the PMD use the MTU from this variable.
>
> When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
> the default 'RTE_ETHER_MTU' value is used.
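For illustration, the application-side flow then becomes roughly the sketch
below (the queue counts and the 9000-byte MTU are placeholders):

#include <rte_ethdev.h>

/* Sketch only: the initial MTU is requested via rxmode.mtu (0 falls
 * back to RTE_ETHER_MTU); later changes go through rte_eth_dev_set_mtu(). */
static int
example_port_init(uint16_t port_id)
{
        struct rte_eth_conf port_conf = {
                .rxmode = { .mtu = 9000, },
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        if (ret < 0)
                return ret;

        return rte_eth_dev_set_mtu(port_id, RTE_ETHER_MTU);
}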
>
> Additional clarification is done on the scattered Rx configuration, in
> relation to the MTU and the Rx buffer size.
> The MTU is used to configure the device for the physical Rx/Tx size
> limitation, the Rx buffer is where received packets are stored, and many
> PMDs use the mbuf data buffer size as the Rx buffer size.
> PMDs compare the MTU against the Rx buffer size to decide whether to enable
> scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
> than the Rx buffer size should fail.
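A sketch of that comparison, assuming a hypothetical driver whose frame
overhead is the plain Ethernet header plus CRC (real PMDs use their own
overhead constants and the ethdev-internal headers):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch only: scattered Rx is needed when the maximum frame derived
 * from the configured MTU does not fit into a single Rx buffer. */
static int
example_needs_scatter(struct rte_eth_dev *dev, struct rte_mempool *mp)
{
        uint32_t buf_size = rte_pktmbuf_data_room_size(mp) -
                        RTE_PKTMBUF_HEADROOM;
        uint32_t frame_size = dev->data->mtu + RTE_ETHER_HDR_LEN +
                        RTE_ETHER_CRC_LEN;

        return frame_size > buf_size;
}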
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
>
> v5:
> * fix testpmd '--max-pkt-len=###' parameter for DTS jumbo frame test
> ---
> app/test-eventdev/test_perf_common.c | 1 -
> app/test-eventdev/test_pipeline_common.c | 5 +-
> app/test-pmd/cmdline.c | 49 +++----
> app/test-pmd/config.c | 22 ++-
> app/test-pmd/parameters.c | 2 +-
> app/test-pmd/testpmd.c | 115 ++++++++++------
> app/test-pmd/testpmd.h | 4 +-
> app/test/test_link_bonding.c | 1 -
> app/test/test_link_bonding_mode4.c | 1 -
> app/test/test_link_bonding_rssconf.c | 2 -
> app/test/test_pmd_perf.c | 1 -
> doc/guides/nics/dpaa.rst | 2 +-
> doc/guides/nics/dpaa2.rst | 2 +-
> doc/guides/nics/features.rst | 2 +-
> doc/guides/nics/fm10k.rst | 2 +-
> doc/guides/nics/mlx5.rst | 4 +-
> doc/guides/nics/octeontx.rst | 2 +-
> doc/guides/nics/thunderx.rst | 2 +-
> doc/guides/rel_notes/deprecation.rst | 25 ----
> doc/guides/sample_app_ug/flow_classify.rst | 7 +-
> doc/guides/sample_app_ug/l3_forward.rst | 6 +-
> .../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
> doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
> .../sample_app_ug/l3_forward_power_man.rst | 4 +-
> .../sample_app_ug/performance_thread.rst | 4 +-
> doc/guides/sample_app_ug/skeleton.rst | 7 +-
> drivers/net/atlantic/atl_ethdev.c | 3 -
> drivers/net/avp/avp_ethdev.c | 17 +--
> drivers/net/axgbe/axgbe_ethdev.c | 7 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 21 +--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
> drivers/net/cnxk/cnxk_ethdev.c | 9 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
> drivers/net/cxgbe/cxgbe_main.c | 3 +-
> drivers/net/cxgbe/sge.c | 3 +-
> drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
> drivers/net/dpaa2/dpaa2_ethdev.c | 35 ++---
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 18 +--
> drivers/net/e1000/igb_rxtx.c | 16 +--
> drivers/net/ena/ena_ethdev.c | 27 ++--
> drivers/net/enetc/enetc_ethdev.c | 24 +---
> drivers/net/enic/enic_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 42 +++---
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
> drivers/net/hns3/hns3_ethdev.c | 42 +-----
> drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
> drivers/net/hns3/hns3_rxtx.c | 10 +-
> drivers/net/i40e/i40e_ethdev.c | 10 +-
> drivers/net/i40e/i40e_rxtx.c | 4 +-
> drivers/net/iavf/iavf_ethdev.c | 9 +-
> drivers/net/ice/ice_dcf_ethdev.c | 5 +-
> drivers/net/ice/ice_ethdev.c | 14 +-
> drivers/net/ice/ice_rxtx.c | 12 +-
> drivers/net/igc/igc_ethdev.c | 51 ++-----
> drivers/net/igc/igc_ethdev.h | 7 +
> drivers/net/igc/igc_txrx.c | 22 +--
> drivers/net/ionic/ionic_ethdev.c | 12 +-
> drivers/net/ionic/ionic_rxtx.c | 6 +-
> drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
> drivers/net/ixgbe/ixgbe_pf.c | 6 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
> drivers/net/liquidio/lio_ethdev.c | 20 +--
> drivers/net/mlx4/mlx4_rxq.c | 17 +--
> drivers/net/mlx5/mlx5_rxq.c | 25 ++--
> drivers/net/mvneta/mvneta_ethdev.c | 7 -
> drivers/net/mvneta/mvneta_rxtx.c | 13 +-
> drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
> drivers/net/nfp/nfp_common.c | 9 +-
> drivers/net/octeontx/octeontx_ethdev.c | 12 +-
> drivers/net/octeontx2/otx2_ethdev.c | 2 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
> drivers/net/pfe/pfe_ethdev.c | 7 +-
> drivers/net/qede/qede_ethdev.c | 16 +--
> drivers/net/qede/qede_rxtx.c | 8 +-
> drivers/net/sfc/sfc_ethdev.c | 4 +-
> drivers/net/sfc/sfc_port.c | 6 +-
> drivers/net/tap/rte_eth_tap.c | 7 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +-
> drivers/net/txgbe/txgbe_ethdev.c | 7 +-
> drivers/net/txgbe/txgbe_ethdev.h | 4 +
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
> drivers/net/txgbe/txgbe_rxtx.c | 19 +--
> drivers/net/virtio/virtio_ethdev.c | 9 +-
> examples/bbdev_app/main.c | 1 -
> examples/bond/main.c | 1 -
> examples/distributor/main.c | 1 -
> .../pipeline_worker_generic.c | 1 -
> .../eventdev_pipeline/pipeline_worker_tx.c | 1 -
> examples/flow_classify/flow_classify.c | 12 +-
> examples/ioat/ioatfwd.c | 1 -
> examples/ip_fragmentation/main.c | 12 +-
> examples/ip_pipeline/link.c | 2 +-
> examples/ip_reassembly/main.c | 12 +-
> examples/ipsec-secgw/ipsec-secgw.c | 7 +-
> examples/ipv4_multicast/main.c | 9 +-
> examples/kni/main.c | 6 +-
> examples/l2fwd-cat/l2fwd-cat.c | 8 +-
> examples/l2fwd-crypto/main.c | 1 -
> examples/l2fwd-event/l2fwd_common.c | 1 -
> examples/l3fwd-acl/main.c | 129 +++++++++---------
> examples/l3fwd-graph/main.c | 83 +++++++----
> examples/l3fwd-power/main.c | 90 +++++++-----
> examples/l3fwd/main.c | 84 +++++++-----
> .../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
> .../performance-thread/l3fwd-thread/test.sh | 24 ++--
> examples/pipeline/obj.c | 2 +-
> examples/ptpclient/ptpclient.c | 10 +-
> examples/qos_meter/main.c | 1 -
> examples/qos_sched/init.c | 1 -
> examples/rxtx_callbacks/main.c | 10 +-
> examples/skeleton/basicfwd.c | 12 +-
> examples/vhost/main.c | 4 +-
> examples/vm_power_manager/main.c | 11 +-
> lib/ethdev/rte_ethdev.c | 92 +++++++------
> lib/ethdev/rte_ethdev.h | 2 +-
> lib/ethdev/rte_ethdev_trace.h | 2 +-
> 121 files changed, 815 insertions(+), 1073 deletions(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index a9efd027c376..a677451073ae 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len") != 0) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> fprintf(stderr, "Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - fprintf(stderr,
> - "max-pkt-len can not be less than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port->dev_info);
> - if (ret != 0) {
> - fprintf(stderr,
> - "rte_eth_dev_info_get() failed for port %u\n",
> - pid);
> - return;
> - }
> -
> - max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + fprintf(stderr,
> + "max-pkt-len can not be less than %d\n",
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
> - } else {
> - fprintf(stderr, "Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + fprintf(stderr,
> + "rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 9c66329e96ee..db3eeffa0093 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> int diag;
> struct rte_port *rte_port = &ports[port_id];
> struct rte_eth_dev_info dev_info;
> - uint16_t eth_overhead;
> int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag != 0) {
> fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - if (mtu > RTE_ETHER_MTU) {
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (mtu > RTE_ETHER_MTU)
> rte_port->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> - } else
> + else
> rte_port->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 3f94a82e321f..dec5373b346d 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -870,7 +870,7 @@ launch_args_parse(int argc, char** argv)
> if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
> n = atoi(optarg);
> if (n >= RTE_ETHER_MIN_LEN)
> - rx_mode.max_rx_pkt_len = (uint32_t) n;
> + max_rx_pkt_len = n;
> else
> rte_exit(EXIT_FAILURE,
> "Invalid max-pkt-len=%d - should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 97ae52e17ecd..8c11ab23dd14 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -214,6 +214,11 @@ uint16_t stats_period; /**< Period to show statistics (disabled by default) */
> */
> uint8_t f_quit;
>
> +/*
> + * Max Rx frame size, set by '--max-pkt-len' parameter.
> + */
> +uint16_t max_rx_pkt_len;
> +
> /*
> * Configuration of packet segments used to scatter received packets
> * if some of split features is configured.
> @@ -446,13 +451,7 @@ lcoreid_t latencystats_lcore_id = -1;
> /*
> * Ethernet device configuration.
> */
> -struct rte_eth_rxmode rx_mode = {
> - /* Default maximum frame length.
> - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> - * in init_config().
> - */
> - .max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
>
> struct rte_eth_txmode tx_mode = {
> .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1481,11 +1480,24 @@ check_nb_hairpinq(queueid_t hairpinq)
> return 0;
> }
>
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t eth_overhead;
> +
> + if (dev_info->max_mtu != UINT16_MAX &&
> + dev_info->max_rx_pktlen > dev_info->max_mtu)
> + eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
> + else
> + eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return eth_overhead;
> +}
> +
> static void
> init_config_port_offloads(portid_t pid, uint32_t socket_id)
> {
> struct rte_port *port = &ports[pid];
> - uint16_t data_size;
> int ret;
> int i;
>
> @@ -1496,7 +1508,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
> if (ret != 0)
> rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> fprintf(stderr,
> "Updating jumbo frame offload failed for port %u\n",
> @@ -1516,6 +1528,12 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
> if (eth_link_speed)
> port->dev_conf.link_speeds = eth_link_speed;
>
> + if (max_rx_pkt_len) {
> + port->dev_conf.rxmode.mtu = max_rx_pkt_len -
> + get_eth_overhead(&port->dev_info);
> + max_rx_pkt_len = 0;
> + }
> +
> /* set flag to initialize port/queue */
> port->need_reconfig = 1;
> port->need_reconfig_queues = 1;
> @@ -1528,14 +1546,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> -
> - if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
> - TESTPMD_LOG(WARNING,
> - "Configured mbuf size of the first segment %hu\n",
> - mbuf_data_size[0]);
> + uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
> + uint16_t mtu;
> +
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + uint16_t data_size = (mtu + eth_overhead) /
> + port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
> +
> + if (buffer_size > mbuf_data_size[0]) {
> + mbuf_data_size[0] = buffer_size;
> + TESTPMD_LOG(WARNING,
> + "Configured mbuf size of the first segment %hu\n",
> + mbuf_data_size[0]);
> + }
> }
> }
> }
> @@ -2552,6 +2576,7 @@ start_port(portid_t pid)
> pi);
> return -1;
> }
> +
> /* configure port */
> diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
> nb_txq + nb_hairpinq,
> @@ -3451,44 +3476,45 @@ rxtx_port_config(struct rte_port *port)
>
> /*
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> - * MTU is also aligned if JUMBO_FRAME offload is not set.
> + * MTU is also aligned.
> *
> * port->dev_info should be set before calling this function.
> *
> + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> + *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> uint64_t rx_offloads;
> - int ret;
> + uint16_t mtu, new_mtu;
> bool on;
>
> - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> - if (port->dev_info.max_mtu != UINT16_MAX &&
> - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> - eth_overhead = port->dev_info.max_rx_pktlen -
> - port->dev_info.max_mtu;
> - else
> - eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + eth_overhead = get_eth_overhead(&port->dev_info);
>
> - rx_offloads = port->dev_conf.rxmode.offloads;
> + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> + printf("Failed to get MTU for port %u\n", portid);
> + return -1;
> + }
> +
> + if (max_rx_pktlen == 0)
> + max_rx_pktlen = mtu + eth_overhead;
>
> - /* Default config value is 0 to use PMD specific overhead */
> - if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
> + rx_offloads = port->dev_conf.rxmode.offloads;
> + new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
> + if (new_mtu <= RTE_ETHER_MTU) {
> rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> on = false;
> } else {
> if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> fprintf(stderr,
> "Frame size (%u) is not supported by port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len,
> - portid);
> + max_rx_pktlen, portid);
> return -1;
> }
> rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3509,19 +3535,18 @@ update_jumbo_frame_offload(portid_t portid)
> }
> }
>
> - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> - * if unset do it here
> - */
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - ret = eth_dev_set_mtu_mp(portid,
> - port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
> - if (ret)
> - fprintf(stderr,
> - "Failed to set MTU to %u for port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
> - portid);
> + if (mtu == new_mtu)
> + return 0;
> +
> + if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
> + fprintf(stderr,
> + "Failed to set MTU to %u for port %u\n",
> + new_mtu, portid);
> + return -1;
> }
>
> + port->dev_conf.rxmode.mtu = new_mtu;
> +
> return 0;
> }
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 5863b2f43f3e..076c154b2b3a 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -448,6 +448,8 @@ extern uint8_t bitrate_enabled;
>
> extern struct rte_fdir_conf fdir_conf;
>
> +extern uint16_t max_rx_pkt_len;
> +
> /*
> * Configuration of packet segments used to scatter received packets
> * if some of split features is configured.
> @@ -1022,7 +1024,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> .split_hdr_size = 0,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
> static struct rte_eth_conf rss_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 7355ec305916..9dad612058c6 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index df23a5704dca..831bc564883a 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -545,7 +545,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 4fce8cd1c976..483cb7da576f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
> Supports Rx jumbo frames.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.max_rx_pkt_len``.
> + ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
> up to 15364 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index bae73f42d882..1f5619ed53fc 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -606,9 +606,9 @@ Driver options
> and each stride receives one packet. MPRQ can improve throughput for
> small-packet traffic.
>
> - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> + When MPRQ is enabled, MTU can be larger than the size of
> user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
> - configure large stride size enough to accommodate max_rx_pkt_len as long as
> + configure large stride size enough to accommodate MTU as long as
> device allows. Note that this can waste system memory compared to enabling Rx
> scatter and multi-segment packet.
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
> up to 32k bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
> up to 9200 bytes can still reach the host interface.
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a2fe766d4b4f..1063a1fe4bea 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,31 +81,6 @@ Deprecation Notices
> In 19.11 PMDs will still update the field even when the offload is not
> enabled.
>
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
> - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> - The new ``mtu`` field will be used to configure the initial device MTU via
> - ``rte_eth_dev_configure()`` API.
> - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> - The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
> - the configured ``mtu`` value,
> - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> - be used to store the user configuration request.
> - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
> - ``mtu`` field will be always valid.
> - When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
> - value will be used.
> - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
> - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> - An application may need to configure device for a specific Rx packet size, like for
> - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
> - can't be bigger than Rx buffer size.
> - To cover these cases an application needs to know the device packet overhead to be
> - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> - the device packet overhead can be calculated as:
> - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
> -
> * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
> will be removed in 21.11.
> Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
> index 812aaa87b05b..6c4c04e935e4 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -162,12 +162,7 @@ Forwarding application is shown below:
> :end-before: >8 End of initializing a given port.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
> - :language: c
> - :start-after: Ethernet ports configured with default settings using struct. 8<
> - :end-before: >8 End of configuration of Ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
> index 2d5cd5f1c0ba..56af5cd5b383 100644
> --- a/doc/guides/sample_app_ug/l3_forward.rst
> +++ b/doc/guides/sample_app_ug/l3_forward.rst
> @@ -65,7 +65,7 @@ The application has a number of command line options::
> [--lookup LOOKUP_METHOD]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--hash-entry-num]
> [--ipv6]
> @@ -95,9 +95,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> index 2cf6e4556f14..486247ac2e4f 100644
> --- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> @@ -236,7 +236,7 @@ The application has a number of command line options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> + ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
>
>
> where,
> @@ -255,8 +255,6 @@ where,
> * --alg=<val>: optional, ACL classify method to use, one of:
> ``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
> index 03e9a85aa68c..0a3e0d44ecea 100644
> --- a/doc/guides/sample_app_ug/l3_forward_graph.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
> @@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
> [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--per-port-pool]
>
> @@ -63,9 +63,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> index 0495314c87d5..8817eaadbfc3 100644
> --- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> @@ -88,7 +88,7 @@ The application has a number of command line options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
> + ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
>
> where,
>
> @@ -99,8 +99,6 @@ where,
>
> * --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
> index 9b09838f6448..7d1bf6eaae8c 100644
> --- a/doc/guides/sample_app_ug/performance_thread.rst
> +++ b/doc/guides/sample_app_ug/performance_thread.rst
> @@ -59,7 +59,7 @@ The application has a number of command line options::
> -p PORTMASK [-P]
> --rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
> --tx(lcore,thread)[,(lcore,thread)]
> - [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
> + [--max-pkt-len PKTLEN] [--no-numa]
> [--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
> [--parse-ptype]
>
> @@ -80,8 +80,6 @@ Where:
> the lcore the thread runs on, and the id of RX thread with which it is
> associated. The parameters are explained below.
>
> -* ``--enable-jumbo``: optional, enables jumbo frames.
> -
> * ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa``: optional, disables numa awareness.
> diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
> index f7bcd7ed2a1d..6d0de6440105 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -106,12 +106,7 @@ Forwarding application is shown below:
> :end-before: >8 End of main functional part of port initialization.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. literalinclude:: ../../../examples/skeleton/basicfwd.c
> - :language: c
> - :start-after: Configuration of ethernet ports. 8<
> - :end-before: >8 End of configuration of ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return 0;
> }
>
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..0feacc822433 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
> avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
> struct avp_dev *avp)
> {
> - unsigned int max_rx_pkt_len;
> + unsigned int max_rx_pktlen;
>
> - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
>
> - if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> - (max_rx_pkt_len > avp->host_mbuf_size)) {
> + if (max_rx_pktlen > avp->guest_mbuf_size ||
> + max_rx_pktlen > avp->host_mbuf_size) {
> /*
> * If the guest MTU is greater than either the host or guest
> * buffers then chained mbufs have to be enabled in the TX
> * direction. It is assumed that the application will not need
> - * to send packets larger than their max_rx_pkt_len (MRU).
> + * to send packets larger than their MTU.
> */
> return 1;
> }
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
>
> PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
> avp->max_rx_pkt_len,
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
> avp->host_mbuf_size,
> avp->guest_mbuf_size);
>
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> * function; send it truncated to avoid the performance
> * hit of having to manage returning the already
> * allocated buffer to the free list. This should not
> - * happen since the application should have set the
> - * max_rx_pkt_len based on its MTU and it should be
> + * happen since the application should not send
> + * packets larger than its MTU and it should be
> * policing its own packet sizes.
> */
> txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> struct axgbe_port *pdata = dev->data->dev_private;
> int ret;
> struct rte_eth_dev_data *dev_data = dev->data;
> - uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
> + uint16_t max_pkt_len;
>
> dev->dev_ops = &axgbe_eth_dev_ops;
>
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>
> rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
> rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
> max_pkt_len > pdata->rx_buf_size)
> dev_data->scattered_rx = 1;
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (frame_size > AXGBE_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> val = 0;
> }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
> bnx2x_dev_configure(struct rte_eth_dev *dev)
> {
> struct bnx2x_softc *sc = dev->data->dev_private;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
>
> PMD_INIT_FUNC_TRACE(sc);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - dev->data->mtu = sc->mtu;
> - }
> + sc->mtu = dev->data->dev_conf.rxmode.mtu;
>
> if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
> PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index aa7e7fdc85fa..8c6f20b75aed 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
> rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
> - BNXT_NUM_VLANS;
> - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> - }
> + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
> return 0;
>
> resource_error:
> @@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
> */
> static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> uint16_t buf_size;
> int i;
>
> @@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
>
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
> + if (eth_dev->data->mtu + overhead > buf_size)
> return 1;
> }
> return 0;
> @@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
>
> int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> uint32_t rc = 0;
> @@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> if (!eth_dev->data->nb_rx_queues)
> return rc;
>
> - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> - VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> + new_pkt_size = new_mtu + overhead;
>
> /*
> * Disallow any MTU change that would require scattered receive support
> @@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> }
>
> /* Is there a change in mtu setting? */
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
> + if (eth_dev->data->mtu == new_mtu)
> return rc;
>
> for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> }
> }
>
> - if (!rc)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
> -
> if (bnxt_hwrm_config_host_mtu(bp))
> PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 54987d96b34d..412acff42f65 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_VLAN_FILTER;
>
> - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + slave_eth_dev->data->dev_conf.rxmode.mtu =
> + bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
> index 8629193d5049..8d0677cd89d9 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> }
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct cnxk_eth_rxq_sp *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
> /* Setup scatter mode if needed by jumbo */
> nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
> - CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> - rc = cnxk_nix_mtu_set(eth_dev, mtu);
> + rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> plt_err("Failed to set default MTU size, rc=%d", rc);
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> goto exit;
> }
>
> - frame_size += RTE_ETHER_CRC_LEN;
> -
> - if (frame_size > RTE_ETHER_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return err;
>
> /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> + if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> /* set to jumbo mode if needed */
> - if (new_mtu > CXGBE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
>
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> - if (!err)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
> -
> return err;
> }
>
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> const struct rte_eth_rxconf *rx_conf __rte_unused,
> struct rte_mempool *mp)
> {
> - unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
> rxq->fl.size = temp_nb_desc;
>
> /* Set to jumbo mode if necessary */
> - if (pkt_len > CXGBE_ETH_MAX_LEN)
> + if (eth_dev->data->mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
> unsigned int mtu;
> int ret;
>
> - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> + mtu = pi->eth_dev->data->mtu;
>
> conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
> u32 wr_mid;
> u64 cntrl, *end;
> bool v6;
> - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> + u32 max_pkt_len;
>
> /* Reject xmit if queue is stopped */
> if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
> return 0;
> }
>
> + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
> (unlikely(m->pkt_len > max_pkt_len)))
> goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 36d8f9249df1..adbdb87baab9 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (frame_size > DPAA_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> struct fman_if *fif = dev->process_private;
> struct __fman_if *__fif;
> struct rte_intr_handle *intr_handle;
> + uint32_t max_rx_pktlen;
> int speed, duplex;
> int ret;
>
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - DPAA_PMD_DEBUG("enabling jumbo");
> -
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - DPAA_MAX_RX_PKT_LEN)
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - else {
> - DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> - "supported is %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - DPAA_MAX_RX_PKT_LEN);
> - max_len = DPAA_MAX_RX_PKT_LEN;
> - }
> -
> - fman_if_set_maxfrm(dev->process_private, max_len);
> - dev->data->mtu = max_len
> - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> + DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> + "supported is %d",
> + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
> }
>
> + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
> if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
> DPAA_PMD_DEBUG("enabling scatter mode");
> fman_if_set_sg(dev->process_private, 1);
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> u32 flags = 0;
> int ret;
> u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> return -EINVAL;
> }
>
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> + VLAN_TAG_SIZE;
> /* Max packet can fit in single buffer */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + if (max_rx_pktlen <= buffsz) {
> ;
> } else if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - buffsz * DPAA_SGT_MAX_ENTRIES) {
> - DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> + DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
> "MaxSGlist %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz * DPAA_SGT_MAX_ENTRIES);
> + max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
> rte_errno = EOVERFLOW;
> return -rte_errno;
> }
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz - RTE_PKTMBUF_HEADROOM);
> + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
> }
>
> dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>
> dpaa_intf->valid = 1;
> DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
> - fman_if_get_sg_enable(fif),
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + fman_if_get_sg_enable(fif), max_rx_pktlen);
> /* checking if push mode only, no error check for now */
> if (!rxq->is_static &&
> dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 275656fbe47c..97dd8e079a73 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> int tx_l3_csum_offload = false;
> int tx_l4_csum_offload = false;
> int ret, tc_index;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -559,25 +560,19 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
> - ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> - priv->token, eth_conf->rxmode.max_rx_pkt_len
> - - RTE_ETHER_CRC_LEN);
> - if (ret) {
> - DPAA2_PMD_ERR(
> - "Unable to set mtu. check config");
> - return ret;
> - }
> - dev->data->mtu =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> - VLAN_TAG_SIZE;
> - DPAA2_PMD_INFO("MTU configured for the device: %d",
> - dev->data->mtu);
> - } else {
> - return -1;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> + if (ret != 0) {
> + DPAA2_PMD_ERR("Unable to set mtu. check config");
> + return ret;
> }
> + DPAA2_PMD_INFO("MTU configured for the device: %d",
> + dev->data->mtu);
> + } else {
> + return -1;
> }
>
> if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1477,15 +1472,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (frame_size > DPAA2_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index d80fad01e36d..4c114bf90fc7 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
> }
>
> static void
> @@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE);
> + E1000_WRITE_REG(hw, E1000_RLPML,
> + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
> }
>
> static int
> @@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
>
> return 0;
> }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..e9a30d393bd7 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> rctl |= E1000_RCTL_LPE;
>
> /*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) > buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> /* setup MTU */
> - e1000_rlpml_set_vf(hw,
> - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE));
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) > buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index 4cebf60a68a7..3a9d5031b262 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
> return rc;
> }
>
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> - uint32_t max_frame_len = adapter->max_mtu;
> -
> - if (adapter->edev_data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> - max_frame_len =
> - adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - return max_frame_len;
> -}
> -
> static int ena_check_valid_conf(struct ena_adapter *adapter)
> {
> - uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> + uint32_t mtu = adapter->edev_data->mtu;
>
> - if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_INIT_LOG(ERR,
> "Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
> - max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return ENA_COM_UNSUPPORTED;
> }
>
> @@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> ena_dev = &adapter->ena_dev;
> ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
>
> - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_DRV_LOG(ERR,
> "Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
> - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return -EINVAL;
> }
>
> @@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
>
> dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> - dev_info->max_rx_pktlen = adapter->max_mtu;
> + dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + dev_info->min_mtu = ENA_MIN_MTU;
> + dev_info->max_mtu = adapter->max_mtu;
> dev_info->max_mac_addrs = 1;
>
> dev_info->max_rx_queues = adapter->max_num_io_queues;
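
The ena hunk above also starts reporting min_mtu/max_mtu, so an application no longer needs driver-specific knowledge to bound its MTU request. A rough application-side sketch (assuming a valid port_id; not from this patch):

    #include <rte_ethdev.h>

    static int
    apply_clamped_mtu(uint16_t port_id, uint16_t req_mtu)
    {
            struct rte_eth_dev_info info;
            int ret = rte_eth_dev_info_get(port_id, &info);

            if (ret != 0)
                    return ret;
            /* keep the request inside the advertised range */
            if (req_mtu < info.min_mtu)
                    req_mtu = info.min_mtu;
            if (req_mtu > info.max_mtu)
                    req_mtu = info.max_mtu;
            return rte_eth_dev_set_mtu(port_id, req_mtu);
    }
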
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (frame_size > ENETC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads &=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /*setting the MTU*/
> enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
> ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
> struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
> uint64_t rx_offloads = eth_conf->rxmode.offloads;
> uint32_t checksum = L3_CKSUM | L4_CKSUM;
> + uint32_t max_len;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> - ENETC_SET_MAXFRM(max_len));
> - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> - ENETC_MAC_MAXFRM_SIZE);
> - enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> - 2 * ENETC_MAC_MAXFRM_SIZE);
> - dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
> - RTE_ETHER_CRC_LEN;
> - }
> + max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> + enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> + enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
> int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
> * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
> * a hint to the driver to size receive buffers accordingly so that
> * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> - * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
> + * the user decide the buffer size via rxmode.mtu, basically
> * ignoring vNIC mtu.
> */
> device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
> struct rq_enet_desc *rqd = rq->ring.descs;
> unsigned i;
> dma_addr_t dma_addr;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint16_t rq_buf_len;
>
> if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
>
> /*
> * If *not* using scatter and the mbuf size is greater than the
> - * requested max packet size (max_rx_pkt_len), then reduce the
> - * posted buffer size to max_rx_pkt_len. HW still receives packets
> - * larger than max_rx_pkt_len, but they will be truncated, which we
> + * requested max packet size (mtu + eth overhead), then reduce the
> + * posted buffer size to max packet size. HW still receives packets
> + * larger than max packet size, but they will be truncated, which we
> * drop in the rx handler. Not ideal, but better than returning
> * large packets when the user is not expecting them.
> */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
> - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> - rq_buf_len = max_rx_pkt_len;
> + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> + rq_buf_len = max_rx_pktlen;
> for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
> mb = rte_mbuf_raw_alloc(rq->mp);
> if (mb == NULL) {
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> unsigned int mbuf_size, mbufs_per_pkt;
> unsigned int nb_sop_desc, nb_data_desc;
> uint16_t min_sop, max_sop, min_data, max_data;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
>
> /*
> * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
>
> mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM);
> - /* max_rx_pkt_len includes the ethernet header and CRC. */
> - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + /* max_rx_pktlen includes the ethernet header and CRC. */
> + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>
> if (enic->rte_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
> /* ceil((max pkt len)/mbuf_size) */
> - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
> + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
> } else {
> dev_info(enic, "Scatter rx mode disabled\n");
> mbufs_per_pkt = 1;
> - if (max_rx_pkt_len > mbuf_size) {
> + if (max_rx_pktlen > mbuf_size) {
> dev_warning(enic, "The maximum Rx packet size (%u) is"
> " larger than the mbuf size (%u), and"
> " scatter is disabled. Larger packets will"
> " be truncated.\n",
> - max_rx_pkt_len, mbuf_size);
> + max_rx_pktlen, mbuf_size);
> }
> }
>
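
As a worked example for the scatter path above: mbufs_per_pkt stays a plain ceiling division, only the numerator changes. With mtu = 9000 the derived limit is roughly 9000 + 14 + 4 = 9018 bytes (if header and CRC are counted for this device), and with a 2048-byte mbuf data room that gives (9018 + 2047) / 2048 = 5 buffers per packet, exactly as before when the same value came from max_rx_pkt_len.
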
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> rq_sop->data_queue_enable = 1;
> rq_data->in_use = 1;
> /*
> - * HW does not directly support rxmode.max_rx_pkt_len. HW always
> + * HW does not directly support MTU. HW always
> * receives packet sizes up to the "max" MTU.
> * If not using scatter, we can achieve the effect of dropping
> * larger packets by reducing the size of posted buffers.
> * See enic_alloc_rx_queue_mbufs().
> */
> - if (max_rx_pkt_len <
> - enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> - dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
> - " when scatter rx mode is in use.\n");
> + if (enic->rte_dev->data->mtu < enic->max_mtu) {
> + dev_warning(enic,
> + "mtu is ignored when scatter rx mode is in use.\n");
> }
> } else {
> dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> if (mbufs_per_pkt > 1) {
> dev_info(enic, "For max packet size %u and mbuf size %u valid"
> " rx descriptor range is %u to %u\n",
> - max_rx_pkt_len, mbuf_size, min_sop + min_data,
> + max_rx_pktlen, mbuf_size, min_sop + min_data,
> max_sop + max_data);
> }
> dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
> "MTU (%u) is greater than value configured in NIC (%u)\n",
> new_mtu, config_mtu);
>
> - /* Update the MTU and maximum packet length */
> - eth_dev->data->mtu = new_mtu;
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - enic_mtu_to_max_rx_pktlen(new_mtu);
> -
> /*
> * If the device has not started (enic_enable), nothing to do.
> * Later, enic_enable() will set up RQs reflecting the new maximum
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
> FM10K_SRRCTL_LOOPBACK_SUPPRESS);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> + if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> 2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
> rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
> uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index c01e2ec1d450..2d8271cb6095 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>
> /* mtu size is 256~9600 */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - HINIC_MAX_JUMBO_FRAME_SIZE) {
> + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + HINIC_MIN_FRAME_SIZE ||
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + HINIC_MAX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR,
> - "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
> + "Packet length out of range, get packet length:%d, "
> "expect between %d and %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
> HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
>
> - nic_dev->mtu_size =
> - HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
>
> /* rss template */
> err = hinic_config_mq_mode(dev, TRUE);
> @@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - uint32_t frame_size;
> int ret = 0;
>
> PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
> @@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - /* update max frame size */
> - frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> - if (frame_size > HINIC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index 7d37004972bf..4ead227f9122 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
> return 0;
> }
>
> -static int
> -hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
> -{
> - struct hns3_adapter *hns = dev->data->dev_private;
> - struct hns3_hw *hw = &hns->hw;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> - int ret;
> -
> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> - return 0;
> -
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be greater than %u "
> - "and no more than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - return -EINVAL;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3_dev_mtu_set(dev, mtu);
> - if (ret)
> - return ret;
> - dev->data->mtu = mtu;
> -
> - return 0;
> -}
> -
> static int
> hns3_setup_dcb(struct rte_eth_dev *dev)
> {
> @@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - ret = hns3_refresh_mtu(dev, conf);
> - if (ret)
> + ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> goto cfg_err;
>
> ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
> @@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
> + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index 8d9b7979c806..0b5db486f8d6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> uint16_t nb_rx_q = dev->data->nb_rx_queues;
> uint16_t nb_tx_q = dev->data->nb_tx_queues;
> struct rte_eth_rss_conf rss_conf;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> bool gro_en;
> int ret;
>
> @@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> - hns3_err(hw, "maximum Rx packet length must be greater "
> - "than %u and less than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - ret = -EINVAL;
> - goto cfg_err;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3vf_dev_mtu_set(dev, mtu);
> - if (ret)
> - goto cfg_err;
> - dev->data->mtu = mtu;
> - }
> + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> + goto cfg_err;
>
> ret = hns3vf_dev_configure_vlan(dev);
> if (ret)
> @@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index 481872e3957f..a260212f73f1 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
> uint16_t nb_desc)
> {
> struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
> eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
> uint16_t min_vec_bds;
>
> /*
> * HNS3 hardware network engine set scattered as default. If the driver
> * is not work in scattered mode and the pkts greater than buf_size
> - * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
> + * but smaller than frame size will be distributed to multiple BDs.
> * Driver cannot handle this situation.
> */
> - if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
> - hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
> + if (!hw->data->scattered_rx && frame_size > buf_size) {
> + hns3_err(hw, "frame size is not allowed to be set greater "
> "than rx_buf_len if scattered is off.");
> return -EINVAL;
> }
> @@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
> }
>
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
> dev->data->scattered_rx = true;
> }
>
Acked-by: Huisong Li <lihuisong@huawei.com>
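
For hns3/hns3vf the jumbo-only MTU refresh path is gone: configure now unconditionally applies conf->rxmode.mtu through the regular mtu_set callback. From the application side the intended flow after this series looks roughly like the sketch below (illustrative, assuming a valid port_id and queue counts):

    static int
    configure_with_mtu(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_conf conf = { 0 };

            /* request; if left at 0 the library falls back to RTE_ETHER_MTU */
            conf.rxmode.mtu = 9000;
            /* on success both the PMD and the app read the applied value
             * back via rte_eth_dev_get_mtu() / dev->data->mtu */
            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }
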
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index bd97d93dd746..ab571a921f9e 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11775,14 +11775,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index d5847ac6b546..1d27cf2b0a01 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> }
>
> rxq->max_pkt_len =
> - RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> - rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> + RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> + data->mtu + I40E_ETH_OVERHEAD);
> if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 5a5a7f59e152..0eabce275d92 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
> struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> struct rte_eth_dev_data *dev_data = dev->data;
> uint16_t buf_size, max_pkt_len;
> + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
>
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
>
> /* Calculate the maximum packet length allowed */
> max_pkt_len = RTE_MIN((uint32_t)
> rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> @@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
>
> adapter->stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
> num_queue_pairs = vf->num_queue_pairs;
> @@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IAVF_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 4e4cdbcd7d71..c3c7ad88f250 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> - max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + dev->data->mtu + ICE_ETH_OVERHEAD);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 9ab7704ff003..8ee1335ac6cf 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
> pf->adapter_stopped = false;
>
> /* Set the max frame size to default value*/
> - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> + max_frame_size = pf->dev_data->mtu ?
> + pf->dev_data->mtu + ICE_ETH_OVERHEAD :
> ICE_FRAME_SIZE_MAX;
>
> /* Set the max frame size to HW*/
> @@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > ICE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return 0;
> }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 83fb788e6930..f9ef6ce57277 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> struct ice_adapter *ad = rxq->vsi->adapter;
> + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
>
> /* Set buffer size as the head split is disabled. */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM);
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> - rxq->max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev_data->dev_conf.rxmode.max_rx_pkt_len);
> + rxq->max_pkt_len =
> + RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + frame_size);
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
> return -EINVAL;
> }
>
> - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> - RTE_PKTMBUF_HEADROOM);
> -
> /* Check if scattered RX needs to be used. */
> - if (rxq->max_pkt_len > buf_size)
> + if (frame_size > buf_size)
> dev_data->scattered_rx = 1;
>
> rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
>
> #define IGC_INTEL_VENDOR_ID 0x8086
>
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> - RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
> -
> #define IGC_FC_PAUSE_TIME 0x0680
> #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
> #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>
> /* switch to jumbo mode if needed */
> if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= IGC_RCTL_LPE;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl &= ~IGC_RCTL_LPE;
> }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> return 0;
> }
> @@ -2486,6 +2473,7 @@ static int
> igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> + frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> @@ -2519,6 +2498,7 @@ static int
> igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> + frame_size, MAX_RX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
> #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
> #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
>
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
> +
> /*
> * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
> * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
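
Worth noting on the igc change: IGC_ETH_OVERHEAD moves into the header and now counts two VLAN tags, so for a default MTU of 1500 the RLPML register is programmed with 1500 + 14 (header) + 4 (CRC) + 2 * 4 (tags) = 1526 bytes, and the scattered-Rx check in igc_rx_init compares that same value against the per-queue buffer size.
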
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..28d3076439c3 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> struct igc_rx_queue *rxq;
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint32_t rctl;
> uint32_t rxcsum;
> uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> rctl |= IGC_RCTL_LPE;
> -
> - /*
> - * Set maximum packet length by default, and might be updated
> - * together with enabling/disabling dual VLAN.
> - */
> - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> - } else {
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> +
> + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> + /*
> + * Set maximum packet length by default, and might be updated
> + * together with enabling/disabling dual VLAN.
> + */
> + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> + if (max_rx_pktlen > buf_size)
> dev->data->scattered_rx = 1;
> } else {
> /*
> diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
> ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> - uint32_t max_frame_size;
> int err;
>
> IONIC_PRINT_CALL();
>
> /*
> * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> - * is done by the the API.
> + * is done by the API.
> */
>
> - /*
> - * Max frame size is MTU + Ethernet header + VLAN + QinQ
> - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> - */
> - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
> - return -EINVAL;
> -
> err = ionic_lif_change_mtu(lif, mtu);
> if (err)
> return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
> struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
> struct rte_mbuf *rxm, *rxm_seg;
> uint32_t max_frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint64_t pkt_flags = 0;
> uint32_t pkt_type;
> struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
> int __rte_cold
> ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
> {
> - uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
> struct ionic_rx_qcq *rxq;
> int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> {
> struct ionic_rx_qcq *rxq = rx_queue;
> uint32_t frame_size =
> - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> struct ionic_rx_service service_cb_arg;
>
> service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IPN3KE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 8b33897ca167..e5ddae219182 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> struct ixgbe_hw *hw;
> struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
> - struct rte_eth_dev_data *dev_data = dev->data;
> int ret;
>
> ret = ixgbe_dev_info_get(dev, &dev_info);
> @@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> /* If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> */
> - if (dev_data->dev_started && !dev_data->scattered_rx &&
> - (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + if (dev->data->dev_started && !dev->data->scattered_rx &&
> + frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > IXGBE_ETH_MAX_LEN) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU) {
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
>
> return 0;
> @@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
> * set as 0x4.
> */
> if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_JUMBO_FRAME);
> + (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
> else
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_DEFAULT);
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
>
> /* Set RTTBCNRC of queue X */
> IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
> @@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>
> hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> + if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> return -EINVAL;
>
> /* If device is started, refuse mtu that requires the support of
> @@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> */
> if (dev_data->dev_started && !dev_data->scattered_rx &&
> (max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (ixgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> index fbf2b17d160f..9bcbc445f2d0 100644
> --- a/drivers/net/ixgbe/ixgbe_pf.c
> +++ b/drivers/net/ixgbe/ixgbe_pf.c
> @@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> * if PF has jumbo frames enabled which means legacy
> * VFs are disabled.
> */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> break;
> /* fall through */
> default:
> @@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
> * legacy VFs.
> */
> if (max_frame > IXGBE_ETH_MAX_LEN ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + dev->data->mtu > RTE_ETHER_MTU)
> return -1;
> break;
> }
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index bfdfd5e755de..03991711fd6e 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> uint16_t buf_size;
> uint16_t i;
> struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> int rc;
>
> PMD_INIT_FUNC_TRACE();
> @@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (rx_conf->max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
> } else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> @@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> IXGBE_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> + if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> struct ixgbe_hw *hw;
> struct ixgbe_rx_queue *rxq;
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> uint64_t bus_addr;
> uint32_t srrctl, psrtype = 0;
> uint16_t buf_size;
> @@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
> * VF packets received can work in all cases.
> */
> - if (ixgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
> return -EINVAL;
> }
>
> @@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> + (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> dev->data->scattered_rx = 1;
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index b72060a4499b..976916f870a5 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
> - uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> struct lio_dev_ctrl_cmd ctrl_cmd;
> struct lio_ctrl_pkt ctrl_pkt;
>
> @@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return -1;
> }
>
> - if (frame_len > LIO_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
> - eth_dev->data->mtu = mtu;
> -
> return 0;
> }
>
> @@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
> static int
> lio_dev_start(struct rte_eth_dev *eth_dev)
> {
> - uint16_t mtu;
> - uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> int ret = 0;
> @@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
> goto dev_mtu_set_error;
> }
>
> - mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
> - if (mtu < RTE_ETHER_MIN_MTU)
> - mtu = RTE_ETHER_MIN_MTU;
> -
> - if (eth_dev->data->mtu != mtu) {
> - ret = lio_dev_mtu_set(eth_dev, mtu);
> - if (ret)
> - goto dev_mtu_set_error;
> - }
> + ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> + if (ret != 0)
> + goto dev_mtu_set_error;
>
> return 0;
>
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 978cbb8201ea..4a5cfd22aa71 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> int ret;
> uint32_t crc_present;
> uint64_t offloads;
> + uint32_t max_rx_pktlen;
>
> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> };
> /* Enable scattered packets support for this queue if necessary. */
> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
> ;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> - uint32_t size =
> - RTE_PKTMBUF_HEADROOM +
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
> uint32_t sges_n;
>
> /*
> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> /* Make sure sges_n did not overflow. */
> size = mb_len * (1 << rxq->sges_n);
> size -= RTE_PKTMBUF_HEADROOM;
> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
> + if (size < max_rx_pktlen) {
> rte_errno = EOVERFLOW;
> ERROR("%p: too many SGEs (%u) needed to handle"
> " requested maximum packet size %u",
> (void *)dev,
> - 1 << sges_n,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + 1 << sges_n, max_rx_pktlen);
> goto error;
> }
> } else {
> WARN("%p: the requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - (void *)dev,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + (void *)dev, max_rx_pktlen,
> mb_len - RTE_PKTMBUF_HEADROOM);
> }
> DEBUG("%p: maximum number of segments per packet: %u",
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index abd8ce798986..6f4f351222d3 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> uint64_t offloads = conf->offloads |
> dev->data->dev_conf.rxmode.offloads;
> unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
> - unsigned int max_rx_pkt_len = lro_on_queue ?
> + unsigned int max_rx_pktlen = lro_on_queue ?
> dev->data->dev_conf.rxmode.max_lro_pkt_size :
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
> + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
> RTE_PKTMBUF_HEADROOM;
> unsigned int max_lro_size = 0;
> unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> * needed to handle max size packets, replace zero length
> * with the buffer length from the pool.
> */
> - tail_len = max_rx_pkt_len;
> + tail_len = max_rx_pktlen;
> do {
> struct mlx5_eth_rxseg *hw_seg =
> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
> + tmpl->rxq.rxseg_n, max_rx_pktlen,
> MLX5_MAX_RXQ_NSEG);
> rte_errno = ENOTSUP;
> goto error;
> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
> " configured and no enough mbuf space(%u) to contain "
> "the maximum RX packet length(%u) with head-room(%u)",
> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
> RTE_PKTMBUF_HEADROOM);
> rte_errno = ENOSPC;
> goto error;
> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> * following conditions are met:
> * - MPRQ is enabled.
> * - The number of descs is more than the number of strides.
> - * - max_rx_pkt_len plus overhead is less than the max size
> + * - max_rx_pktlen plus overhead is less than the max size
> * of a stride or mprq_stride_size is specified by a user.
> * Need to make sure that there are enough strides to encap
> * the maximum packet size in case mprq_stride_size is set.
> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
> tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
> config->mprq.max_memcpy_len);
> - max_lro_size = RTE_MIN(max_rx_pkt_len,
> + max_lro_size = RTE_MIN(max_rx_pktlen,
> (1u << tmpl->rxq.strd_num_n) *
> (1u << tmpl->rxq.strd_sz_n));
> DRV_LOG(DEBUG,
> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> dev->data->port_id, idx,
> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
> } else if (tmpl->rxq.rxseg_n == 1) {
> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
> tmpl->rxq.sges_n = 0;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> unsigned int sges_n;
>
> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - 1 << sges_n, max_rx_pkt_len,
> + 1 << sges_n, max_rx_pktlen,
> 1u << MLX5_MAX_LOG_RQ_SEGS);
> rte_errno = ENOTSUP;
> goto error;
> }
> tmpl->rxq.sges_n = sges_n;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> }
> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
> DRV_LOG(WARNING,
> diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
> index a3ee15020466..520c6fdb1d31 100644
> --- a/drivers/net/mvneta/mvneta_ethdev.c
> +++ b/drivers/net/mvneta/mvneta_ethdev.c
> @@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_NETA_ETH_HDRS_LEN;
> -
> if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
> priv->multiseg = 1;
>
> @@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> /* It is OK. New MTU will be set later on mvneta_dev_start */
> return 0;
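
The two mvneta hunks above drop the driver-side "dev->data->mtu = mtu;" assignment. That should be safe: rte_eth_dev_set_mtu() records the value in dev->data->mtu when the PMD callback returns 0, so the op only has to program the hardware. A reduced sketch of the resulting callback shape (set_hw_max_frame() is a placeholder for the device-specific write, not a DPDK API):

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Placeholder for the device-specific register write. */
    static int
    set_hw_max_frame(void *priv, uint32_t frame_size)
    {
        (void)priv;
        (void)frame_size;
        return 0; /* pretend the write succeeded */
    }

    static int
    example_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
    {
        uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        /* On success the ethdev layer stores mtu in dev->data->mtu. */
        return set_hw_max_frame(dev->data->dev_private, frame_size);
    }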
> diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
> index dfa7ecc09039..2cd4fb31348b 100644
> --- a/drivers/net/mvneta/mvneta_rxtx.c
> +++ b/drivers/net/mvneta/mvneta_rxtx.c
> @@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> struct mvneta_priv *priv = dev->data->dev_private;
> struct mvneta_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
>
> - if (frame_size < max_rx_pkt_len) {
> + if (frame_size < max_rx_pktlen) {
> MVNETA_LOG(ERR,
> "Mbuf size must be increased to %u bytes to hold up "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
> index 078aefbb8da4..5ce71661c84e 100644
> --- a/drivers/net/mvpp2/mrvl_ethdev.c
> +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> @@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_PP2_ETH_HDRS_LEN;
> - if (dev->data->mtu > priv->max_mtu) {
> - MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
> - dev->data->mtu,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - priv->max_mtu);
> - return -EINVAL;
> - }
> + if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
> + MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
> + dev->data->dev_conf.rxmode.mtu,
> + priv->max_mtu);
> + return -EINVAL;
> }
>
> if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
> @@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> return 0;
>
> @@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> struct mrvl_priv *priv = dev->data->dev_private;
> struct mrvl_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
> int ret, tc, inq;
> uint64_t offloads;
>
> @@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> return -EFAULT;
> }
>
> - frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> - MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
> - if (frame_size < max_rx_pkt_len) {
> + frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
> + if (frame_size < max_rx_pktlen) {
> MRVL_LOG(WARNING,
> "Mbuf size must be increased to %u bytes to hold up "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MRVL_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index 1b4bc33593fb..a2031a7a82cc 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> }
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->mtu = rxmode->max_rx_pkt_len;
> + hw->mtu = dev->data->mtu;
>
> if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> @@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> /* switch to jumbo mode if needed */
> - if ((uint32_t)mtu > RTE_ETHER_MTU)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
> -
> /* writing to configuration space */
> - nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
> + nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> hw->mtu = mtu;
>
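One pattern worth calling out across the nfp/octeontx/otx2/qede/nicvf/txgbe/kni hunks in this patch: the jumbo offload flag is now keyed purely on the MTU rather than on per-driver frame-size macros, which removes the ambiguity about how much L2 overhead each device counts. In effect:

    #include <stdbool.h>
    #include <rte_ether.h>

    /* Jumbo is simply "MTU above the standard 1500-byte Ethernet MTU",
     * independent of how much header/CRC/VLAN overhead the NIC adds. */
    static inline bool
    mtu_is_jumbo(uint16_t mtu)
    {
        return mtu > RTE_ETHER_MTU;
    }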
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index 9f4c0503b4d4..69c3bda12df8 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > OCCTX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
> frame_size);
>
> @@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> /* Setup scatter mode if needed by jumbo */
> - if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (data->mtu > buffsz) {
> nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
> nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
> @@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
> evdev_priv->rx_offload_flags = nic->rx_offload_flags;
> evdev_priv->tx_offload_flags = nic->tx_offload_flags;
>
> - /* Setup MTU based on max_rx_pkt_len */
> - nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
> + /* Setup MTU */
> + nic->mtu = data->mtu;
>
> return 0;
> }
> @@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
> octeontx_recheck_rx_offloads(rxq);
> }
>
> - /* Setting up the mtu based on max_rx_pkt_len */
> + /* Setting up the mtu */
> ret = octeontx_dev_mtu_set(dev, nic->mtu);
> if (ret) {
> octeontx_log_err("Failed to set default MTU size %d", ret);
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index 75d4cabf2e7c..787e8d890215 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->pool);
> buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 552e6bd43d2b..cf7804157198 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > NIX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return rc;
> }
>
> @@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct otx2_eth_rxq *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = data->rx_queues[0];
> @@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> /* Setup scatter mode if needed by jumbo */
> otx2_nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
> -
> - rc = otx2_nix_mtu_set(eth_dev, mtu);
> + rc = otx2_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> otx2_err("Failed to set default MTU size %d", rc);
>
> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
> index feec4d10a26e..2619bd2f2a19 100644
> --- a/drivers/net/pfe/pfe_ethdev.c
> +++ b/drivers/net/pfe/pfe_ethdev.c
> @@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
> static int
> pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - int ret;
> struct pfe_eth_priv_s *priv = dev->data->dev_private;
> uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>
> /*TODO Support VLAN*/
> - ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> - if (!ret)
> - dev->data->mtu = mtu;
> -
> - return ret;
> + return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> }
>
> /* pfe_eth_enet_addr_byte_mac
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index a4304e0eff44..4b971fd1fe3c 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
> return -ENOMEM;
> }
>
> - /* If jumbo enabled adjust MTU */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
> -
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
> eth_dev->data->scattered_rx = 1;
>
> @@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_dev_info dev_info = {0};
> struct qede_fastpath *fp;
> - uint32_t max_rx_pkt_len;
> uint32_t frame_size;
> uint16_t bufsz;
> bool restart = false;
> @@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> DP_ERR(edev, "Error during getting ethernet device info\n");
> return rc;
> }
> - max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
> - frame_size = max_rx_pkt_len;
> +
> + frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
> DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
> mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
> @@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (frame_size > QEDE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->dev_started = 1;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
> -
> return 0;
> }
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 35cde561ba59..c2263787b4ec 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> struct qede_rx_queue *rxq;
> - uint16_t max_rx_pkt_len;
> + uint16_t max_rx_pktlen;
> uint16_t bufsz;
> int rc;
>
> @@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
> dev->data->rx_queues[qid] = NULL;
> }
>
> - max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> /* Fix up RX buffer size */
> bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> /* cache align the mbuf size to simplfy rx_buf_size calculation */
> bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
> if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
> - (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
> + (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
> if (!dev->data->scattered_rx) {
> DP_INFO(edev, "Forcing scatter-gather mode\n");
> dev->data->scattered_rx = 1;
> }
> }
>
> - rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
> + rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
> if (rc < 0)
> return rc;
>
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 2db0d000c3ad..1f55c90b419d 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>
> /*
> * The driver does not use it, but other PMDs update jumbo frame
> - * flag and max_rx_pkt_len when MTU is set.
> + * flag when MTU is set.
> */
> if (mtu > RTE_ETHER_MTU) {
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
> index adb2b2cb8175..22f74735db08 100644
> --- a/drivers/net/sfc/sfc_port.c
> +++ b/drivers/net/sfc/sfc_port.c
> @@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
> {
> const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
> struct sfc_port *port = &sa->port;
> - const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
>
> sfc_log_init(sa, "entry");
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - port->pdu = rxmode->max_rx_pkt_len;
> - else
> - port->pdu = EFX_MAC_PDU(dev_data->mtu);
> + port->pdu = EFX_MAC_PDU(dev_data->mtu);
>
> return 0;
> }
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index c515de3bf71d..0a8d29277aeb 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct pmd_internals *pmd = dev->data->dev_private;
> struct ifreq ifr = { .ifr_mtu = mtu };
> - int err = 0;
>
> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> - if (!err)
> - dev->data->mtu = mtu;
> -
> - return err;
> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> }
>
> static int
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index 561a98fc81a3..c8ae95a61306 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (frame_size > NIC_HW_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> - /* Update max_rx_pkt_len */
> - rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
> nic->mtu = mtu;
>
> for (i = 0; i < nic->sqs_count; i++)
> @@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
> }
>
> /* Setup scatter mode if needed by jumbo */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE > buffsz)
> + if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
> dev->data->scattered_rx = 1;
> if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
> dev->data->scattered_rx = 1;
>
> - /* Setup MTU based on max_rx_pkt_len or default */
> - mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
> - dev->data->dev_conf.rxmode.max_rx_pkt_len
> - - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
> + /* Setup MTU */
> + mtu = dev->data->mtu;
>
> if (nicvf_dev_set_mtu(dev, mtu)) {
> PMD_INIT_LOG(ERR, "Failed to set default mtu size");
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index 006399468841..269de9f848dd 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
> index 3021933965c8..44cfcd76bca4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.h
> +++ b/drivers/net/txgbe/txgbe_ethdev.h
> @@ -55,6 +55,10 @@
> #define TXGBE_5TUPLE_MAX_PRI 7
> #define TXGBE_5TUPLE_MIN_PRI 1
>
> +
> +/* The overhead from MTU to max frame size. */
> +#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> +
> #define TXGBE_RSS_OFFLOAD_ALL ( \
> ETH_RSS_IPV4 | \
> ETH_RSS_NONFRAG_IPV4_TCP | \
> diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
> index 896da8a88770..43dc0ed39b75 100644
> --- a/drivers/net/txgbe/txgbe_ethdev_vf.c
> +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
> @@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> if (txgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index 1a261287d1bd..c6cd3803c434 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure jumbo frame support, if any.
> */
> - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
> - } else {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
> - }
> + wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> + TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
>
> /*
> * If loopback mode is configured, set LPBK bit.
> @@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> + if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> + 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * VF packets received can work in all cases.
> */
> if (txgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + dev->data->mtu + TXGBE_ETH_OVERHEAD);
> return -EINVAL;
> }
>
> @@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> + (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index b60eeb24abe7..5d341a3e23bb 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> hw->max_rx_pkt_len = frame_size;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
>
> return 0;
> }
> @@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
> + if (rxmode->mtu > hw->max_mtu)
> req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
> - else
> - hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
> + hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
>
> if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM))
> diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
> index adbd40808396..68e3c13730ad 100644
> --- a/examples/bbdev_app/main.c
> +++ b/examples/bbdev_app/main.c
> @@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
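
For the example applications that only ever used the old 1518-byte default, the field simply disappears; leaving rxmode.mtu at 0 makes the library fall back to the standard 1500-byte MTU, so a zero-initialised rte_eth_conf is enough. A minimal sketch (queue counts are arbitrary):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_default_port(uint16_t port_id)
    {
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
        /* conf.rxmode.mtu == 0 -> library default (RTE_ETHER_MTU) */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }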
> diff --git a/examples/bond/main.c b/examples/bond/main.c
> index a63ca70a7f06..25ca459be57b 100644
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/examples/distributor/main.c b/examples/distributor/main.c
> index d0f40a1fb4bc..8c4a8feec0c2 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 5ed0dc73ec60..e26be8edf28f 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index ab8c6d6a0dad..476b147bdfcc 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
> index 65c1d85cf2fb..8a43f6ac0f92 100644
> --- a/examples/flow_classify/flow_classify.c
> +++ b/examples/flow_classify/flow_classify.c
> @@ -59,14 +59,6 @@ static struct{
> } parm_config;
> const char cb_port_delim[] = ":";
>
> -/* Ethernet ports configured with default settings using struct. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of Ethernet ports. */
> -
> /* Creation of flow classifier object. 8< */
> struct flow_classifier {
> struct rte_flow_classifier *cls;
> @@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
> static inline int
> port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> struct rte_ether_addr addr;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> @@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index b3977a8be561..fdc66368dce9 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
> index f24536972084..12062a785dc6 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> @@ -918,9 +919,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> @@ -963,8 +964,7 @@ main(int argc, char **argv)
> }
>
> /* set the mtu to the maximum received packet size */
> - ret = rte_eth_dev_set_mtu(portid,
> - local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
> + ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
> if (ret < 0) {
> printf("\n");
> rte_exit(EXIT_FAILURE, "Set MTU failed: "
> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
> index 16bcffe356bc..9ba02e687adb 100644
> --- a/examples/ip_pipeline/link.c
> +++ b/examples/ip_pipeline/link.c
> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
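
Spelling out the ip_pipeline default above: the historical 9000-byte max packet length maps to an MTU of 9000 - 14 (Ethernet header) - 4 (CRC) = 8982 bytes, so the on-wire limit the link is configured with does not change. A compile-time check of that arithmetic, if anyone wants to keep it visible:

    #include <assert.h>
    #include <rte_ether.h>

    static_assert(9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) == 8982,
                  "9000-byte jumbo frame corresponds to an 8982-byte MTU");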
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index 8645ac790be4..e5c7d46d2caa 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_JUMBO_FRAME),
> @@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
>
> /* mbufs stored int the gragment table. 8< */
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
> + + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += nb_rxd + nb_txd;
>
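A worked example for the pool sizing above (the numbers are illustrative, not the macros' actual values): with an MTU of 9582, 18 bytes of L2 overhead and 2048-byte data buffers, one reassembled frame can span ceil(9600 / 2048) = 5 mbufs, so nb_mbuf is scaled by 5 before being doubled for the IPv4 and IPv6 tables. The per-frame factor in isolation:

    #include <stdint.h>

    static inline uint32_t
    mbufs_per_frame(uint32_t mtu, uint32_t l2_overhead, uint32_t buf_size)
    {
        /* Same rounding-up division as in setup_queue_tbl() above. */
        return (mtu + l2_overhead + buf_size - 1) / buf_size;
    }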
> @@ -1054,9 +1056,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 7ad94cb8228b..d032a47d1c3b 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
> static void
> port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> {
> - uint32_t frame_size;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_txconf *txconf;
> uint16_t nb_tx_queue, nb_rx_queue;
> @@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> - frame_size = MTU_TO_FRAMELEN(mtu_size);
> - if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
> + if (mtu_size > RTE_ETHER_MTU)
> local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - local_port_conf.rxmode.max_rx_pkt_len = frame_size;
> + local_port_conf.rxmode.mtu = mtu_size;
>
> if (multi_seg_required()) {
> local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
> diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
> index cc527d7f6b38..b3993685ec92 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> },
> @@ -715,9 +716,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/kni/main.c b/examples/kni/main.c
> index 2a993a0ca460..62f6e42a9437 100644
> --- a/examples/kni/main.c
> +++ b/examples/kni/main.c
> @@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
>
> memcpy(&conf, &port_conf, sizeof(conf));
> /* Set new MTU */
> - if (new_mtu > RTE_ETHER_MAX_LEN)
> + if (new_mtu > RTE_ETHER_MTU)
> conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* mtu + length of header + length of FCS = max pkt length */
> - conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
> - KNI_ENET_FCS_SIZE;
> + conf.rxmode.mtu = new_mtu;
> ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> if (ret < 0) {
> RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
> diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
> index 9b3e324efb23..d9cf00c9dfc7 100644
> --- a/examples/l2fwd-cat/l2fwd-cat.c
> +++ b/examples/l2fwd-cat/l2fwd-cat.c
> @@ -19,10 +19,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -};
> -
> /* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
>
> /*
> @@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> /* Configure the Ethernet device. */
> retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> if (retval != 0)
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 66d1491bf76d..f9438176cbb1 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
> index 19f32809aa9d..9040be5ed9b6 100644
> --- a/examples/l2fwd-event/l2fwd_common.c
> +++ b/examples/l2fwd-event/l2fwd_common.c
> @@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
> uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
> struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index a1f457b564b6..7abb612ee6a4 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> /* ethernet addresses of ports */
> @@ -201,8 +202,8 @@ enum {
> OPT_CONFIG_NUM = 256,
> #define OPT_NONUMA "no-numa"
> OPT_NONUMA_NUM,
> -#define OPT_ENBJMO "enable-jumbo"
> - OPT_ENBJMO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_RULE_IPV4 "rule_ipv4"
> OPT_RULE_IPV4_NUM,
> #define OPT_RULE_IPV6 "rule_ipv6"
> @@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
>
> usage_acl_alg(alg, sizeof(alg));
> printf("%s [EAL options] -- -p PORTMASK -P"
> - "--"OPT_RULE_IPV4"=FILE"
> - "--"OPT_RULE_IPV6"=FILE"
> + " --"OPT_RULE_IPV4"=FILE"
> + " --"OPT_RULE_IPV6"=FILE"
> " [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
> - " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
> + " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> - " -P : enable promiscuous mode\n"
> - " --"OPT_CONFIG": (port,queue,lcore): "
> - "rx queues configuration\n"
> + " -P: enable promiscuous mode\n"
> + " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
> " --"OPT_NONUMA": optional, disable numa awareness\n"
> - " --"OPT_ENBJMO": enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> - " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
> - "file. "
> + " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
> + " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
> "Each rule occupy one line. "
> "2 kinds of rules are supported. "
> "One is ACL entry at while line leads with character '%c', "
> - "another is route entry at while line leads with "
> - "character '%c'.\n"
> - " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
> - "entries file.\n"
> + "another is route entry at while line leads with character '%c'.\n"
> + " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
> " --"OPT_ALG": ACL classify method to use, one of: %s\n",
> prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
> }
> @@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
> int option_index;
> char *prgname = argv[0];
> static struct option lgopts[] = {
> - {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> - {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> - {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
> - {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> - {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> - {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> - {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> - {NULL, 0, 0, 0 }
> + {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> + {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> + {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> + {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> + {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> + {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> + {NULL, 0, 0, 0 }
> };
>
> argvopt = argv;
> @@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case OPT_ENBJMO_NUM:
> - {
> - struct option lenopts = {
> - "max-pkt-len",
> - required_argument,
> - 0,
> - 0
> - };
> -
> - printf("jumbo frame is enabled\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, then use the
> - * default value RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret > MAX_JUMBO_PKT_LEN)) {
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length "
> - "to %u\n",
> - (unsigned int)
> - port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> case OPT_RULE_IPV4_NUM:
> parm_config.rule_ipv4_name = optarg;
> break;
> @@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -2080,6 +2081,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
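
A worked example for the two helpers added above, with illustrative device capabilities: a NIC reporting max_rx_pktlen = 9600 and max_mtu = 9582 yields an 18-byte overhead (Ethernet header + CRC), so "--max-pkt-len 9000" becomes rxmode.mtu = 8982; a device that leaves max_mtu at UINT16_MAX gets the same 18-byte default. Reduced to a single helper:

    #include <stdint.h>
    #include <rte_ether.h>

    /* MTU the application should request for a desired max frame length. */
    static uint16_t
    mtu_for_frame_len(uint16_t pkt_len, uint32_t dev_max_rx_pktlen,
                      uint16_t dev_max_mtu)
    {
        uint16_t overhead;

        if (dev_max_mtu != UINT16_MAX && dev_max_rx_pktlen > dev_max_mtu)
            overhead = dev_max_rx_pktlen - dev_max_mtu;
        else
            overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        return pkt_len - overhead;
    }

The same helper pair is duplicated verbatim in l3fwd-graph, l3fwd-power and l3fwd below; hoisting it into a shared header could be a follow-up, but that is cosmetic.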
> diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> index a0de8ca9b42d..b431b9ff5f3c 100644
> --- a/examples/l3fwd-graph/main.c
> +++ b/examples/l3fwd-graph/main.c
> @@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> @@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
>
> static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
> @@ -259,7 +260,7 @@ print_usage(const char *prgname)
> " [-P]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--per-port-pool]\n\n"
>
> @@ -268,9 +269,7 @@ print_usage(const char *prgname)
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
> "port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --per-port-pool: Use separate buffer pool per port\n\n",
> prgname);
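
For anyone updating run scripts: the separate --enable-jumbo switch is gone and the frame size is given directly, e.g. (binary name, cores and ports below are only illustrative):

    ./dpdk-l3fwd-graph -l 0-2 -n 4 -- -p 0x3 --config="(0,0,1),(1,0,2)" --max-pkt-len 9000

Internally the value is converted to an MTU using the device-reported overhead, as in the config_port_max_pkt_len() helper further down in this file.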
> @@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
> #define CMD_LINE_OPT_CONFIG "config"
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> enum {
> /* Long options mapped to a short option */
> @@ -416,7 +415,7 @@ enum {
> CMD_LINE_OPT_CONFIG_NUM,
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> };
>
> @@ -424,7 +423,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> {NULL, 0, 0, 0},
> };
> @@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr, "Invalid maximum "
> - "packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> }
>
> @@ -722,6 +701,43 @@ graph_main_loop(void *conf)
> }
> /* >8 End of main processing loop. */
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -807,6 +823,13 @@ main(int argc, char **argv)
> nb_rx_queue, n_tx_queue);
>
> rte_eth_dev_info_get(portid, &dev_info);
> +
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index aa7b8db44ae8..e58561327c48 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
> }
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
>
>
> @@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
> " [--config (port,queue,lcore)[,(port,queue,lcore]]"
> " [--high-perf-cores CORELIST"
> " [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> - " -P : enable promiscuous mode\n"
> + " -P: enable promiscuous mode\n"
> " --config (port,queue,lcore): rx queues configuration\n"
> " --high-perf-cores CORELIST: list of high performance cores\n"
> " --perf-config: similar as config, cores specified as indices"
> " for bins containing high or regular performance cores\n"
> " --no-numa: optional, disable numa awareness\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
> " --parse-ptype: parse packet type by software\n"
> " --legacy: use legacy interrupt-based scaling\n"
> " --empty-poll: enable empty poll detection"
> @@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
> #define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
> #define CMD_LINE_OPT_TELEMETRY "telemetry"
> #define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
>
> /* Parse the argument given in the command line of the application */
> static int
> @@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
> {"perf-config", 1, 0, 0},
> {"high-perf-cores", 1, 0, 0},
> {"no-numa", 0, 0, 0},
> - {"enable-jumbo", 0, 0, 0},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
> {CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
> {CMD_LINE_OPT_LEGACY, 0, 0, 0},
> @@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
> }
>
> if (!strncmp(lgopts[option_index].name,
> - "enable-jumbo", 12)) {
> - struct option lenopts =
> - {"max-pkt-len", required_argument, \
> - 0, 0};
> -
> - printf("jumbo frame is enabled \n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /**
> - * if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (0 == getopt_long(argc, argvopt, "",
> - &lenopts, &option_index)) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret > MAX_JUMBO_PKT_LEN)){
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame "
> - "max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + CMD_LINE_OPT_MAX_PKT_LEN,
> + sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> }
>
> if (!strncmp(lgopts[option_index].name,
> @@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> /* Power library initialized in the main routine. 8< */
> int
> main(int argc, char **argv)
> @@ -2622,6 +2634,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 00ac267af1dd..cb9bc7ad6002 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
> static uint8_t lkp_per_socket[NB_SOCKETS];
>
> @@ -326,7 +327,7 @@ print_usage(const char *prgname)
> " [--lookup]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--hash-entry-num]"
> " [--ipv6]"
> @@ -344,9 +345,7 @@ print_usage(const char *prgname)
> " Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
> " --ipv6: Set if running ipv6 packets\n"
> @@ -566,7 +565,7 @@ static const char short_options[] =
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> #define CMD_LINE_OPT_IPV6 "ipv6"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
> #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> @@ -584,7 +583,7 @@ enum {
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> CMD_LINE_OPT_IPV6_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
> CMD_LINE_OPT_PARSE_PTYPE_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> @@ -599,7 +598,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
> {CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> @@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
> ipv6 = 1;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {
> - "max-pkt-len", required_argument, 0, 0
> - };
> -
> - port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr,
> - "invalid maximum packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
>
> case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
> return 0;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> static void
> l3fwd_poll_resource_setup(void)
> {
> @@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
> index 2f593abf263d..b6cddc8c7b51 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> @@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
> printf("%s [EAL options] -- -p PORTMASK -P"
> " [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
> " [--tx (lcore,thread)[,(lcore,thread]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]"
> " [--parse-ptype]\n\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> " -P : enable promiscuous mode\n"
> @@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
> " --no-numa: optional, disable numa awareness\n"
> " --ipv6: optional, specify it if running ipv6 packets\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
> " --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
> " --no-lthreads: turn off lthread model\n"
> " --parse-ptype: set to use software to analyze packet type\n\n",
> @@ -2877,8 +2877,8 @@ enum {
> OPT_NO_NUMA_NUM,
> #define OPT_IPV6 "ipv6"
> OPT_IPV6_NUM,
> -#define OPT_ENABLE_JUMBO "enable-jumbo"
> - OPT_ENABLE_JUMBO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_HASH_ENTRY_NUM "hash-entry-num"
> OPT_HASH_ENTRY_NUM_NUM,
> #define OPT_NO_LTHREADS "no-lthreads"
> @@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
> {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> {OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
> {OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
> - {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> {OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
> {OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
> {OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
> @@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
> parse_ptype_on = 1;
> break;
>
> - case OPT_ENABLE_JUMBO_NUM:
> - {
> - struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - printf("jumbo frame is enabled - disabling simple TX path\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /* if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> -
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
> - printf("invalid packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> case OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -3577,6 +3589,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
> index f0b6e271a5f3..3dd33407ea41 100755
> --- a/examples/performance-thread/l3fwd-thread/test.sh
> +++ b/examples/performance-thread/l3fwd-thread/test.sh
> @@ -11,7 +11,7 @@ case "$1" in
> echo "1.1 1 L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -23,7 +23,7 @@ case "$1" in
> echo "1.2 1 L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -34,7 +34,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -45,7 +45,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -61,7 +61,7 @@ case "$1" in
> echo "2.1 N L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -73,7 +73,7 @@ case "$1" in
> echo "2.2 N L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -84,7 +84,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -95,7 +95,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -111,7 +111,7 @@ case "$1" in
> echo "3.1 N L-threads per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(0,0)" \
> --stat-lcore 1
> @@ -121,7 +121,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,1)" \
> --tx="(0,0)(0,1)" \
> --stat-lcore 1
> @@ -131,7 +131,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=8)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
> --tx="(0,0)(0,1)(0,2)(0,3)" \
> --stat-lcore 1
> @@ -141,7 +141,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=16)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
> --tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
> --stat-lcore 1
> diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
> index 467cda5a6dac..4f20dfc4be06 100644
> --- a/examples/pipeline/obj.c
> +++ b/examples/pipeline/obj.c
> @@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> index 4f32ade7fbf7..3b6c6c297f43 100644
> --- a/examples/ptpclient/ptpclient.c
> +++ b/examples/ptpclient/ptpclient.c
> @@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
> uint8_t ptp_enabled_port_nb;
> static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static const struct rte_ether_addr ether_multicast = {
> .addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
> };
> @@ -178,7 +172,7 @@ static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> struct rte_eth_dev_info dev_info;
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1;
> const uint16_t tx_rings = 1;
> int retval;
> @@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index 7ffccc8369dc..c32d2e12e633 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> index 1abe003fc6ae..1367569c65db 100644
> --- a/examples/qos_sched/init.c
> +++ b/examples/qos_sched/init.c
> @@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
> index ab6fa7d56c5d..6845c396b8d9 100644
> --- a/examples/rxtx_callbacks/main.c
> +++ b/examples/rxtx_callbacks/main.c
> @@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
> static const char usage[] =
> "%s EAL_ARGS -- [-t]\n";
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static struct {
> uint64_t total_cycles;
> uint64_t total_queue_cycles;
> @@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
> index ae9bbee8d820..fd7207aee758 100644
> --- a/examples/skeleton/basicfwd.c
> +++ b/examples/skeleton/basicfwd.c
> @@ -17,14 +17,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -/* Configuration of ethernet ports. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of ethernet ports. */
> -
> /* basicfwd.c: Basic DPDK skeleton forwarding example. */
>
> /*
> @@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index d0bf1f31e36a..da381b41c0c5 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -44,6 +44,7 @@
> #define BURST_RX_RETRIES 4 /* Number of retries on RX. */
>
> #define JUMBO_FRAME_MAX_SIZE 0x2600
> +#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
>
> /* State of virtio device. */
> #define DEVICE_MAC_LEARNING 0
> @@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
> if (ret) {
> vmdq_conf_default.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - vmdq_conf_default.rxmode.max_rx_pkt_len
> - = JUMBO_FRAME_MAX_SIZE;
> + vmdq_conf_default.rxmode.mtu = MAX_MTU;
> }
> break;
>
> diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
> index e59fb7d3478b..e19d79a40802 100644
> --- a/examples/vm_power_manager/main.c
> +++ b/examples/vm_power_manager/main.c
> @@ -51,17 +51,10 @@
> static uint32_t enabled_port_mask;
> static volatile bool force_quit;
>
> -/****************/
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca924221..4d0584af52e3 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
> @@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> + /* Get the real Ethernet overhead length */
> if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + uint16_t overhead_len;
> + uint32_t max_rx_pktlen;
> + int ret;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->mtu + overhead_len;
> if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - int ret = eth_dev_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> + ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> return ret;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index afdc53b674cc..9fba2bd73c84 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
> index 0036bda7465c..1491c815c312 100644
> --- a/lib/ethdev/rte_ethdev_trace.h
> +++ b/lib/ethdev/rte_ethdev_trace.h
> @@ -28,7 +28,7 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_u16(nb_tx_q);
> rte_trace_point_emit_u32(dev_conf->link_speeds);
> rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
> - rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
> + rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
> rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
> rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
> rte_trace_point_emit_u64(dev_conf->txmode.offloads);
* Re: [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 17:20 ` Ananyev, Konstantin
@ 2021-10-09 10:58 ` lihuisong (C)
1 sibling, 0 replies; 112+ messages in thread
From: lihuisong (C) @ 2021-10-09 10:58 UTC (permalink / raw)
To: Ferruh Yigit, Somalapuram Amaranath, Ajit Khaparde,
Somnath Kotur, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy, Hemant Agrawal,
Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: dev
On 2021/10/8 0:56, Ferruh Yigit wrote:
> Setting MTU bigger than RTE_ETHER_MTU requires the jumbo frame support,
> and application should enable the jumbo frame offload support for it.
>
> When jumbo frame offload is not enabled by application, but MTU bigger
> than RTE_ETHER_MTU is requested there are two options, either fail or
> enable jumbo frame offload implicitly.
>
> Enabling jumbo frame offload implicitly is selected by many drivers
> since setting a big MTU value already implies it, and this increases
> usability.
>
> This patch moves this logic from drivers to the library, both to reduce
> the duplicated code in the drivers and to make behaviour more visible.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> ---
> drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
> drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
> drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
> drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
> drivers/net/dpaa/dpaa_ethdev.c | 7 -------
> drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
> drivers/net/e1000/em_ethdev.c | 9 ++-------
> drivers/net/e1000/igb_ethdev.c | 9 ++-------
> drivers/net/enetc/enetc_ethdev.c | 7 -------
> drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
> drivers/net/hns3/hns3_ethdev.c | 8 --------
> drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
> drivers/net/i40e/i40e_ethdev.c | 5 -----
> drivers/net/iavf/iavf_ethdev.c | 7 -------
> drivers/net/ice/ice_ethdev.c | 5 -----
> drivers/net/igc/igc_ethdev.c | 9 ++-------
> drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
> drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
> drivers/net/liquidio/lio_ethdev.c | 7 -------
> drivers/net/nfp/nfp_common.c | 6 ------
> drivers/net/octeontx/octeontx_ethdev.c | 5 -----
> drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
> drivers/net/qede/qede_ethdev.c | 4 ----
> drivers/net/sfc/sfc_ethdev.c | 9 ---------
> drivers/net/thunderx/nicvf_ethdev.c | 6 ------
> drivers/net/txgbe/txgbe_ethdev.c | 6 ------
> lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
> 27 files changed, 29 insertions(+), 166 deletions(-)
>
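To illustrate the behaviour described in the commit message from the application side: the caller only sets the MTU, and on success the library stores it in dev->data->mtu and sets or clears DEV_RX_OFFLOAD_JUMBO_FRAME on the application's behalf (see the rte_ethdev.c hunk at the end of this patch). A minimal sketch; the MTU value is illustrative:

#include <rte_ethdev.h>

/* Illustrative sketch: request a jumbo-sized MTU and rely on the library
 * to flip DEV_RX_OFFLOAD_JUMBO_FRAME once the driver accepts it.
 */
static int
request_jumbo_mtu(uint16_t port_id)
{
	uint16_t mtu = 9000;	/* > RTE_ETHER_MTU, so jumbo support is required */

	return rte_eth_dev_set_mtu(port_id, mtu);
}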
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
> index 76aeec077f2b..2960834b4539 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> val = 1;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> val = 0;
> - }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> return 0;
> }
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 8c6f20b75aed..07ee19938930 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> return -EINVAL;
> }
>
> - if (new_mtu > RTE_ETHER_MTU) {
> + if (new_mtu > RTE_ETHER_MTU)
> bp->flags |= BNXT_FLAG_JUMBO;
> - bp->eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - } else {
> - bp->eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> bp->flags &= ~BNXT_FLAG_JUMBO;
> - }
>
> /* Is there a change in mtu setting? */
> if (eth_dev->data->mtu == new_mtu)
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index 695d0d6fd3e2..349896f6a1bf 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> plt_err("Failed to max Rx frame length, rc=%d", rc);
> goto exit;
> }
> -
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 8cf61f12a8d6..0c9cc2f5bb3f 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* set to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
> -1, -1, true);
> return err;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index adbdb87baab9..57b09f16ba44 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 97dd8e079a73..737b474dd814 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -1472,13 +1472,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
> index 6f418a36aa04..1b41dd04df5a 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> return 0;
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index 4c114bf90fc7..a061d0529dd1 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= E1000_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~E1000_RCTL_LPE;
> - }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index cdb9783b5372..fbcbbb6c0533 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads &=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 2d8271cb6095..4b30dfa222a8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -1547,13 +1547,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index 4ead227f9122..e1d465de8234 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2571,7 +2571,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> struct hns3_adapter *hns = dev->data->dev_private;
> uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
> struct hns3_hw *hw = &hns->hw;
> - bool is_jumbo_frame;
> int ret;
>
> if (dev->data->dev_started) {
> @@ -2581,7 +2580,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2596,12 +2594,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return ret;
> }
>
> - if (is_jumbo_frame)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index 0b5db486f8d6..3438b3650de6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> rte_spinlock_unlock(&hw->lock);
> return ret;
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
Acked-by: Huisong Li <lihuisong@huawei.com>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index ab571a921f9e..9283adb19304 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11775,11 +11775,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 0eabce275d92..844d26d87ba6 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 8ee1335ac6cf..3038a9714517 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index b26723064b07..dcbc26b8186e 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> rctl = IGC_READ_REG(hw, IGC_RCTL);
> -
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> rctl |= IGC_RCTL_LPE;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 3634c0c8c5f0..e8a33f04bd69 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (rpst->i40e_pf_eth) {
> ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> mtu);
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index e5ddae219182..c337430f2df8 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5198,13 +5198,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> - } else {
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> - }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
> index 976916f870a5..3a516c52d199 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> return -1;
> }
>
> - if (mtu > RTE_ETHER_MTU)
> - eth_dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - eth_dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return 0;
> }
>
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index a2031a7a82cc..850ec7655f82 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EBUSY;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> /* writing to configuration space */
> nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
> index 69c3bda12df8..fb65be2c2dc3 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
> frame_size);
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index cf7804157198..293306c7be2a 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (mtu > RTE_ETHER_MTU)
> - dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> return rc;
> }
>
> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
> index 4b971fd1fe3c..6886a4e5efb4 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (!dev->data->dev_started && restart) {
> qede_dev_start(dev);
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 1f55c90b419d..2ee80e2dc41f 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1064,15 +1064,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> }
> }
>
> - /*
> - * The driver does not use it, but other PMDs update jumbo frame
> - * flag when MTU is set.
> - */
> - if (mtu > RTE_ETHER_MTU) {
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - }
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
> index c8ae95a61306..b501fee5332c 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> struct nicvf *nic = nicvf_pmd_priv(dev);
> uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
> size_t i;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (mtu > RTE_ETHER_MTU)
> - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index 269de9f848dd..35b98097c3a4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - /* switch to jumbo mode if needed */
> - if (mtu > RTE_ETHER_MTU)
> - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> - else
> - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> TXGBE_FRAME_SIZE_MAX);
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 4d0584af52e3..1740bab98a83 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3639,6 +3639,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> int ret;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_dev *dev;
> + int is_jumbo_frame_capable = 0;
>
> RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> dev = &rte_eth_devices[port_id];
> @@ -3657,12 +3658,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>
> if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> return -EINVAL;
> +
> + if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> + is_jumbo_frame_capable = 1;
> }
>
> + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
> + return -EINVAL;
> +
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> - if (!ret)
> + if (ret == 0) {
> dev->data->mtu = mtu;
>
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |=
> + DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &=
> + ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> return eth_err(port_id, ret);
> }
>
* Re: [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag
2021-10-08 17:11 ` Ananyev, Konstantin
@ 2021-10-09 11:09 ` lihuisong (C)
0 siblings, 0 replies; 112+ messages in thread
From: lihuisong (C) @ 2021-10-09 11:09 UTC (permalink / raw)
To: Ananyev, Konstantin, Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun,
Ajit Khaparde, Somnath Kotur, Igor Russkikh,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Wang, Haiyue,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, Daley, John, Hyong Youb Kim,
Gaetan Rivet, Zhang, Qi Z, Wang, Xiao W, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Xing, Beilei,
Wu, Jingjing, Yang, Qiming, Andrew Boyer, Xu, Rosen, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia,
Chenbo, Yong Wang, Nicolau, Radu, Akhil Goyal, Hunt, David,
Mcnamara, John, Thomas Monjalon
Cc: dev, Yigit, Ferruh
On 2021/10/9 1:11, Ananyev, Konstantin wrote:
>
>> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>>
>> Instead of drivers announce this capability, application can deduct the
>> capability by checking reported 'dev_info.max_mtu' or
>> 'dev_info.max_rx_pktlen'.
>>
>> And instead of application setting this flag explicitly to enable jumbo
>> frames, this can be deduced by driver by comparing requested 'mtu' to
>> 'RTE_ETHER_MTU'.
>>
>> Removing this additional configuration for simplification.
>>
>> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
>> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
>> ---
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
+1
Acked-by: Huisong Li <lihuisong@huawei.com>
>
>> 2.31.1
> .
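To make the quoted change concrete: with the JUMBO_FRAME flag gone, an application that wants a jumbo-sized MTU checks the limits reported by the driver instead of probing for an offload capability. A minimal sketch; the helper name and return convention are illustrative:

#include <rte_ethdev.h>

/* Illustrative check of whether a requested MTU fits the device limits. */
static int
mtu_is_supported(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	return mtu >= dev_info.min_mtu && mtu <= dev_info.max_mtu;
}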
* Re: [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-08 16:51 ` Ananyev, Konstantin
@ 2021-10-09 11:43 ` lihuisong (C)
2021-10-11 20:15 ` Ferruh Yigit
1 sibling, 1 reply; 112+ messages in thread
From: lihuisong (C) @ 2021-10-09 11:43 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Thomas Monjalon, Andrew Rybchenko
Hi, Ferruh
On 2021/10/8 0:56, Ferruh Yigit wrote:
> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
> have slightly different checks. Like one checks min MTU against
> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>
> Checks moved into common function to unify the checks. Also this has
> benefit to have common error logs.
>
> Suggested-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
> lib/ethdev/rte_ethdev.h | 2 +-
> 2 files changed, 54 insertions(+), 30 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index c2b624aba1a0..0a6e952722ae 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> return overhead_len;
> }
>
> +/* rte_eth_dev_info_get() should be called prior to this function */
> +static int
> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
> + uint16_t mtu)
> +{
> + uint16_t overhead_len;
> + uint32_t frame_size;
> +
> + if (mtu < dev_info->min_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) < device min MTU (%u) for port_id %u\n",
> + mtu, dev_info->min_mtu, port_id);
> + return -EINVAL;
> + }
> + if (mtu > dev_info->max_mtu) {
> + RTE_ETHDEV_LOG(ERR,
> + "MTU (%u) > device max MTU (%u) for port_id %u\n",
> + mtu, dev_info->max_mtu, port_id);
> + return -EINVAL;
> + }
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + frame_size = mtu + overhead_len;
> + if (frame_size < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) < min frame size (%u) for port_id %u\n",
> + frame_size, RTE_ETHER_MIN_LEN, port_id);
> + return -EINVAL;
> + }
> +
> + if (frame_size > dev_info->max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Frame size (%u) > device max frame size (%u) for port_id %u\n",
> + frame_size, dev_info->max_rx_pktlen, port_id);
> + return -EINVAL;
> + }
This function is used to verify the MTU, so the "frame_size" check is redundant:
as modified by this patch, dev_info->min_mtu is already calculated based on
RTE_ETHER_MIN_LEN.
> +
> + return 0;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
> }
>
> - /*
> - * Check that the maximum RX packet length is supported by the
> - * configured device.
> - */
> if (dev_conf->rxmode.mtu == 0)
> dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> - max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> - if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> - port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> - port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> +
> + ret = eth_dev_validate_mtu(port_id, &dev_info,
> + dev->data->dev_conf.rxmode.mtu);
> + if (ret != 0)
> goto rollback;
> - }
>
> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>
> @@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> @@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> dev_info->rx_desc_lim = lim;
> dev_info->tx_desc_lim = lim;
> dev_info->device = dev->device;
> - dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> + dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
I suggest that the adjustment to the minimum MTU size also be explicitly
reflected in the commit log.
> dev_info->max_mtu = UINT16_MAX;
>
> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> @@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
> * which relies on dev->dev_ops->dev_infos_get.
> */
> if (*dev->dev_ops->dev_infos_get != NULL) {
> - uint16_t overhead_len;
> - uint32_t frame_size;
> -
> ret = rte_eth_dev_info_get(port_id, &dev_info);
> if (ret != 0)
> return ret;
>
> - if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
> - return -EINVAL;
> -
> - overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> - dev_info.max_mtu);
> - frame_size = mtu + overhead_len;
> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
> - return -EINVAL;
> + ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
> + if (ret != 0)
> + return ret;
> }
>
> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 4d0f956a4b28..50e124ff631f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> * };
> *
> * device = dev->device
> - * min_mtu = RTE_ETHER_MIN_MTU
> + * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
> * max_mtu = UINT16_MAX
> *
> * The following fields will be populated if support for dev_infos_get()
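For reference, with the standard rte_ether.h constants the new default minimum MTU discussed above works out to RTE_ETHER_MIN_LEN (64) - RTE_ETHER_HDR_LEN (14) - RTE_ETHER_CRC_LEN (4) = 46 bytes, compared with the previous default of RTE_ETHER_MIN_MTU (68).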
* Re: [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 17:11 ` Ananyev, Konstantin
@ 2021-10-10 5:46 ` Matan Azrad
1 sibling, 0 replies; 112+ messages in thread
From: Matan Azrad @ 2021-10-10 5:46 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Slava Ovsiienko, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Harman Kalra, Nalla Pradeep,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Devendra Singh Rawat, Andrew Rybchenko, Maciej Czekaj, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, NBU-Contact-Thomas Monjalon
Cc: dev
From: Ferruh Yigit
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announce this capability, application can deduct the
> capability by checking reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of application setting this flag explicitly to enable jumbo
> frames, this can be deduced by driver by comparing requested 'mtu' to
> 'RTE_ETHER_MTU'.
>
> Removing this additional configuration for simplification.
>
> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>
For mlx4/5 PMDs.
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
` (6 preceding siblings ...)
2021-10-08 8:36 ` Xu, Rosen
@ 2021-10-10 6:30 ` Matan Azrad
2021-10-11 21:59 ` Ferruh Yigit
7 siblings, 1 reply; 112+ messages in thread
From: Matan Azrad @ 2021-10-10 6:30 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang,
Slava Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
NBU-Contact-Thomas Monjalon
Cc: dev
Hi Ferruh
From: Ferruh Yigit
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
Should it also be compared against 'max_lro_pkt_size' when the PMD decides whether to enable SCATTER?
What do you think about enabling SCATTER in the ethdev API instead of making the comparison in each PMD?
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
<snip>
Please see more below regarding SCATTER.
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 978cbb8201ea..4a5cfd22aa71 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> int ret;
> uint32_t crc_present;
> uint64_t offloads;
> + uint32_t max_rx_pktlen;
>
> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> };
> /* Enable scattered packets support for this queue if necessary. */
> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
> ;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> - uint32_t size =
> - RTE_PKTMBUF_HEADROOM +
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
> uint32_t sges_n;
>
> /*
> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> /* Make sure sges_n did not overflow. */
> size = mb_len * (1 << rxq->sges_n);
> size -= RTE_PKTMBUF_HEADROOM;
> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
> + if (size < max_rx_pktlen) {
> rte_errno = EOVERFLOW;
> ERROR("%p: too many SGEs (%u) needed to handle"
> " requested maximum packet size %u",
> (void *)dev,
> - 1 << sges_n,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + 1 << sges_n, max_rx_pktlen);
> goto error;
> }
> } else {
> WARN("%p: the requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - (void *)dev,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + (void *)dev, max_rx_pktlen,
> mb_len - RTE_PKTMBUF_HEADROOM);
> }
If, by definition, SCATTER should be enabled implicitly by the PMD according to the comparison you wrote above, maybe this check for SCATTER offload is not needed.
Also, the SCATTER offload documentation could state precisely which parameters are used for the comparison, and that the flag is a capability only which the application does not need to configure.
Also, for the multi Rx mempools configuration, the PMDs could implicitly understand that SCATTER is needed, with no extra check in the PMD/API.
What do you think?
> DEBUG("%p: maximum number of segments per packet: %u",
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index abd8ce798986..6f4f351222d3 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> uint64_t offloads = conf->offloads |
> dev->data->dev_conf.rxmode.offloads;
> unsigned int lro_on_queue = !!(offloads &
> DEV_RX_OFFLOAD_TCP_LRO);
> - unsigned int max_rx_pkt_len = lro_on_queue ?
> + unsigned int max_rx_pktlen = lro_on_queue ?
> dev->data->dev_conf.rxmode.max_lro_pkt_size :
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
> + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
> RTE_PKTMBUF_HEADROOM;
> unsigned int max_lro_size = 0;
> unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * needed to handle max size packets, replace zero length
> * with the buffer length from the pool.
> */
> - tail_len = max_rx_pkt_len;
> + tail_len = max_rx_pktlen;
> do {
> struct mlx5_eth_rxseg *hw_seg =
> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
> + tmpl->rxq.rxseg_n, max_rx_pktlen,
> MLX5_MAX_RXQ_NSEG);
> rte_errno = ENOTSUP;
> goto error;
> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
> " configured and no enough mbuf space(%u) to contain "
> "the maximum RX packet length(%u) with head-room(%u)",
> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
> RTE_PKTMBUF_HEADROOM);
> rte_errno = ENOSPC;
> goto error;
The same applies to the SCATTER check here; in this case it is even an error.
> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * following conditions are met:
> * - MPRQ is enabled.
> * - The number of descs is more than the number of strides.
> - * - max_rx_pkt_len plus overhead is less than the max size
> + * - max_rx_pktlen plus overhead is less than the max size
> * of a stride or mprq_stride_size is specified by a user.
> * Need to make sure that there are enough strides to encap
> * the maximum packet size in case mprq_stride_size is set.
> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
> tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
> config->mprq.max_memcpy_len);
> - max_lro_size = RTE_MIN(max_rx_pkt_len,
> + max_lro_size = RTE_MIN(max_rx_pktlen,
> (1u << tmpl->rxq.strd_num_n) *
> (1u << tmpl->rxq.strd_sz_n));
> DRV_LOG(DEBUG,
> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> dev->data->port_id, idx,
> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
> } else if (tmpl->rxq.rxseg_n == 1) {
> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
> tmpl->rxq.sges_n = 0;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> unsigned int sges_n;
>
> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> "port %u too many SGEs (%u) needed to handle"
> " requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - 1 << sges_n, max_rx_pkt_len,
> + 1 << sges_n, max_rx_pktlen,
> 1u << MLX5_MAX_LOG_RQ_SEGS);
> rte_errno = ENOTSUP;
> goto error;
> }
> tmpl->rxq.sges_n = sges_n;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> }
> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
> DRV_LOG(WARNING,
<snip>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length
2021-10-08 15:57 ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
@ 2021-10-11 19:47 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 19:47 UTC (permalink / raw)
To: Ananyev, Konstantin, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Iremonger, Bernard, Kiran Kumar K,
Nithin Dabilpuram, Hunt, David, Mcnamara, John, Richardson,
Bruce, Igor Russkikh, Steven Webster, Peters, Matt,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Wang, Haiyue, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Daley, John,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Andrew Boyer, Xu, Rosen, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko, Wiles,
Keith, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia, Chenbo,
Chautru, Nicolas, Van Haaren, Harry, Dumitrescu, Cristian,
Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz, Doherty, Declan,
Pavan Nikhilesh, Rybalchenko, Kirill, Singh, Jasvinder,
Thomas Monjalon
Cc: dev
On 10/8/2021 4:57 PM, Ananyev, Konstantin wrote:
>
>
>> There is a confusion on setting max Rx packet length, this patch aims to
>> clarify it.
>>
>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
>> rte_eth_conf'.
>>
>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>> stored into '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related but they work in a disconnected way, they
>> store the set values in different variables which makes hard to figure
>> out which one to use, also having two different method for a related
>> functionality is confusing for the users.
>>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, and this overhead may be different from
>> device to device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>>
>> As solution, both APIs gets MTU as parameter, and both saves the result
>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>> from jumbo frame.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>> request and it should be used only within configure function and result
>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>> both application and PMD uses MTU from this variable.
>>
>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>> default 'RTE_ETHER_MTU' value is used.
>>
>> Additional clarification done on scattered Rx configuration, in
>> relation to MTU and Rx buffer size.
>> MTU is used to configure the device for physical Rx/Tx size limitation,
>> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
>> size as Rx buffer size.
>> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
>> or not. If scattered Rx is not supported by device, MTU bigger than Rx
>> buffer size should fail.
>
> LGTM in general, one question below.
>
> ...
>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index daf5ca924221..4d0584af52e3 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
>> return ret;
>> }
>>
>> +static uint16_t
>> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
>> +{
>> + uint16_t overhead_len;
>> +
>> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
>> + overhead_len = max_rx_pktlen - max_mtu;
>
> In theory there could be an overflow here, though I do realize that in practice it is an unlikely situation.
> Anyway, why uint16_t, why not uint32_t for all variables here?
> Just not to worry about such things.
>
That was based on the practically expected values, but it works
fine to use 'uint32_t', so I will switch to it in the next version.
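For reference, with that change the helper would look roughly as below, i.e. the same logic as the hunk quoted above but with 32-bit arithmetic throughout (a sketch of the planned direction, not the final code):

    static uint32_t
    eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint32_t max_mtu)
    {
        uint32_t overhead_len;

        /* Prefer the device reported difference, fall back to the defaults. */
        if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
            overhead_len = max_rx_pktlen - max_mtu;
        else
            overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

        return overhead_len;
    }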
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-08 16:51 ` Ananyev, Konstantin
@ 2021-10-11 19:50 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 19:50 UTC (permalink / raw)
To: Ananyev, Konstantin, Thomas Monjalon, Andrew Rybchenko; +Cc: dev, Huisong Li
On 10/8/2021 5:51 PM, Ananyev, Konstantin wrote:
>
>
>> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
>> have slightly different checks. Like one checks min MTU against
>> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>>
>> Checks moved into common function to unify the checks. Also this has
>> benefit to have common error logs.
>>
>> Suggested-by: Huisong Li <lihuisong@huawei.com>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
>> lib/ethdev/rte_ethdev.h | 2 +-
>> 2 files changed, 54 insertions(+), 30 deletions(-)
>>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c2b624aba1a0..0a6e952722ae 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
>> return overhead_len;
>> }
>>
>> +/* rte_eth_dev_info_get() should be called prior to this function */
>> +static int
>> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
>> + uint16_t mtu)
>> +{
>> + uint16_t overhead_len;
>
> Again, I would just always use 32-bit arithmetic - safe and easy.
ack
> Apart from that:
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
<...>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-09 11:43 ` lihuisong (C)
@ 2021-10-11 20:15 ` Ferruh Yigit
2021-10-12 4:02 ` lihuisong (C)
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 20:15 UTC (permalink / raw)
To: lihuisong (C); +Cc: dev, Thomas Monjalon, Andrew Rybchenko
On 10/9/2021 12:43 PM, lihuisong (C) wrote:
> Hi, Ferruh
>
> On 2021/10/8 0:56, Ferruh Yigit wrote:
>> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
>> have slightly different checks. Like one checks min MTU against
>> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>>
>> Checks moved into common function to unify the checks. Also this has
>> benefit to have common error logs.
>>
>> Suggested-by: Huisong Li <lihuisong@huawei.com>
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> ---
>> lib/ethdev/rte_ethdev.c | 82 ++++++++++++++++++++++++++---------------
>> lib/ethdev/rte_ethdev.h | 2 +-
>> 2 files changed, 54 insertions(+), 30 deletions(-)
>>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index c2b624aba1a0..0a6e952722ae 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
>> return overhead_len;
>> }
>> +/* rte_eth_dev_info_get() should be called prior to this function */
>> +static int
>> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
>> + uint16_t mtu)
>> +{
>> + uint16_t overhead_len;
>> + uint32_t frame_size;
>> +
>> + if (mtu < dev_info->min_mtu) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "MTU (%u) < device min MTU (%u) for port_id %u\n",
>> + mtu, dev_info->min_mtu, port_id);
>> + return -EINVAL;
>> + }
>> + if (mtu > dev_info->max_mtu) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "MTU (%u) > device max MTU (%u) for port_id %u\n",
>> + mtu, dev_info->max_mtu, port_id);
>> + return -EINVAL;
>> + }
>> +
>> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
>> + dev_info->max_mtu);
>> + frame_size = mtu + overhead_len;
>> + if (frame_size < RTE_ETHER_MIN_LEN) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Frame size (%u) < min frame size (%u) for port_id %u\n",
>> + frame_size, RTE_ETHER_MIN_LEN, port_id);
>> + return -EINVAL;
>> + }
>> +
>> + if (frame_size > dev_info->max_rx_pktlen) {
>> + RTE_ETHDEV_LOG(ERR,
>> + "Frame size (%u) > device max frame size (%u) for port_id %u\n",
>> + frame_size, dev_info->max_rx_pktlen, port_id);
>> + return -EINVAL;
>> + }
>
> This function is used to verify the MTU. So "frame_size" is redundant.
>
Yes, it is redundant for the drivers that announce both 'max_rx_pktlen' & 'max_mtu',
but still some drivers don't announce the 'max_mtu' value and the default value
'UINT16_MAX' is set by ethdev, especially the virtual drivers.
That is why I kept both, to be on the safe side.
> As modified by this patch, dev_info->min_mtu is calculated based on RTE_ETHER_MIN_LEN.
>
And for the min check: with the default 'min_mtu' the frame size check is redundant,
but when a driver sets "min_mtu < (RTE_ETHER_MIN_LEN - overhead_len)" the second check
becomes a different limit. I don't know if this happens at all in practice, but I think it
doesn't hurt to have both checks, to be on the safe side.
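To make that concrete with made-up numbers: with an overhead_len of 18 bytes (Ethernet header plus CRC), a driver reporting min_mtu = 42 would accept mtu = 42 in the first check, but frame_size = 42 + 18 = 60 is below RTE_ETHER_MIN_LEN (64) and is caught by the second check; with the new default min_mtu of 64 - 18 = 46 the frame size minimum can never trigger.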
>> +
>> + return 0;
>> +}
>> +
>> int
>> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> const struct rte_eth_conf *dev_conf)
>> @@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> goto rollback;
>> }
>> - /*
>> - * Check that the maximum RX packet length is supported by the
>> - * configured device.
>> - */
>> if (dev_conf->rxmode.mtu == 0)
>> dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>> - max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>> - if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
>> - port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>> - ret = -EINVAL;
>> - goto rollback;
>> - } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>> - RTE_ETHDEV_LOG(ERR,
>> - "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
>> - port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>> - ret = -EINVAL;
>> +
>> + ret = eth_dev_validate_mtu(port_id, &dev_info,
>> + dev->data->dev_conf.rxmode.mtu);
>> + if (ret != 0)
>> goto rollback;
>> - }
>> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>> @@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
>> * size is supported by the configured device.
>> */
>> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
>> + dev_info.max_mtu);
>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>> if (dev_conf->rxmode.max_lro_pkt_size == 0)
>> dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
>> ret = eth_dev_check_lro_pkt_size(port_id,
>> @@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
>> dev_info->rx_desc_lim = lim;
>> dev_info->tx_desc_lim = lim;
>> dev_info->device = dev->device;
>> - dev_info->min_mtu = RTE_ETHER_MIN_MTU;
>> + dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
>> + RTE_ETHER_CRC_LEN;
> I suggest that the adjustment to the minimum mtu size is also explicitly reflected in the commit log.
ack, I will
>> dev_info->max_mtu = UINT16_MAX;
>> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
>> @@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
>> * which relies on dev->dev_ops->dev_infos_get.
>> */
>> if (*dev->dev_ops->dev_infos_get != NULL) {
>> - uint16_t overhead_len;
>> - uint32_t frame_size;
>> -
>> ret = rte_eth_dev_info_get(port_id, &dev_info);
>> if (ret != 0)
>> return ret;
>> - if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>> - return -EINVAL;
>> -
>> - overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
>> - dev_info.max_mtu);
>> - frame_size = mtu + overhead_len;
>> - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
>> - return -EINVAL;
>> + ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
>> + if (ret != 0)
>> + return ret;
>> }
>> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 4d0f956a4b28..50e124ff631f 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
>> * };
>> *
>> * device = dev->device
>> - * min_mtu = RTE_ETHER_MIN_MTU
>> + * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
>> * max_mtu = UINT16_MAX
>> *
>> * The following fields will be populated if support for dev_infos_get()
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-10 6:30 ` Matan Azrad
@ 2021-10-11 21:59 ` Ferruh Yigit
2021-10-12 7:03 ` Matan Azrad
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 21:59 UTC (permalink / raw)
To: Matan Azrad, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang,
Slava Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
NBU-Contact-Thomas Monjalon, Dekel Peled
Cc: dev
On 10/10/2021 7:30 AM, Matan Azrad wrote:
>
> Hi Ferruh
>
> From: Ferruh Yigit
>> There is a confusion on setting max Rx packet length, this patch aims to
>> clarify it.
>>
>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
>> rte_eth_conf'.
>>
>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>> stored into '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related but they work in a disconnected way, they
>> store the set values in different variables which makes hard to figure
>> out which one to use, also having two different method for a related
>> functionality is confusing for the users.
>>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, and this overhead may be different from
>> device to device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>>
>> As solution, both APIs gets MTU as parameter, and both saves the result
>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>> from jumbo frame.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>> request and it should be used only within configure function and result
>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>> both application and PMD uses MTU from this variable.
>>
>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>> default 'RTE_ETHER_MTU' value is used.
>>
>> Additional clarification done on scattered Rx configuration, in
>> relation to MTU and Rx buffer size.
>> MTU is used to configure the device for physical Rx/Tx size limitation,
>> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
>> size as Rx buffer size.
>> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
>> or not. If scattered Rx is not supported by device, MTU bigger than Rx
>> buffer size should fail.
>
> Should it also be compared against 'max_lro_pkt_size' when the PMD decides whether to enable SCATTER?
>
I kept the LRO related code the same; the Rx packet length change patch has already become
complex, and LRO related changes can be done later instead of making this set more confusing.
It would be great if you and Dekel could work on it, as you introduced 'max_lro_pkt_size' in ethdev.
> What do you think about enabling SCATTER in the ethdev API instead of making the comparison in each PMD?
>
Not sure if we can do that; as far as I can see there is no enforcement of the
Rx buffer size, the PMDs select it themselves.
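(For context, the buffer size a PMD compares against is typically derived per queue from the mempool given to Rx queue setup, roughly as in the sketch below; 'mp' stands for the mempool argument and is an assumed name.)

    /* Sketch: how most PMDs derive the Rx buffer size from the mempool. */
    uint16_t mb_len = rte_pktmbuf_data_room_size(mp);
    uint32_t rx_buf_size = mb_len - RTE_PKTMBUF_HEADROOM;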
>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> <snip>
>
> Please see more below regarding SCATTER.
>
>> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
>> index 978cbb8201ea..4a5cfd22aa71 100644
>> --- a/drivers/net/mlx4/mlx4_rxq.c
>> +++ b/drivers/net/mlx4/mlx4_rxq.c
>> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
>> uint16_t idx, uint16_t desc,
>> int ret;
>> uint32_t crc_present;
>> uint64_t offloads;
>> + uint32_t max_rx_pktlen;
>>
>> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>>
>> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
>> uint16_t idx, uint16_t desc,
>> };
>> /* Enable scattered packets support for this queue if necessary. */
>> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
>> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
>> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
>> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN +
>> RTE_ETHER_CRC_LEN;
>> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
>> ;
>> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
>> - uint32_t size =
>> - RTE_PKTMBUF_HEADROOM +
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
>> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
>> uint32_t sges_n;
>>
>> /*
>> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
>> uint16_t idx, uint16_t desc,
>> /* Make sure sges_n did not overflow. */
>> size = mb_len * (1 << rxq->sges_n);
>> size -= RTE_PKTMBUF_HEADROOM;
>> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
>> + if (size < max_rx_pktlen) {
>> rte_errno = EOVERFLOW;
>> ERROR("%p: too many SGEs (%u) needed to handle"
>> " requested maximum packet size %u",
>> (void *)dev,
>> - 1 << sges_n,
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
>> + 1 << sges_n, max_rx_pktlen);
>> goto error;
>> }
>> } else {
>> WARN("%p: the requested maximum Rx packet size (%u) is"
>> " larger than a single mbuf (%u) and scattered"
>> " mode has not been requested",
>> - (void *)dev,
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
>> + (void *)dev, max_rx_pktlen,
>> mb_len - RTE_PKTMBUF_HEADROOM);
>> }
>
> If, by definition, SCATTER should be enabled implicitly by the PMD according to the comparison you wrote above, maybe this check for SCATTER offload is not needed.
>
This behavior is not documented and not clear; some PMDs enable scattered Rx
implicitly, some don't.
It looks like we need a clarification patch for scattered Rx too.
For this patch I added scatter related info to the commit log to clarify the
reasoning of the change. PMD behavior is not changed.
> Also, the SCATTER offload documentation could state precisely which parameters are used for the comparison, and that the flag is a capability only which the application does not need to configure.
>
We have the same question for a few other offloads: should we take the user
configuration strictly and fail, or should we adjust the config to the requested values?
For example, if the PMD supports scattered Rx and the requested MTU is bigger than the Rx buffer size,
should the PMD enable scattered Rx itself or fail? We should first clarify this
and later fix the documentation and drivers in a separate patch.
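In pseudo-C, the open question is roughly between the two options below; all names ('mb_len', 'offloads', 'dev_info') are assumed, this is not any particular driver's code.

    uint32_t max_frame = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
    uint32_t rx_buf_size = mb_len - RTE_PKTMBUF_HEADROOM;

    if (max_frame > rx_buf_size && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
        /* Option A: adjust, the PMD enables scattered Rx implicitly if it can. */
        if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER)
            offloads |= DEV_RX_OFFLOAD_SCATTER;
        else
            return -ENOTSUP;
        /* Option B: take the config strictly and return -EINVAL here instead,
         * requiring the application to request DEV_RX_OFFLOAD_SCATTER itself. */
    }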
> Also, for the multi Rx mempools configuration, the PMDs could implicitly understand that SCATTER is needed, with no extra check in the PMD/API.
>
Yes, multi Rx mempools is something else to take into account for the scattered
Rx config.
> What do you think?
>
>> DEBUG("%p: maximum number of segments per packet: %u",
>> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
>> index abd8ce798986..6f4f351222d3 100644
>> --- a/drivers/net/mlx5/mlx5_rxq.c
>> +++ b/drivers/net/mlx5/mlx5_rxq.c
>> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
>> uint16_t idx, uint16_t desc,
>> uint64_t offloads = conf->offloads |
>> dev->data->dev_conf.rxmode.offloads;
>> unsigned int lro_on_queue = !!(offloads &
>> DEV_RX_OFFLOAD_TCP_LRO);
>> - unsigned int max_rx_pkt_len = lro_on_queue ?
>> + unsigned int max_rx_pktlen = lro_on_queue ?
>> dev->data->dev_conf.rxmode.max_lro_pkt_size :
>> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
>> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
>> + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
>> + RTE_ETHER_CRC_LEN;
>> + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
>> RTE_PKTMBUF_HEADROOM;
>> unsigned int max_lro_size = 0;
>> unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
>> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> * needed to handle max size packets, replace zero length
>> * with the buffer length from the pool.
>> */
>> - tail_len = max_rx_pkt_len;
>> + tail_len = max_rx_pktlen;
>> do {
>> struct mlx5_eth_rxseg *hw_seg =
>> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
>> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> "port %u too many SGEs (%u) needed to handle"
>> " requested maximum packet size %u, the maximum"
>> " supported are %u", dev->data->port_id,
>> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
>> + tmpl->rxq.rxseg_n, max_rx_pktlen,
>> MLX5_MAX_RXQ_NSEG);
>> rte_errno = ENOTSUP;
>> goto error;
>> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
>> " configured and no enough mbuf space(%u) to contain "
>> "the maximum RX packet length(%u) with head-room(%u)",
>> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
>> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
>> RTE_PKTMBUF_HEADROOM);
>> rte_errno = ENOSPC;
>> goto error;
>
> The same applies to the SCATTER check here; in this case it is even an error.
>
>> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> * following conditions are met:
>> * - MPRQ is enabled.
>> * - The number of descs is more than the number of strides.
>> - * - max_rx_pkt_len plus overhead is less than the max size
>> + * - max_rx_pktlen plus overhead is less than the max size
>> * of a stride or mprq_stride_size is specified by a user.
>> * Need to make sure that there are enough strides to encap
>> * the maximum packet size in case mprq_stride_size is set.
>> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
>> tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
>> config->mprq.max_memcpy_len);
>> - max_lro_size = RTE_MIN(max_rx_pkt_len,
>> + max_lro_size = RTE_MIN(max_rx_pktlen,
>> (1u << tmpl->rxq.strd_num_n) *
>> (1u << tmpl->rxq.strd_sz_n));
>> DRV_LOG(DEBUG,
>> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
>> idx, uint16_t desc,
>> dev->data->port_id, idx,
>> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
>> } else if (tmpl->rxq.rxseg_n == 1) {
>> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
>> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
>> tmpl->rxq.sges_n = 0;
>> - max_lro_size = max_rx_pkt_len;
>> + max_lro_size = max_rx_pktlen;
>> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
>> unsigned int sges_n;
>>
>> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
>> uint16_t idx, uint16_t desc,
>> "port %u too many SGEs (%u) needed to handle"
>> " requested maximum packet size %u, the maximum"
>> " supported are %u", dev->data->port_id,
>> - 1 << sges_n, max_rx_pkt_len,
>> + 1 << sges_n, max_rx_pktlen,
>> 1u << MLX5_MAX_LOG_RQ_SEGS);
>> rte_errno = ENOTSUP;
>> goto error;
>> }
>> tmpl->rxq.sges_n = sges_n;
>> - max_lro_size = max_rx_pkt_len;
>> + max_lro_size = max_rx_pktlen;
>> }
>> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
>> DRV_LOG(WARNING,
>
> <snip>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (7 preceding siblings ...)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (9 more replies)
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
9 siblings, 10 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: Ferruh Yigit, dev, Huisong Li
There is a confusion on setting max Rx packet length, this patch aims to
clarify it.
'rte_eth_dev_configure()' API accepts max Rx packet size via
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
stored into '(struct rte_eth_dev)->data->mtu'.
These two APIs are related but they work in a disconnected way, they
store the set values in different variables which makes hard to figure
out which one to use, also having two different method for a related
functionality is confusing for the users.
Other issues causing confusion is:
* maximum transmission unit (MTU) is payload of the Ethernet frame. And
'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
Ethernet frame overhead, and this overhead may be different from
device to device based on what device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when application requested jumbo frame,
which adds additional confusion and some APIs and PMDs already
discards this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
field, this adds configuration complexity for application.
As solution, both APIs gets MTU as parameter, and both saves the result
in same variable '(struct rte_eth_dev)->data->mtu'. For this
'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
from jumbo frame.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
request and it should be used only within configure function and result
should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
both application and PMD uses MTU from this variable.
When application doesn't provide an MTU during 'rte_eth_dev_configure()'
default 'RTE_ETHER_MTU' value is used.
Additional clarification done on scattered Rx configuration, in
relation to MTU and Rx buffer size.
MTU is used to configure the device for physical Rx/Tx size limitation,
Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
size as Rx buffer size.
PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
or not. If scattered Rx is not supported by device, MTU bigger than Rx
buffer size should fail.
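(A minimal sketch of the resulting application-side usage; the port/queue variables and the 9000 byte value are assumptions for illustration, not taken from this patch.)

    struct rte_eth_conf port_conf = { .rxmode = { .mtu = 9000 } };

    /* Leaving 'rxmode.mtu' as 0 means the RTE_ETHER_MTU default is used. */
    ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    if (ret != 0)
        return ret;

    /* The MTU can still be changed later through the other API. */
    ret = rte_eth_dev_set_mtu(port_id, RTE_ETHER_MTU);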
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
Cc: Min Hu (Connor) <humin29@huawei.com>
v2:
* Converted to explicit checks for zero/non-zero
* fixed hns3 checks
* fixed some sample app rxmode.mtu value
* fixed some sample app max-pkt-len argument and updated doc for it
v3:
* rebased
v4:
* fix typos in commit logs
v5:
* fix testpmd '--max-pkt-len=###' parameter for DTS jumbo frame test
v6:
* uint32_t type used in 'eth_dev_get_overhead_len()' helper function
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 49 +++----
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 2 +-
app/test-pmd/testpmd.c | 113 +++++++++------
app/test-pmd/testpmd.h | 4 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 7 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 7 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 35 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_common.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 9 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 12 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 12 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 94 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
121 files changed, 814 insertions(+), 1074 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 826256b0b346..8d07cd4eb61d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1874,45 +1874,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
fprintf(stderr, "Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- fprintf(stderr,
- "max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- fprintf(stderr,
- "rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
-
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ fprintf(stderr,
+ "max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- fprintf(stderr, "Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr,
+ "rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9c66329e96ee..db3eeffa0093 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3f94a82e321f..dec5373b346d 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -870,7 +870,7 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ max_rx_pkt_len = n;
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 97ae52e17ecd..606c3b7e702b 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -214,6 +214,11 @@ uint16_t stats_period; /**< Period to show statistics (disabled by default) */
*/
uint8_t f_quit;
+/*
+ * Max Rx frame size, set by '--max-pkt-len' parameter.
+ */
+uint32_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -446,13 +451,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1481,11 +1480,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1496,7 +1508,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
fprintf(stderr,
"Updating jumbo frame offload failed for port %u\n",
@@ -1516,6 +1528,10 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (eth_link_speed)
port->dev_conf.link_speeds = eth_link_speed;
+ if (max_rx_pkt_len)
+ port->dev_conf.rxmode.mtu = max_rx_pkt_len -
+ get_eth_overhead(&port->dev_info);
+
/* set flag to initialize port/queue */
port->need_reconfig = 1;
port->need_reconfig_queues = 1;
@@ -1528,14 +1544,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -2552,6 +2574,7 @@ start_port(portid_t pid)
pi);
return -1;
}
+
/* configure port */
diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
nb_txq + nb_hairpinq,
@@ -3451,44 +3474,45 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
+
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3509,19 +3533,18 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = eth_dev_set_mtu_mp(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- fprintf(stderr,
- "Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
+ fprintf(stderr,
+ "Failed to set MTU to %u for port %u\n",
+ new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5863b2f43f3e..e3f022343af2 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -448,6 +448,8 @@ extern uint8_t bitrate_enabled;
extern struct rte_fdir_conf fdir_conf;
+extern uint32_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -1022,7 +1024,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index f120b2e3be24..189d2430f27e 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 7355ec305916..9dad612058c6 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index df23a5704dca..831bc564883a 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -545,7 +545,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index e346018e4b80..f5a8fdd41398 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
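As a hedged illustration of the documented usage (the 9000-byte MTU and single queue pair are made-up values), an application asking for jumbo frames under the reworked API now sets the payload size in 'rxmode.mtu' rather than a frame length:

/* Hedged sketch: request jumbo frames via rxmode.mtu. */
#include <rte_ethdev.h>

static int
configure_jumbo(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	conf.rxmode.mtu = 9000;			/* payload, not frame size */
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;

	/* 1 Rx queue and 1 Tx queue for brevity */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}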
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..1f5619ed53fc 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -606,9 +606,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 309f1056cfba..69dbb87bc5ee 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,31 +81,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
will be removed in 21.11.
Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 812aaa87b05b..6c4c04e935e4 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -162,12 +162,7 @@ Forwarding application is shown below:
:end-before: >8 End of initializing a given port.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
- :language: c
- :start-after: Ethernet ports configured with default settings using struct. 8<
- :end-before: >8 End of configuration of Ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 2d5cd5f1c0ba..56af5cd5b383 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 03e9a85aa68c..0a3e0d44ecea 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 0495314c87d5..8817eaadbfc3 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 9b09838f6448..7d1bf6eaae8c 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index f7bcd7ed2a1d..6d0de6440105 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -106,12 +106,7 @@ Forwarding application is shown below:
:end-before: >8 End of main functional part of port initialization.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. literalinclude:: ../../../examples/skeleton/basicfwd.c
- :language: c
- :start-after: Configuration of ethernet ports. 8<
- :end-before: >8 End of configuration of ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6cb8bb4338de..932ec90265cf 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not send
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index ebd5411fddf3..76cd892eec7b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..009a94e9a8fa 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85fa..8c6f20b75aed 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
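The bnxt changes above illustrate the pattern most PMDs in this patch converge on: derive the device L2 overhead once, then compare MTU plus overhead against the mbuf data room to decide whether scattered Rx is needed. A hedged standalone sketch, assuming a simple header-plus-CRC overhead (real drivers derive their own constant):

/* Hedged sketch of the scattered-Rx decision. */
#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define EXAMPLE_L2_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)

static bool
needs_scattered_rx(const struct rte_eth_dev *dev, struct rte_mempool *mp)
{
	uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
			RTE_PKTMBUF_HEADROOM;

	/* Scatter is required when one mbuf cannot hold a full frame */
	return dev->data->mtu + EXAMPLE_L2_OVERHEAD > buf_size;
}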
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 542c6633b53d..2ee1cf938880 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 966bd23c7f98..c94fc505fef1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -209,7 +209,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -220,18 +220,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index cd9aa9f84b63..458111ae5b16 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -682,7 +680,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 59f4a93b3ed4..0c2b3fbf552f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ff8ae89922c7..ef709bba4793 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,25 +560,19 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- DPAA2_PMD_INFO("MTU configured for the device: %d",
- dev->data->mtu);
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1471,15 +1466,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6f418a36aa04 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 6510cd7cebd0..867e5008ac20 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 8d64d7397a4b..6e1315f37d92 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2329,6 +2329,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2347,9 +2348,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2427,8 +2427,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2652,15 +2651,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2717,8 +2716,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b628736..e2f7213acb84 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -677,26 +677,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR,
"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -869,10 +857,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -1943,7 +1931,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
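Since ena (like other drivers) now reports 'min_mtu'/'max_mtu' in device info, an application can range-check a requested MTU before applying it; a hedged sketch with abbreviated error handling:

/* Hedged sketch: validate an MTU against driver-reported limits
 * before calling rte_eth_dev_set_mtu().
 */
#include <errno.h>
#include <rte_ethdev.h>

static int
apply_mtu_checked(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (mtu < info.min_mtu || mtu > info.max_mtu)
		return -EINVAL;	/* outside the supported range */
	return rte_eth_dev_set_mtu(port_id, mtu);
}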
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 246aff467248..1e27ed298354 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -681,7 +681,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -691,8 +691,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -709,23 +707,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index b03e56bc2500..9afb37751fac 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -459,7 +459,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly support MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7075d69022c4..0e25dfdd4ce3 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index cd4dad8588f3..aef8adc2e1e0 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1534,7 +1534,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1552,16 +1551,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index cabf73ffbc7c..81cad6e4305f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2366,41 +2366,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2515,8 +2480,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2611,7 +2576,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2632,7 +2597,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..0b5db486f8d6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6b77672aa1b4..74263e33d39a 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1747,18 +1747,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1970,7 +1970,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1fc3d897a804..cfefef4746e9 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11423,14 +11423,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 807e1a4133d3..2dc6b3720f02 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2925,8 +2925,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e152..0eabce275d92 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
max_pkt_len = RTE_MIN((uint32_t)
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
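The i40e and iavf hunks above (and the ice ones below) share the same clamp: the effective Rx max packet length is the smaller of what the buffer chain can hold and the configured MTU plus L2 overhead. A hedged, driver-agnostic sketch; the chain length and overhead parameters are illustrative placeholders, not driver constants:

/* Hedged sketch of the Rx frame-length clamp. */
#include <rte_ethdev.h>

static uint32_t
rx_max_pkt_len(uint16_t rx_buf_len, uint8_t buf_chain_len, uint16_t mtu)
{
	uint32_t hw_limit = (uint32_t)buf_chain_len * rx_buf_len;
	uint32_t wanted = (uint32_t)mtu + RTE_ETHER_HDR_LEN +
			RTE_ETHER_CRC_LEN;

	return RTE_MIN(hw_limit, wanted);
}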
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f655874287..6e59f8c71c65 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 65e43a18f9f2..ebe9834e6097 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 7a2220daa448..b2c73eadcf6d 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 0e41c85d2963..1ebe7a02ce57 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2486,6 +2473,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2519,6 +2498,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering dual VLAN, so two tags are counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 7dee1bb0fa5f..45d76f3ec321 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 344c076f309a..d5d610c80bcd 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 67631a5813b7..4d16a39c6b6d 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -771,7 +771,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1014,7 +1014,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1128,7 +1128,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 694435a4ae24..0d1aaa6449b9 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2791,14 +2791,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a127dc0d869a..0ceb42a5cd9e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5167,7 +5167,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5181,9 +5180,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5192,23 +5191,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6080,12 +6074,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6357,8 +6349,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6366,7 +6357,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6383,8 +6374,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index a1529b4d5659..4ceb5bf322d8 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -573,8 +573,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -584,8 +583,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 349180e7c17f..d31cf9e0a7c9 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5065,6 +5065,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5100,7 +5101,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5174,8 +5175,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5655,6 +5655,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5691,10 +5692,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5753,8 +5753,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index dbdab188e962..f235653718fc 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1402,8 +1398,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1446,15 +1440,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 2b75c07fad75..1801d87334a1 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -829,13 +830,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->rx_queues[idx] = rxq;
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -847,21 +846,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b68443bed509..0655965c0fb9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1327,10 +1327,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1369,7 +1370,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1407,7 +1408,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1432,7 +1433,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1451,7 +1452,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1475,7 +1476,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1484,9 +1485,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1508,13 +1509,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index f51bc2258fe1..d6497c366696 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 2d61930382cb..9836bb071a82 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 65d011300a97..44761b695a8d 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 4395a09c597d..928b4983a07a 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 7c91494f0e28..10151a748d5d 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d576bc698926..2779741352ca 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -913,7 +913,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 552e6bd43d2b..cf7804157198 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 4c7f568bf42d..91fe39d3ce55 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -670,16 +670,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index fd8c62a1826b..a1cf913dc8ed 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba59..c2263787b4ec 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 9dc5e5b3a3d4..09146f741952 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1067,15 +1067,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..22f74735db08 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d03..e4f1ad45219e 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee6939..1000d9855ad3 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b267da462bcb..9427ac738f1b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 112567eecca4..16485b3c2d2d 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a88770..43dc0ed39b75 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index b6339fe50b44..3af1d19e3079 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 6aa36b3f3942..e1b9066dcf5d 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -924,7 +924,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
hw->max_rx_pkt_len = frame_size;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
return 0;
}
@@ -2108,14 +2107,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- else
- hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
+ hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM))
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index adbd40808396..68e3c13730ad 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 7adaa93cad5c..6352a715c0d9 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index d0f40a1fb4bc..8c4a8feec0c2 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 5ed0dc73ec60..e26be8edf28f 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ab8c6d6a0dad..476b147bdfcc 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 65c1d85cf2fb..8a43f6ac0f92 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,14 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-/* Ethernet ports configured with default settings using struct. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of Ethernet ports. */
-
/* Creation of flow classifier object. 8< */
struct flow_classifier {
struct rte_flow_classifier *cls;
@@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ff36aa7f1e7b..ccfee585f850 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index a7f40970f27f..754fee5a5780 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -918,9 +919,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -963,8 +964,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..9ba02e687adb 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index d611c7d01609..39e12fea47f4 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* mbufs stored int the gragment table. 8< */
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1054,9 +1056,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7b01872c6f9f..a5dfca5a9a4b 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index d10de30ddbae..e28035998e6c 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -715,9 +716,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 2a993a0ca460..62f6e42a9437 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 9b3e324efb23..d9cf00c9dfc7 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c2ffbdd50636..c646f1748ca7 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 19f32809aa9d..9040be5ed9b6 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 60545f305934..67e6356acff6 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1620,26 +1621,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1760,14 +1756,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1806,43 +1802,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2010,6 +1974,43 @@ set_default_dest_mac(void)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2083,6 +2084,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..46568eba9c01 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -722,6 +701,43 @@ graph_main_loop(void *conf)
}
/* >8 End of main processing loop. */
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -807,6 +823,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 73a3ab5bc0eb..03c0b8bb15b8 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
/* Power library initialized in the main routine. 8< */
int
main(int argc, char **argv)
@@ -2622,6 +2634,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..66d76e87cb25 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2905199743a7..2db1b5fc154f 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..4f20dfc4be06 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 61e4ee0ea140..0f86cee0569c 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..c32d2e12e633 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index ab6fa7d56c5d..6845c396b8d9 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae9bbee8d820..fd7207aee758 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,14 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-/* Configuration of ethernet ports. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of ethernet ports. */
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index b24fd82a6e71..427b882831bf 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -44,6 +44,7 @@
#define BURST_RX_RETRIES 4 /* Number of retries on RX. */
#define JUMBO_FRAME_MAX_SIZE 0x2600
+#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
/* State of virtio device. */
#define DEVICE_MAC_LEARNING 0
@@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu = MAX_MTU;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e59fb7d3478b..e19d79a40802 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b91..634e4d7d7fd6 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1338,6 +1338,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1345,7 +1358,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
- uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ uint32_t overhead_len;
int diag;
int ret;
uint16_t old_mtu;
@@ -1395,11 +1409,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1468,49 +1479,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2163,13 +2170,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint32_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 6d80514ba7a5..e73c7c522196 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (8 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: Ferruh Yigit, dev, Konstantin Ananyev, Huisong Li
Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload for it.
When jumbo frame offload is not enabled by the application but an MTU
bigger than RTE_ETHER_MTU is requested, there are two options: either
fail or enable jumbo frame offload implicitly.
Many drivers choose to enable jumbo frame offload implicitly, since
setting a big MTU value already implies it, and this increases
usability.
This patch moves that logic from the drivers to the library, both to
reduce duplicated code in the drivers and to make the behaviour more
visible.
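For illustration only (not part of the patch), the per-driver pattern
being removed boils down to the toggle below; this is a minimal sketch,
and the helper name 'sketch_update_jumbo_offload' is hypothetical and
does not exist in DPDK:

#include <rte_ethdev.h> /* struct rte_eth_dev, RTE_ETHER_MTU,
                         * DEV_RX_OFFLOAD_JUMBO_FRAME */

/*
 * Sketch of the jumbo frame check that many drivers duplicated in
 * their mtu_set callbacks and that this patch centralizes in ethdev:
 * an MTU above RTE_ETHER_MTU implies the jumbo frame Rx offload.
 */
static void
sketch_update_jumbo_offload(struct rte_eth_dev *dev, uint16_t mtu)
{
	if (mtu > RTE_ETHER_MTU)
		dev->data->dev_conf.rxmode.offloads |=
			DEV_RX_OFFLOAD_JUMBO_FRAME;
	else
		dev->data->dev_conf.rxmode.offloads &=
			~DEV_RX_OFFLOAD_JUMBO_FRAME;
}

With this series, the equivalent toggle is performed once in
lib/ethdev/rte_ethdev.c rather than in each driver, as the diff below
shows.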
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_common.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
27 files changed, 29 insertions(+), 166 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76cd892eec7b..2dc5fa245bd8 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8c6f20b75aed..07ee19938930 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3052,15 +3052,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 458111ae5b16..cdecf6b512ef 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0c2b3fbf552f..d0013e7f5b67 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ef709bba4793..4245db78cf12 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1466,13 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 867e5008ac20..36e71b5e7561 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4396,15 +4396,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 1e27ed298354..61bb009c403f 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -681,13 +681,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index aef8adc2e1e0..5d6700c18303 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1551,13 +1551,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 81cad6e4305f..5b3ac9d2fa3f 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2566,7 +2566,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2576,7 +2575,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2591,12 +2589,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 0b5db486f8d6..3438b3650de6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index cfefef4746e9..25e924bca2b0 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11423,11 +11423,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 0eabce275d92..844d26d87ba6 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index ebe9834e6097..001868f321c7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 1ebe7a02ce57..044200f58354 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 0d1aaa6449b9..6bf139c85dea 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2791,11 +2791,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 0ceb42a5cd9e..9acd4a43aad7 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5191,13 +5191,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index f235653718fc..3045508fa9b9 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 928b4983a07a..d7bd5883b107 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 10151a748d5d..5c6e5d201528 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index cf7804157198..293306c7be2a 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1cf913dc8ed..7b12794405a1 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 09146f741952..29236a3a5c0a 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1065,15 +1065,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1000d9855ad3..3ea3699784d2 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 9427ac738f1b..003ce0d196da 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3486,12 +3486,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 634e4d7d7fd6..37c7ea49bd63 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3619,6 +3619,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3637,12 +3638,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (7 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj, Jiawen Wu,
Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev, Konstantin Ananyev
Move the requested MTU value check to the API to avoid duplicating the
same code in every driver.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
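For reference, a minimal sketch of the kind of range check this patch centralizes in 'rte_eth_dev_set_mtu()'. The struct and helper names below are illustrative only (they mirror fields of 'struct rte_eth_dev_info'), not the exact library code; see the lib/ethdev hunk at the end of the diff for the real change.

#include <stdint.h>
#include <errno.h>

#define ETHER_MIN_MTU 68 /* same value as RTE_ETHER_MIN_MTU */

/* Illustrative device limits, mirroring struct rte_eth_dev_info fields. */
struct dev_limits {
	uint16_t min_mtu;
	uint16_t max_mtu;
	uint32_t max_rx_pktlen;
	uint16_t overhead_len; /* L2 header + CRC (+ VLAN tags where supported) */
};

/* Generic check now done once in the library instead of per driver. */
static int
validate_mtu(const struct dev_limits *lim, uint16_t mtu)
{
	uint32_t frame_size = (uint32_t)mtu + lim->overhead_len;

	if (mtu < lim->min_mtu || mtu > lim->max_mtu)
		return -EINVAL;
	if (mtu < ETHER_MIN_MTU || frame_size > lim->max_rx_pktlen)
		return -EINVAL;
	return 0;
}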
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_common.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
22 files changed, 25 insertions(+), 155 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2dc5fa245bd8..d302329525d0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 07ee19938930..dc33b961320a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3025,7 +3025,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index cdecf6b512ef..32a01009107d 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index d0013e7f5b67..0cef07e8b990 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 4245db78cf12..9aa74e7815ec 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1462,10 +1462,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1b41dd04df5a..6ebef55588bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 36e71b5e7561..e8e76f7379e8 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4363,9 +4363,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4374,15 +4372,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 61bb009c403f..57534be8e8b8 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -666,10 +666,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 5d6700c18303..9a974dff580e 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1534,17 +1534,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 25e924bca2b0..45edbf5a51fe 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11405,25 +11405,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 844d26d87ba6..2d43c666fdbb 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 001868f321c7..bdaca245f9ea 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 044200f58354..aea2a3a3f86c 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 6bf139c85dea..0438c3f08c24 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2768,12 +2768,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3045508fa9b9..bc423016cd60 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index d7bd5883b107..dc906872192f 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 5c6e5d201528..549b4b4ccf97 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 293306c7be2a..206da6f7cfda 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (dev->configured && otx2_ethdev_is_ptp_en(dev))
frame_size += NIX_TIMESYNC_RX_OFFSET;
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 7b12794405a1..663cb1460f4f 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 3ea3699784d2..501171475d70 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 003ce0d196da..41199dfb70aa 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3463,18 +3463,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 37c7ea49bd63..8683857cd1ac 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3632,6 +3632,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3639,6 +3642,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-12 17:20 ` Hyong Youb Kim (hyonkim)
2021-10-13 7:16 ` Michał Krawczyk
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
` (6 subsequent siblings)
9 siblings, 2 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev, Huisong Li
Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' against
'RTE_ETHER_MTU'.
This additional configuration is removed to simplify the API.
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
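As a rough illustration of the model described above (a sketch only, assuming the post-removal API; error handling trimmed), the application infers jumbo support from the reported device limits rather than from an offload capability flag:

#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/*
 * Application side: with DEV_RX_OFFLOAD_JUMBO_FRAME gone, jumbo support is
 * inferred from the reported limits; a device that accepts an MTU above
 * RTE_ETHER_MTU (1500) can receive jumbo frames.
 */
static bool
port_supports_jumbo(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false;
	return dev_info.max_mtu > RTE_ETHER_MTU;
}

/*
 * Driver side (not spelled out here): PMDs now test
 * 'dev->data->mtu > RTE_ETHER_MTU' instead of checking the removed
 * offload flag, as the driver hunks below do.
 */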
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 25 +---------
app/test-pmd/testpmd.c | 48 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 6 +--
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 1 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_common.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 5 +-
lib/ethdev/rte_ethdev.c | 26 +---------
lib/ethdev/rte_ethdev.h | 1 -
75 files changed, 48 insertions(+), 259 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8d07cd4eb61d..7400163c1857 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1905,7 +1905,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index db3eeffa0093..e890fadc716c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1144,40 +1144,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- fprintf(stderr,
- "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 606c3b7e702b..1f420baf3726 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1508,12 +1508,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- fprintf(stderr,
- "Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3473,24 +3467,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3499,40 +3487,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- fprintf(stderr,
- "Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index e3f022343af2..42c5d79208e1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1024,7 +1024,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..8f10c6c78a1f 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f5a8fdd41398..1b7121631c8e 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index d302329525d0..0250256830ac 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 009a94e9a8fa..50ff04bb2241 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5121d05da65f..6743cf92b0e6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -595,7 +595,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index dc33b961320a..e9d04f354a39 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -742,15 +742,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1250,7 +1245,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2ee1cf938880..ce382323f41a 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1727,14 +1727,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ff21b977b70d..2304af6ffa8b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -78,9 +78,9 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_SECURITY)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 32a01009107d..f77b2976002c 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -660,14 +660,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->rspq.size = temp_nb_desc;
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 0cef07e8b990..c4846ac51281 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 9aa74e7815ec..b8740375ed92 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 8e10e2777e64..e92612568c8b 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -468,8 +468,8 @@ void eth_em_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6ebef55588bc..8a752eef52cf 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 8542a1532048..45238678dd4c 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1364,12 +1364,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1379,14 +1376,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1395,7 +1390,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1843,7 +1838,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1878,14 +1873,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1912,7 +1907,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 6e1315f37d92..51f63eaf867f 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1645,7 +1645,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2349,7 +2348,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e2f7213acb84..3fde099ab42c 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1916,7 +1916,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 57534be8e8b8..33f374fbc131 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d031..c5777772a09e 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index d0030af0610b..29de39910c6e 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1183,7 +1183,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1201,7 +1200,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 0e25dfdd4ce3..f2292ffd5593 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9a974dff580e..c2374ebb6759 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 5b3ac9d2fa3f..a663a862fc4b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2686,7 +2686,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 3438b3650de6..eee65ac77399 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 45edbf5a51fe..7baba5202f0a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3725,7 +3725,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 2dc6b3720f02..20a2e5e3a7ad 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2927,7 +2927,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 2d43c666fdbb..2c4103ac7ef9 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6e59f8c71c65..05d2d82caaea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -72,7 +72,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -683,7 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index b547c42f9137..d28fedc96e1a 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bdaca245f9ea..c5532c568611 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index b2c73eadcf6d..54198a2636fc 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
@@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 45d76f3ec321..89a54a212b19 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index d5d610c80bcd..f94a1fed0a38 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 0438c3f08c24..063a9c6a6f7f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 9acd4a43aad7..3b607970f984 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6042,7 +6042,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6064,14 +6063,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4ceb5bf322d8..295e5a39b245 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -597,15 +597,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index d31cf9e0a7c9..9f2dee4abaee 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3036,7 +3036,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5097,7 +5096,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 1801d87334a1..ee2d2b75e59a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0655965c0fb9..d8d7e481dea0 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 44761b695a8d..a6458d2ce9b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index dc906872192f..0003fd54dde5 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b1575f59a204..88b9d11cc738 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -148,7 +148,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index eed0e05a8fc1..698d22e22685 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 663cb1460f4f..27f6932dc74e 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..62b215f62cd6 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -940,8 +940,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 3af1d19e3079..746ce343b0d9 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e1b9066dcf5d..4abe561677de 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2548,7 +2548,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index cfffc94c4895..a19895af1f17 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 754fee5a5780..8644454a9aef 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 39e12fea47f4..4caa9ac3cafa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a5dfca5a9a4b..5f5ec260f315 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index e28035998e6c..87538dccc879 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 62f6e42a9437..1790ec024072 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 67e6356acff6..1890c88a5b01 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2003,10 +2003,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 46568eba9c01..05385807e83e 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 03c0b8bb15b8..6aa1b66ecfcc 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 66d76e87cb25..f27c76bb7a73 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2db1b5fc154f..5de5df997ee9 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 427b882831bf..999809e6ed41 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
return -1;
}
mergeable = !!ret;
- if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (ret)
vmdq_conf_default.rxmode.mtu = MAX_MTU;
- }
break;
case OPT_STATS_NUM:
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 8683857cd1ac..ea90bb19680a 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1499,13 +1498,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3619,7 +3611,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3647,27 +3638,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index e73c7c522196..5d64cd6d9504 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1356,7 +1356,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (2 preceding siblings ...)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-12 5:58 ` Andrew Rybchenko
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
` (5 subsequent siblings)
9 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev, Huisong Li, Konstantin Ananyev
Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' set the MTU but
apply slightly different checks, e.g. one checks the minimum MTU against
'RTE_ETHER_MIN_MTU' and the other against 'RTE_ETHER_MIN_LEN'.
The checks are moved into a common function to unify them. This also has
the benefit of common error logs.
The default 'dev_info->min_mtu' (the one set by ethdev if the driver
doesn't provide one) is changed to ('RTE_ETHER_MIN_LEN' - overhead).
Previously it was 'RTE_ETHER_MIN_MTU', which is the minimum MTU for IPv4
packets. Since the intention is to provide a minimum MTU corresponding to
the minimum frame size, the new default value suits better.
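As an illustration (not part of this patch), a minimal application-side
sketch of the two paths that now go through the same validation; the port
id, queue counts and MTU values are placeholders:

    #include <rte_ethdev.h>

    /* Illustrative only: both calls below are validated by the same
     * unified MTU checks introduced in this patch.
     */
    static int
    set_port_mtu_example(uint16_t port_id)
    {
            struct rte_eth_conf conf = { 0 };
            int ret;

            /* Path 1: request the MTU at configure time (0 means RTE_ETHER_MTU). */
            conf.rxmode.mtu = 9000; /* placeholder jumbo MTU */
            ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
            if (ret != 0)
                    return ret;

            /* Path 2: change the MTU later at runtime, same checks apply. */
            return rte_eth_dev_set_mtu(port_id, RTE_ETHER_MTU);
    }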
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v6:
* Commit log updated to document new default 'dev_info->min_mtu' value
---
lib/ethdev/rte_ethdev.c | 91 +++++++++++++++++++++++++----------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 57 insertions(+), 36 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ea90bb19680a..2cf50f3abc94 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1350,6 +1350,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint32_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1357,8 +1398,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
- uint32_t max_rx_pktlen;
- uint32_t overhead_len;
int diag;
int ret;
uint16_t old_mtu;
@@ -1407,10 +1446,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (ret != 0)
goto rollback;
- /* Get the real Ethernet overhead length */
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
-
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
* as it is valid for either Tx or Rx (but not both) to be zero.
@@ -1477,26 +1512,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1505,6 +1527,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint32_t max_rx_pktlen;
+ uint32_t overhead_len;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3417,7 +3445,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3623,21 +3652,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 5d64cd6d9504..d7d01d99640c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3026,7 +3026,7 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (3 preceding siblings ...)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-11 23:53 ` Ferruh Yigit
2021-10-12 6:02 ` [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length Andrew Rybchenko
` (4 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-11 23:53 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4caa9ac3cafa..4f0e12e62447 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -516,7 +516,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks
2021-10-11 20:15 ` Ferruh Yigit
@ 2021-10-12 4:02 ` lihuisong (C)
0 siblings, 0 replies; 112+ messages in thread
From: lihuisong (C) @ 2021-10-12 4:02 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Thomas Monjalon, Andrew Rybchenko
On 2021/10/12 4:15, Ferruh Yigit wrote:
> On 10/9/2021 12:43 PM, lihuisong (C) wrote:
>> Hi, Ferruh
>>
>> 在 2021/10/8 0:56, Ferruh Yigit 写道:
>>> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
>>> have slightly different checks. Like one checks min MTU against
>>> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>>>
>>> Checks moved into common function to unify the checks. Also this has
>>> benefit to have common error logs.
>>>
>>> Suggested-by: Huisong Li <lihuisong@huawei.com>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> ---
>>> lib/ethdev/rte_ethdev.c | 82
>>> ++++++++++++++++++++++++++---------------
>>> lib/ethdev/rte_ethdev.h | 2 +-
>>> 2 files changed, 54 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index c2b624aba1a0..0a6e952722ae 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -1336,6 +1336,47 @@ eth_dev_get_overhead_len(uint32_t
>>> max_rx_pktlen, uint16_t max_mtu)
>>> return overhead_len;
>>> }
>>> +/* rte_eth_dev_info_get() should be called prior to this function */
>>> +static int
>>> +eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info
>>> *dev_info,
>>> + uint16_t mtu)
>>> +{
>>> + uint16_t overhead_len;
>>> + uint32_t frame_size;
>>> +
>>> + if (mtu < dev_info->min_mtu) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "MTU (%u) < device min MTU (%u) for port_id %u\n",
>>> + mtu, dev_info->min_mtu, port_id);
>>> + return -EINVAL;
>>> + }
>>> + if (mtu > dev_info->max_mtu) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "MTU (%u) > device max MTU (%u) for port_id %u\n",
>>> + mtu, dev_info->max_mtu, port_id);
>>> + return -EINVAL;
>>> + }
>>> +
>>> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
>>> + dev_info->max_mtu);
>>> + frame_size = mtu + overhead_len;
>>> + if (frame_size < RTE_ETHER_MIN_LEN) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Frame size (%u) < min frame size (%u) for port_id %u\n",
>>> + frame_size, RTE_ETHER_MIN_LEN, port_id);
>>> + return -EINVAL;
>>> + }
>>> +
>>> + if (frame_size > dev_info->max_rx_pktlen) {
>>> + RTE_ETHDEV_LOG(ERR,
>>> + "Frame size (%u) > device max frame size (%u) for
>>> port_id %u\n",
>>> + frame_size, dev_info->max_rx_pktlen, port_id);
>>> + return -EINVAL;
>>> + }
>>
>> This function is used to verify the MTU. So "frame_size" is redundant.
>>
>
> Yes it is redundant for the drivers that both announce 'max_rx_pktlen'
> & 'max_mtu',
> but stil some drivers doesn't announce the 'max_mtu' values and
> default value
> 'UINT16_MAX' is set by ethdev, specially virtual drivers.
> That is why I kept both to be in safe side.
>
Good job!
>> As modified by this patch, dev_info->min_mtu is calculated based on
>> RTE_ETHER_MIN_LEN.
>>
>
> And for the min check, for the default 'min_mtu' check is redundant,
> but for the cases
> driver sets "min_mtu < (RTE_ETHER_MIN_LEN - overhead_len)" second
> check becomes
> different limit. I don't know if this happens at all in practice but I
> think it
> doesn't hurt to have both checks to be on safe side.
It's a little ingenious.
Is it better to add a comment on the check for "frame_size"?
Another comment:
a few drivers report the minimum MTU using the previous value.
Now that the default "min_mtu" in the ethdev layer has been modified,
do you think we need to delete them?
>
>>> +
>>> + return 0;
>>> +}
>>> +
>>> int
>>> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t
>>> nb_tx_q,
>>> const struct rte_eth_conf *dev_conf)
>>> @@ -1463,26 +1504,13 @@ rte_eth_dev_configure(uint16_t port_id,
>>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>>> goto rollback;
>>> }
>>> - /*
>>> - * Check that the maximum RX packet length is supported by the
>>> - * configured device.
>>> - */
>>> if (dev_conf->rxmode.mtu == 0)
>>> dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
>>> - max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>>> - if (max_rx_pktlen > dev_info.max_rx_pktlen) {
>>> - RTE_ETHDEV_LOG(ERR,
>>> - "Ethdev port_id=%u max_rx_pktlen %u > max valid value
>>> %u\n",
>>> - port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
>>> - ret = -EINVAL;
>>> - goto rollback;
>>> - } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
>>> - RTE_ETHDEV_LOG(ERR,
>>> - "Ethdev port_id=%u max_rx_pktlen %u < min valid value
>>> %u\n",
>>> - port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
>>> - ret = -EINVAL;
>>> +
>>> + ret = eth_dev_validate_mtu(port_id, &dev_info,
>>> + dev->data->dev_conf.rxmode.mtu);
>>> + if (ret != 0)
>>> goto rollback;
>>> - }
>>> dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
>>> @@ -1491,6 +1519,9 @@ rte_eth_dev_configure(uint16_t port_id,
>>> uint16_t nb_rx_q, uint16_t nb_tx_q,
>>> * size is supported by the configured device.
>>> */
>>> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>> + overhead_len =
>>> eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
>>> + dev_info.max_mtu);
>>> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
>>> if (dev_conf->rxmode.max_lro_pkt_size == 0)
>>> dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
>>> ret = eth_dev_check_lro_pkt_size(port_id,
>>> @@ -3437,7 +3468,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct
>>> rte_eth_dev_info *dev_info)
>>> dev_info->rx_desc_lim = lim;
>>> dev_info->tx_desc_lim = lim;
>>> dev_info->device = dev->device;
>>> - dev_info->min_mtu = RTE_ETHER_MIN_MTU;
>>> + dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
>>> + RTE_ETHER_CRC_LEN;
>> I suggest that the adjustment to the minimum mtu size is also
>> explicitly reflected in the commit log.
>
> ack, I will
>
>>> dev_info->max_mtu = UINT16_MAX;
>>> RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
>>> @@ -3643,21 +3675,13 @@ rte_eth_dev_set_mtu(uint16_t port_id,
>>> uint16_t mtu)
>>> * which relies on dev->dev_ops->dev_infos_get.
>>> */
>>> if (*dev->dev_ops->dev_infos_get != NULL) {
>>> - uint16_t overhead_len;
>>> - uint32_t frame_size;
>>> -
>>> ret = rte_eth_dev_info_get(port_id, &dev_info);
>>> if (ret != 0)
>>> return ret;
>>> - if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
>>> - return -EINVAL;
>>> -
>>> - overhead_len =
>>> eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
>>> - dev_info.max_mtu);
>>> - frame_size = mtu + overhead_len;
>>> - if (mtu < RTE_ETHER_MIN_MTU || frame_size >
>>> dev_info.max_rx_pktlen)
>>> - return -EINVAL;
>>> + ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
>>> + if (ret != 0)
>>> + return ret;
>>> }
>>> ret = (*dev->dev_ops->mtu_set)(dev, mtu);
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index 4d0f956a4b28..50e124ff631f 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -3056,7 +3056,7 @@ int rte_eth_macaddr_get(uint16_t port_id,
>>> struct rte_ether_addr *mac_addr);
>>> * };
>>> *
>>> * device = dev->device
>>> - * min_mtu = RTE_ETHER_MIN_MTU
>>> + * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
>>> * max_mtu = UINT16_MAX
>>> *
>>> * The following fields will be populated if support for
>>> dev_infos_get()
>
> .
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-12 5:58 ` Andrew Rybchenko
0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-12 5:58 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon; +Cc: dev, Huisong Li, Konstantin Ananyev
On 10/12/21 2:53 AM, Ferruh Yigit wrote:
> Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' sets MTU but
> have slightly different checks. Like one checks min MTU against
> RTE_ETHER_MIN_MTU and other RTE_ETHER_MIN_LEN.
>
> Checks moved into common function to unify the checks. Also this has
> benefit to have common error logs.
>
> Default 'dev_info->min_mtu' (the one set by ethdev if driver doesn't
> provide one), changed to ('RTE_ETHER_MIN_LEN' - overhead). Previously it
> was 'RTE_ETHER_MIN_MTU' which is min MTU for IPv4 packets. Since the
> intention is to provide min MTU corresponding minimum frame size, new
> default value suits better.
>
> Suggested-by: Huisong Li <lihuisong@huawei.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (4 preceding siblings ...)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-12 6:02 ` Andrew Rybchenko
2021-10-12 9:42 ` Ananyev, Konstantin
` (3 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-12 6:02 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Keith Wiles, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru,
Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh,
Kirill Rybalchenko, Jasvinder Singh, Thomas Monjalon
Cc: dev, Huisong Li
On 10/12/21 2:53 AM, Ferruh Yigit wrote:
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-11 21:59 ` Ferruh Yigit
@ 2021-10-12 7:03 ` Matan Azrad
2021-10-12 11:03 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Matan Azrad @ 2021-10-12 7:03 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang,
Slava Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
NBU-Contact-Thomas Monjalon, Dekel Peled
Cc: dev
Hi Ferruh
From: Ferruh Yigit
> On 10/10/2021 7:30 AM, Matan Azrad wrote:
> >
> > Hi Ferruh
> >
> > From: Ferruh Yigit
> >> There is a confusion on setting max Rx packet length, this patch aims
> >> to clarify it.
> >>
> >> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> >> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> >> rte_eth_conf'.
> >>
> >> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and
> >> result stored into '(struct rte_eth_dev)->data->mtu'.
> >>
> >> These two APIs are related but they work in a disconnected way, they
> >> store the set values in different variables which makes hard to
> >> figure out which one to use, also having two different method for a
> >> related functionality is confusing for the users.
> >>
> >> Other issues causing confusion is:
> >> * maximum transmission unit (MTU) is payload of the Ethernet frame.
> And
> >> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> >> Ethernet frame overhead, and this overhead may be different from
> >> device to device based on what device supports, like VLAN and QinQ.
> >> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> >> which adds additional confusion and some APIs and PMDs already
> >> discards this documented behavior.
> >> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> >> field, this adds configuration complexity for application.
> >>
> >> As solution, both APIs gets MTU as parameter, and both saves the
> >> result in same variable '(struct rte_eth_dev)->data->mtu'. For this
> >> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> >> from jumbo frame.
> >>
> >> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is
> >> user request and it should be used only within configure function and
> >> result should be stored to '(struct rte_eth_dev)->data->mtu'. After
> >> that point both application and PMD uses MTU from this variable.
> >>
> >> When application doesn't provide an MTU during
> 'rte_eth_dev_configure()'
> >> default 'RTE_ETHER_MTU' value is used.
> >>
> >> Additional clarification done on scattered Rx configuration, in
> >> relation to MTU and Rx buffer size.
> >> MTU is used to configure the device for physical Rx/Tx size
> >> limitation, Rx buffer is where to store Rx packets, many PMDs use
> >> mbuf data buffer size as Rx buffer size.
> >> PMDs compare MTU against Rx buffer size to decide enabling scattered
> >> Rx or not. If scattered Rx is not supported by device, MTU bigger
> >> than Rx buffer size should fail.
> >
> > Should it be compared also against max_lro_pkt_size for the SCATTER
> enabling by the PMD?
> >
>
> I kept the LRO related code same, the Rx packet length change patch already
> become complex, LRO related changes can be done later instead of making
> this set more confusing.
> It would be great if you and Dekel can work on it as you introduced the
> 'max_lro_pkt_size' in ethdev.
'max_lro_pkt_size' is not like MTU (LRO is done after the PHY has received the packet at MTU size).
I just asked about the SCATTER comparison for this case; I think it should be the same comparison as for MTU.
> > What do you think about enabling SCATTER by the API instead of making
> the comparison in each PMD?
> >
>
> Not sure if we can do that, as far as I can see there is no enforcement on the
> Rx buffer size but PMDs select it.
Yes, it looks like currently it is the PMD's decision.
And we can take up scattering later (we say that all the time 😊).
Acked-by: Matan Azrad <matan@nvidia.com>
Maybe it is good to report the device's max Rx buffer length, to let the application have more information to configure the most efficient mbuf size and to know whether it may get scattered packets or not.
Also, it would help to do all the validations in the ethdev layer.
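For context, a PMD-side sketch of the scattered-Rx decision being discussed
(illustrative only, not code from this series; 'overhead_len' and 'rxq->mp'
are placeholders for the device-specific overhead and the Rx mempool):

    /* Typical PMD logic: compare the frame size implied by the MTU with the
     * mbuf data room to decide whether scattered Rx is needed.
     */
    uint32_t frame_size = dev->data->mtu + overhead_len;
    uint16_t buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
                        RTE_PKTMBUF_HEADROOM;

    if (frame_size > buf_size) {
            if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_SCATTER) == 0)
                    return -EINVAL; /* MTU too large for a single Rx buffer */
            dev->data->scattered_rx = 1;
    }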
<snip, same discussion>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (5 preceding siblings ...)
2021-10-12 6:02 ` [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length Andrew Rybchenko
@ 2021-10-12 9:42 ` Ananyev, Konstantin
2021-10-13 7:08 ` Xu, Rosen
` (2 subsequent siblings)
9 siblings, 0 replies; 112+ messages in thread
From: Ananyev, Konstantin @ 2021-10-12 9:42 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Iremonger, Bernard, Kiran Kumar K,
Nithin Dabilpuram, Hunt, David, Mcnamara, John, Richardson,
Bruce, Igor Russkikh, Steven Webster, Peters, Matt,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Wang, Haiyue, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Daley, John,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Andrew Boyer, Xu, Rosen, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko, Wiles,
Keith, Jiawen Wu, Jian Wang, Maxime Coquelin, Xia, Chenbo,
Chautru, Nicolas, Van Haaren, Harry, Dumitrescu, Cristian,
Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz, Doherty, Declan,
Pavan Nikhilesh, Rybalchenko, Kirill, Singh, Jasvinder,
Thomas Monjalon
Cc: dev, Huisong Li
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
>
> v5:
> * fix testpmd '--max-pkt-len=###' parameter for DTS jumbo frame test
>
> v6:
> * uint32_t type used in 'eth_dev_get_overhead_len()' helper function
> ---
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> --
> 2.31.1
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
2021-10-12 7:03 ` Matan Azrad
@ 2021-10-12 11:03 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-12 11:03 UTC (permalink / raw)
To: Matan Azrad, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang,
Slava Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
NBU-Contact-Thomas Monjalon, Dekel Peled
Cc: dev
On 10/12/2021 8:03 AM, Matan Azrad wrote:
> Hi Ferruh
>
> From: Ferruh Yigit
>> On 10/10/2021 7:30 AM, Matan Azrad wrote:
>>>
>>> Hi Ferruh
>>>
>>> From: Ferruh Yigit
>>>> There is a confusion on setting max Rx packet length, this patch aims
>>>> to clarify it.
>>>>
>>>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>>>> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
>>>> rte_eth_conf'.
>>>>
>>>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and
>>>> result stored into '(struct rte_eth_dev)->data->mtu'.
>>>>
>>>> These two APIs are related but they work in a disconnected way, they
>>>> store the set values in different variables which makes hard to
>>>> figure out which one to use, also having two different method for a
>>>> related functionality is confusing for the users.
>>>>
>>>> Other issues causing confusion is:
>>>> * maximum transmission unit (MTU) is payload of the Ethernet frame.
>> And
>>>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>>>> Ethernet frame overhead, and this overhead may be different from
>>>> device to device based on what device supports, like VLAN and QinQ.
>>>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>>>> which adds additional confusion and some APIs and PMDs already
>>>> discards this documented behavior.
>>>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>>>> field, this adds configuration complexity for application.
>>>>
>>>> As solution, both APIs gets MTU as parameter, and both saves the
>>>> result in same variable '(struct rte_eth_dev)->data->mtu'. For this
>>>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>>>> from jumbo frame.
>>>>
>>>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is
>>>> user request and it should be used only within configure function and
>>>> result should be stored to '(struct rte_eth_dev)->data->mtu'. After
>>>> that point both application and PMD uses MTU from this variable.
>>>>
>>>> When application doesn't provide an MTU during
>> 'rte_eth_dev_configure()'
>>>> default 'RTE_ETHER_MTU' value is used.
>>>>
>>>> Additional clarification done on scattered Rx configuration, in
>>>> relation to MTU and Rx buffer size.
>>>> MTU is used to configure the device for physical Rx/Tx size
>>>> limitation, Rx buffer is where to store Rx packets, many PMDs use
>>>> mbuf data buffer size as Rx buffer size.
>>>> PMDs compare MTU against Rx buffer size to decide enabling scattered
>>>> Rx or not. If scattered Rx is not supported by device, MTU bigger
>>>> than Rx buffer size should fail.
>>>
>>> Should it be compared also against max_lro_pkt_size for the SCATTER
>> enabling by the PMD?
>>>
>>
>> I kept the LRO related code same, the Rx packet length change patch already
>> become complex, LRO related changes can be done later instead of making
>> this set more confusing.
>> It would be great if you and Dekel can work on it as you introduced the
>> 'max_lro_pkt_size' in ethdev.
>
> 'max_lro_pkt_size' is not like MTU (the LRO is done after the PHY received the packet in MTU size.),
> I just asked regarding the SCATTER comparison for this case; I think it should be the same comparison as MTU.
>
>>> What do you think about enabling SCATTER by the API instead of making
>> the comparison in each PMD?
>>>
>>
>> Not sure if we can do that, as far as I can see there is no enforcement on the
>> Rx buffer size but PMDs select it.
>
> Yes, it looks like currently, it is the PMD decision.
> And we can take scattering later(we all the time say that 😊).
>
When these details are not clear, drivers implement them slightly differently,
which later makes it hard to decide which design is correct and to update the
various implementations.
'max_rx_pkt_len' was one of those nuisances that took some time to address.
For the others, I agree to address them, just not in this set. What you have
mentioned is:
1) 'max_lro_pkt_size'
2) Scattered Rx configuration
Can you please make a patch to update the deprecation notice for them? At
least documenting them prevents us from forgetting (or postponing) them.
Later we can find an owner, start a discussion thread and fix them for v22.11.
Does it sound reasonable?
> Acked-by: Matan Azrad <matan@nvidia.com>
>
> Maybe, it is good to report the device's max Rx buffer length to let the application have more information to configure the most efficient mbuf size and whether it may get scattered packets or not.
> Also, it will help do all the validations in ethdev layer.
>
>
> <snip, same discussion>
>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-12 17:20 ` Hyong Youb Kim (hyonkim)
2021-10-13 7:16 ` Michał Krawczyk
1 sibling, 0 replies; 112+ messages in thread
From: Hyong Youb Kim (hyonkim) @ 2021-10-12 17:20 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Ajit Khaparde,
Somnath Kotur, Igor Russkikh, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley (johndale),
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: dev, Huisong Li
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, October 12, 2021 8:54 AM
[...]
> Subject: [PATCH v6 4/6] ethdev: remove jumbo offload flag
>
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announce this capability, application can deduct the
> capability by checking reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of application setting this flag explicitly to enable jumbo
> frames, this can be deduced by driver by comparing requested 'mtu' to
> 'RTE_ETHER_MTU'.
>
> Removing this additional configuration for simplification.
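[Editorial note, not part of the quoted patch: a hedged sketch of the new application flow after the flag removal. The function name, 'want_mtu' and the 1/1 queue counts are placeholders.]

#include <rte_ethdev.h>

/* Sketch only: request jumbo frames by MTU, without any offload flag.
 * 'port_id', 'want_mtu' and the queue counts are placeholders.
 */
static int
example_request_jumbo(uint16_t port_id, uint16_t want_mtu)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Capability check replaces querying the removed offload flag */
	if (want_mtu > dev_info.max_mtu)
		return -EINVAL;

	/* Driver infers jumbo support from MTU > RTE_ETHER_MTU */
	conf.rxmode.mtu = want_mtu;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}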
>
> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> ---
For net/enic,
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Thanks.
-Hyong
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (6 preceding siblings ...)
2021-10-12 9:42 ` Ananyev, Konstantin
@ 2021-10-13 7:08 ` Xu, Rosen
2021-10-15 1:31 ` Hyong Youb Kim (hyonkim)
2021-10-16 0:24 ` Ferruh Yigit
9 siblings, 0 replies; 112+ messages in thread
From: Xu, Rosen @ 2021-10-13 7:08 UTC (permalink / raw)
To: Yigit, Ferruh, Jerin Jacob, Li, Xiaoyun, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Zhang, Qi Z, Wang, Xiao W,
Matan Azrad, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
Ray Kinsella, Iremonger, Bernard, Ananyev, Konstantin,
Kiran Kumar K, Nithin Dabilpuram, Hunt, David, Mcnamara, John,
Richardson, Bruce, Igor Russkikh, Steven Webster, Peters, Matt,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Wang, Haiyue, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, Daley, John,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Xing, Beilei, Wu, Jingjing, Yang, Qiming,
Andrew Boyer, Shijith Thotton, Srisivasubramanian Srinivasan,
Zyta Szpak, Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Wiles, Keith, Jiawen Wu, Jian Wang,
Maxime Coquelin, Xia, Chenbo, Chautru, Nicolas, Van Haaren,
Harry, Dumitrescu, Cristian, Nicolau, Radu, Akhil Goyal,
Kantecki, Tomasz, Doherty, Declan, Pavan Nikhilesh, Rybalchenko,
Kirill, Singh, Jasvinder, Thomas Monjalon
Cc: dev, Huisong Li
Hi,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Tuesday, October 12, 2021 7:54
> To: Jerin Jacob <jerinj@marvell.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Chas Williams <chas3@att.com>; Min Hu (Connor) <humin29@huawei.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> <sachin.saxena@oss.nxp.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang,
> Xiao W <xiao.w.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> Viacheslav Ovsiienko <viacheslavo@nvidia.com>; Harman Kalra
> <hkalra@marvell.com>; Maciej Czekaj <mczekaj@marvell.com>; Ray Kinsella
> <mdr@ashroe.eu>; Iremonger, Bernard <bernard.iremonger@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Kiran Kumar K
> <kirankumark@marvell.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Hunt, David <david.hunt@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Igor Russkikh <irusskikh@marvell.com>;
> Steven Webster <steven.webster@windriver.com>; Peters, Matt
> <matt.peters@windriver.com>; Somalapuram Amaranath
> <asomalap@amd.com>; Rasesh Mody <rmody@marvell.com>; Shahed
> Shaikh <shshaikh@marvell.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Sunil Kumar Kori <skori@marvell.com>;
> Satha Rao <skoteshwar@marvell.com>; Rahul Lakkireddy
> <rahul.lakkireddy@chelsio.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> Marcin Wojtas <mw@semihalf.com>; Michal Krawczyk <mk@semihalf.com>;
> Shai Brandes <shaibran@amazon.com>; Evgeny Schemeilin
> <evgenys@amazon.com>; Igor Chauskin <igorch@amazon.com>; Gagandeep
> Singh <g.singh@nxp.com>; Daley, John <johndale@cisco.com>; Hyong Youb
> Kim <hyonkim@cisco.com>; Ziyang Xuan <xuanziyang2@huawei.com>;
> Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>; Guoyang Zhou
> <zhouguoyang@huawei.com>; Yisen Zhuang <yisen.zhuang@huawei.com>;
> Lijun Ou <oulijun@huawei.com>; Xing, Beilei <beilei.xing@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> Andrew Boyer <aboyer@pensando.io>; Xu, Rosen <rosen.xu@intel.com>;
> Shijith Thotton <sthotton@marvell.com>; Srisivasubramanian Srinivasan
> <srinivasan@marvell.com>; Zyta Szpak <zr@semihalf.com>; Liron Himi
> <lironh@marvell.com>; Heinrich Kuhn <heinrich.kuhn@corigine.com>;
> Devendra Singh Rawat <dsinghrawat@marvell.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Wiles, Keith <keith.wiles@intel.com>;
> Jiawen Wu <jiawenwu@trustnetic.com>; Jian Wang
> <jianwang@trustnetic.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; Xia, Chenbo <chenbo.xia@intel.com>;
> Chautru, Nicolas <nicolas.chautru@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>;
> Akhil Goyal <gakhil@marvell.com>; Kantecki, Tomasz
> <tomasz.kantecki@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Pavan Nikhilesh <pbhagavatula@marvell.com>; Rybalchenko, Kirill
> <kirill.rybalchenko@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org; Huisong Li
> <lihuisong@huawei.com>
> Subject: [PATCH v6 1/6] ethdev: fix max Rx packet length
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
>
> v5:
> * fix testpmd '--max-pkt-len=###' parameter for DTS jumbo frame test
>
> v6:
> * uint32_t type used in 'eth_dev_get_overhead_len()' helper function
> ---
Acked-by: Rosen Xu <rosen.xu@intel.com>
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-12 17:20 ` Hyong Youb Kim (hyonkim)
@ 2021-10-13 7:16 ` Michał Krawczyk
1 sibling, 0 replies; 112+ messages in thread
From: Michał Krawczyk @ 2021-10-13 7:16 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
Gagandeep Singh, John Daley, Hyong Youb Kim, Gaetan Rivet,
Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Matan Azrad, Viacheslav Ovsiienko,
Zyta Szpak, Liron Himi, Heinrich Kuhn, Harman Kalra,
Nalla Pradeep, Radha Mohan Chintakuntla, Veerasenareddy Burru,
Devendra Singh Rawat, Andrew Rybchenko, Maciej Czekaj, Jiawen Wu,
Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, Thomas Monjalon, dev, Huisong Li
Tue, 12 Oct 2021 at 01:54, Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>
> Instead of drivers announce this capability, application can deduct the
> capability by checking reported 'dev_info.max_mtu' or
> 'dev_info.max_rx_pktlen'.
>
> And instead of application setting this flag explicitly to enable jumbo
> frames, this can be deduced by driver by comparing requested 'mtu' to
> 'RTE_ETHER_MTU'.
>
> Removing this additional configuration for simplification.
>
> Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> ---
For net/ena:
Acked-by: Michal Krawczyk <mk@semihalf.com>
Thanks,
Michal
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (7 preceding siblings ...)
2021-10-13 7:08 ` Xu, Rosen
@ 2021-10-15 1:31 ` Hyong Youb Kim (hyonkim)
2021-10-16 0:24 ` Ferruh Yigit
9 siblings, 0 replies; 112+ messages in thread
From: Hyong Youb Kim (hyonkim) @ 2021-10-15 1:31 UTC (permalink / raw)
To: Ferruh Yigit, Jerin Jacob, Xiaoyun Li, Chas Williams,
Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
John Daley (johndale),
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou,
Beilei Xing, Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Zyta Szpak,
Liron Himi, Heinrich Kuhn, Devendra Singh Rawat,
Andrew Rybchenko, Keith Wiles, Jiawen Wu, Jian Wang,
Maxime Coquelin, Chenbo Xia, Nicolas Chautru, Harry van Haaren,
Cristian Dumitrescu, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
Jasvinder Singh, Thomas Monjalon
Cc: dev, Huisong Li
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, October 12, 2021 8:54 AM
[...]
> Subject: [PATCH v6 1/6] ethdev: fix max Rx packet length
>
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> ---
For net/enic,
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Thanks.
-Hyong
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
` (8 preceding siblings ...)
2021-10-15 1:31 ` Hyong Youb Kim (hyonkim)
@ 2021-10-16 0:24 ` Ferruh Yigit
2021-10-18 8:54 ` Ferruh Yigit
9 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-16 0:24 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: dev, Huisong Li
On 10/12/2021 12:53 AM, Ferruh Yigit wrote:
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit<ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde<ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur<somnath.kotur@broadcom.com>
> Acked-by: Huisong Li<lihuisong@huawei.com>
Series applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 112+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length
2021-10-16 0:24 ` Ferruh Yigit
@ 2021-10-18 8:54 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 8:54 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: dev, Huisong Li
On 10/16/2021 1:24 AM, Ferruh Yigit wrote:
> On 10/12/2021 12:53 AM, Ferruh Yigit wrote:
>> There is a confusion on setting max Rx packet length, this patch aims to
>> clarify it.
>>
>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
>> rte_eth_conf'.
>>
>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>> stored into '(struct rte_eth_dev)->data->mtu'.
>>
>> These two APIs are related but they work in a disconnected way, they
>> store the set values in different variables which makes hard to figure
>> out which one to use, also having two different method for a related
>> functionality is confusing for the users.
>>
>> Other issues causing confusion is:
>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>> Ethernet frame overhead, and this overhead may be different from
>> device to device based on what device supports, like VLAN and QinQ.
>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>> which adds additional confusion and some APIs and PMDs already
>> discards this documented behavior.
>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>> field, this adds configuration complexity for application.
>>
>> As solution, both APIs gets MTU as parameter, and both saves the result
>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>> from jumbo frame.
>>
>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>> request and it should be used only within configure function and result
>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>> both application and PMD uses MTU from this variable.
>>
>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>> default 'RTE_ETHER_MTU' value is used.
>>
>> Additional clarification done on scattered Rx configuration, in
>> relation to MTU and Rx buffer size.
>> MTU is used to configure the device for physical Rx/Tx size limitation,
>> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
>> size as Rx buffer size.
>> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
>> or not. If scattered Rx is not supported by device, MTU bigger than Rx
>> buffer size should fail.
>>
>> Signed-off-by: Ferruh Yigit<ferruh.yigit@intel.com>
>> Acked-by: Ajit Khaparde<ajit.khaparde@broadcom.com>
>> Acked-by: Somnath Kotur<somnath.kotur@broadcom.com>
>> Acked-by: Huisong Li<lihuisong@huawei.com>
>
> Series applied to dpdk-next-net/main, thanks.
>
I noticed some errors in the jumbo frame detection checks, so I am sending a
new version; this set is dropped from next-net.
^ permalink raw reply [flat|nested] 112+ messages in thread
* [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
` (8 preceding siblings ...)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
` (5 more replies)
9 siblings, 6 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: Ferruh Yigit, dev, Huisong Li
There is confusion about setting the max Rx packet length; this patch aims to
clarify it.
The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.
Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored in '(struct rte_eth_dev)->data->mtu'.
These two APIs are related, but they work in a disconnected way: they
store the set values in different variables, which makes it hard to figure
out which one to use, and having two different methods for related
functionality is confusing for users.
Other issues causing confusion are:
* Maximum transmission unit (MTU) is the payload size of the Ethernet frame,
  while 'max_rx_pkt_len' is the size of the whole Ethernet frame. The
  difference is the Ethernet frame overhead, and this overhead may differ
  from device to device based on what the device supports, like VLAN and QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo frames,
  which adds additional confusion, and some APIs and PMDs already disregard
  this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory field,
  which adds configuration complexity for the application.
As a solution, both APIs now take the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is replaced by 'mtu', and it is always valid, independent
of the jumbo frame configuration.
For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the result
should be stored in '(struct rte_eth_dev)->data->mtu'. After that point both
the application and the PMD use the MTU from this variable.
When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.
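[Editorial note, not part of the patch: a minimal usage sketch of this unified flow. The port id, queue counts and the 2000-byte MTU are arbitrary placeholders.]

#include <rte_ethdev.h>

/* Sketch: leaving rxmode.mtu at 0 means RTE_ETHER_MTU is used at
 * configure time; a later rte_eth_dev_set_mtu() updates the same
 * dev->data->mtu, so one query reflects whichever was set last.
 */
static int
example_mtu_flow(uint16_t port_id)
{
	struct rte_eth_conf conf = {0}; /* rxmode.mtu == 0 -> RTE_ETHER_MTU */
	uint16_t mtu;
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_set_mtu(port_id, 2000);
	if (ret != 0)
		return ret;

	return rte_eth_dev_get_mtu(port_id, &mtu); /* mtu is now 2000 */
}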
Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs use
the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
than the Rx buffer size should fail.
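[Editorial note, not part of the patch: for applications that must fit frames into a fixed Rx buffer, the device-specific overhead can be derived from 'rte_eth_dev_info', in the same spirit as the testpmd helper in the diff below. The sketch and the 2048-byte buffer in the usage comment are illustrative assumptions.]

#include <rte_ethdev.h>

/* Sketch: derive L2 overhead so a target frame size (e.g. an Rx buffer
 * size) can be converted into an MTU value for rte_eth_dev_set_mtu().
 */
static uint32_t
example_eth_overhead(const struct rte_eth_dev_info *dev_info)
{
	if (dev_info->max_mtu != UINT16_MAX &&
			dev_info->max_rx_pktlen > dev_info->max_mtu)
		return dev_info->max_rx_pktlen - dev_info->max_mtu;

	return RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
}

/* Usage example: MTU that fits a 2048-byte Rx buffer:
 *	uint16_t mtu = 2048 - example_eth_overhead(&dev_info);
 */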
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
---
app/test-eventdev/test_perf_common.c | 1 -
app/test-eventdev/test_pipeline_common.c | 5 +-
app/test-pmd/cmdline.c | 49 +++----
app/test-pmd/config.c | 22 ++-
app/test-pmd/parameters.c | 2 +-
app/test-pmd/testpmd.c | 113 +++++++++------
app/test-pmd/testpmd.h | 4 +-
app/test/test_link_bonding.c | 1 -
app/test/test_link_bonding_mode4.c | 1 -
app/test/test_link_bonding_rssconf.c | 2 -
app/test/test_pmd_perf.c | 1 -
doc/guides/nics/dpaa.rst | 2 +-
doc/guides/nics/dpaa2.rst | 2 +-
doc/guides/nics/features.rst | 2 +-
doc/guides/nics/fm10k.rst | 2 +-
doc/guides/nics/mlx5.rst | 4 +-
doc/guides/nics/octeontx.rst | 2 +-
doc/guides/nics/thunderx.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 25 ----
doc/guides/sample_app_ug/flow_classify.rst | 7 +-
doc/guides/sample_app_ug/l3_forward.rst | 6 +-
.../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
.../sample_app_ug/l3_forward_power_man.rst | 4 +-
.../sample_app_ug/performance_thread.rst | 4 +-
doc/guides/sample_app_ug/skeleton.rst | 7 +-
drivers/net/atlantic/atl_ethdev.c | 3 -
drivers/net/avp/avp_ethdev.c | 17 +--
drivers/net/axgbe/axgbe_ethdev.c | 7 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
drivers/net/bnxt/bnxt_ethdev.c | 21 +--
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
drivers/net/cnxk/cnxk_ethdev.c | 9 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
drivers/net/cxgbe/cxgbe_main.c | 3 +-
drivers/net/cxgbe/sge.c | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
drivers/net/dpaa2/dpaa2_ethdev.c | 35 ++---
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/igb_ethdev.c | 18 +--
drivers/net/e1000/igb_rxtx.c | 16 +--
drivers/net/ena/ena_ethdev.c | 27 ++--
drivers/net/enetc/enetc_ethdev.c | 24 +---
drivers/net/enic/enic_ethdev.c | 2 +-
drivers/net/enic/enic_main.c | 42 +++---
drivers/net/fm10k/fm10k_ethdev.c | 2 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
drivers/net/hns3/hns3_ethdev.c | 42 +-----
drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
drivers/net/hns3/hns3_rxtx.c | 10 +-
drivers/net/i40e/i40e_ethdev.c | 10 +-
drivers/net/i40e/i40e_rxtx.c | 4 +-
drivers/net/iavf/iavf_ethdev.c | 9 +-
drivers/net/ice/ice_dcf_ethdev.c | 5 +-
drivers/net/ice/ice_ethdev.c | 14 +-
drivers/net/ice/ice_rxtx.c | 12 +-
drivers/net/igc/igc_ethdev.c | 51 ++-----
drivers/net/igc/igc_ethdev.h | 7 +
drivers/net/igc/igc_txrx.c | 22 +--
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ionic/ionic_rxtx.c | 6 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
drivers/net/ixgbe/ixgbe_pf.c | 6 +-
drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
drivers/net/liquidio/lio_ethdev.c | 20 +--
drivers/net/mlx4/mlx4_rxq.c | 17 +--
drivers/net/mlx5/mlx5_rxq.c | 25 ++--
drivers/net/mvneta/mvneta_ethdev.c | 7 -
drivers/net/mvneta/mvneta_rxtx.c | 13 +-
drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
drivers/net/nfp/nfp_common.c | 9 +-
drivers/net/octeontx/octeontx_ethdev.c | 12 +-
drivers/net/octeontx2/otx2_ethdev.c | 2 +-
drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
drivers/net/pfe/pfe_ethdev.c | 7 +-
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/qede/qede_rxtx.c | 8 +-
drivers/net/sfc/sfc_ethdev.c | 4 +-
drivers/net/sfc/sfc_port.c | 6 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +-
drivers/net/txgbe/txgbe_ethdev.c | 7 +-
drivers/net/txgbe/txgbe_ethdev.h | 4 +
drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
drivers/net/txgbe/txgbe_rxtx.c | 19 +--
drivers/net/virtio/virtio_ethdev.c | 9 +-
examples/bbdev_app/main.c | 1 -
examples/bond/main.c | 1 -
examples/distributor/main.c | 1 -
.../pipeline_worker_generic.c | 1 -
.../eventdev_pipeline/pipeline_worker_tx.c | 1 -
examples/flow_classify/flow_classify.c | 12 +-
examples/ioat/ioatfwd.c | 1 -
examples/ip_fragmentation/main.c | 12 +-
examples/ip_pipeline/link.c | 2 +-
examples/ip_reassembly/main.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 7 +-
examples/ipv4_multicast/main.c | 9 +-
examples/kni/main.c | 6 +-
examples/l2fwd-cat/l2fwd-cat.c | 8 +-
examples/l2fwd-crypto/main.c | 1 -
examples/l2fwd-event/l2fwd_common.c | 1 -
examples/l3fwd-acl/main.c | 129 +++++++++---------
examples/l3fwd-graph/main.c | 83 +++++++----
examples/l3fwd-power/main.c | 90 +++++++-----
examples/l3fwd/main.c | 84 +++++++-----
.../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
.../performance-thread/l3fwd-thread/test.sh | 24 ++--
examples/pipeline/obj.c | 2 +-
examples/ptpclient/ptpclient.c | 10 +-
examples/qos_meter/main.c | 1 -
examples/qos_sched/init.c | 1 -
examples/rxtx_callbacks/main.c | 10 +-
examples/skeleton/basicfwd.c | 12 +-
examples/vhost/main.c | 4 +-
examples/vm_power_manager/main.c | 11 +-
lib/ethdev/rte_ethdev.c | 94 +++++++------
lib/ethdev/rte_ethdev.h | 2 +-
lib/ethdev/rte_ethdev_trace.h | 2 +-
121 files changed, 814 insertions(+), 1074 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..660d5a0364b6 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..5fcea74b4d43 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
return -EINVAL;
}
- port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
- if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
+ port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
+ if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b8f06063d225..f777cc453836 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1880,45 +1880,38 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
__rte_unused void *data)
{
struct cmd_config_max_pkt_len_result *res = parsed_result;
- uint32_t max_rx_pkt_len_backup = 0;
- portid_t pid;
+ portid_t port_id;
int ret;
+ if (strcmp(res->name, "max-pkt-len") != 0) {
+ printf("Unknown parameter\n");
+ return;
+ }
+
if (!all_ports_stopped()) {
fprintf(stderr, "Please stop all ports first\n");
return;
}
- RTE_ETH_FOREACH_DEV(pid) {
- struct rte_port *port = &ports[pid];
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_port *port = &ports[port_id];
- if (!strcmp(res->name, "max-pkt-len")) {
- if (res->value < RTE_ETHER_MIN_LEN) {
- fprintf(stderr,
- "max-pkt-len can not be less than %d\n",
- RTE_ETHER_MIN_LEN);
- return;
- }
- if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
- return;
-
- ret = eth_dev_info_get_print_err(pid, &port->dev_info);
- if (ret != 0) {
- fprintf(stderr,
- "rte_eth_dev_info_get() failed for port %u\n",
- pid);
- return;
- }
-
- max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
+ if (res->value < RTE_ETHER_MIN_LEN) {
+ fprintf(stderr,
+ "max-pkt-len can not be less than %d\n",
+ RTE_ETHER_MIN_LEN);
+ return;
+ }
- port->dev_conf.rxmode.max_rx_pkt_len = res->value;
- if (update_jumbo_frame_offload(pid) != 0)
- port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
- } else {
- fprintf(stderr, "Unknown parameter\n");
+ ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
+ if (ret != 0) {
+ fprintf(stderr,
+ "rte_eth_dev_info_get() failed for port %u\n",
+ port_id);
return;
}
+
+ update_jumbo_frame_offload(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f83a1abb09cf..333d3dd62259 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1209,7 +1209,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
int diag;
struct rte_port *rte_port = &ports[port_id];
struct rte_eth_dev_info dev_info;
- uint16_t eth_overhead;
int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
@@ -1226,21 +1225,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
return;
}
diag = rte_eth_dev_set_mtu(port_id, mtu);
- if (diag)
+ if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
- else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- /*
- * Ether overhead in driver is equal to the difference of
- * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
- * device supports jumbo frame.
- */
- eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
- if (mtu > RTE_ETHER_MTU) {
+ return;
+ }
+
+ rte_port->dev_conf.rxmode.mtu = mtu;
+
+ if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (mtu > RTE_ETHER_MTU)
rte_port->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- rte_port->dev_conf.rxmode.max_rx_pkt_len =
- mtu + eth_overhead;
- } else
+ else
rte_port->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
}
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index b3217d6e5cff..ab8e8f7e694a 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -951,7 +951,7 @@ launch_args_parse(int argc, char** argv)
if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
n = atoi(optarg);
if (n >= RTE_ETHER_MIN_LEN)
- rx_mode.max_rx_pkt_len = (uint32_t) n;
+ max_rx_pkt_len = n;
else
rte_exit(EXIT_FAILURE,
"Invalid max-pkt-len=%d - should be > %d\n",
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 909b6571dc26..50d0ec4fe3db 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -219,6 +219,11 @@ unsigned int xstats_display_num; /**< Size of extended statistics to show */
*/
uint8_t f_quit;
+/*
+ * Max Rx frame size, set by '--max-pkt-len' parameter.
+ */
+uint32_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -451,13 +456,7 @@ lcoreid_t latencystats_lcore_id = -1;
/*
* Ethernet device configuration.
*/
-struct rte_eth_rxmode rx_mode = {
- /* Default maximum frame length.
- * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
- * in init_config().
- */
- .max_rx_pkt_len = 0,
-};
+struct rte_eth_rxmode rx_mode;
struct rte_eth_txmode tx_mode = {
.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
@@ -1542,11 +1541,24 @@ check_nb_hairpinq(queueid_t hairpinq)
return 0;
}
+static int
+get_eth_overhead(struct rte_eth_dev_info *dev_info)
+{
+ uint32_t eth_overhead;
+
+ if (dev_info->max_mtu != UINT16_MAX &&
+ dev_info->max_rx_pktlen > dev_info->max_mtu)
+ eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
+ else
+ eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return eth_overhead;
+}
+
static void
init_config_port_offloads(portid_t pid, uint32_t socket_id)
{
struct rte_port *port = &ports[pid];
- uint16_t data_size;
int ret;
int i;
@@ -1560,7 +1572,7 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid);
+ ret = update_jumbo_frame_offload(pid, 0);
if (ret != 0)
fprintf(stderr,
"Updating jumbo frame offload failed for port %u\n",
@@ -1580,6 +1592,10 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (eth_link_speed)
port->dev_conf.link_speeds = eth_link_speed;
+ if (max_rx_pkt_len)
+ port->dev_conf.rxmode.mtu = max_rx_pkt_len -
+ get_eth_overhead(&port->dev_info);
+
/* set flag to initialize port/queue */
port->need_reconfig = 1;
port->need_reconfig_queues = 1;
@@ -1592,14 +1608,20 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
*/
if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
- data_size = rx_mode.max_rx_pkt_len /
- port->dev_info.rx_desc_lim.nb_mtu_seg_max;
-
- if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) {
- mbuf_data_size[0] = data_size + RTE_PKTMBUF_HEADROOM;
- TESTPMD_LOG(WARNING,
- "Configured mbuf size of the first segment %hu\n",
- mbuf_data_size[0]);
+ uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
+ uint16_t mtu;
+
+ if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
+ uint16_t data_size = (mtu + eth_overhead) /
+ port->dev_info.rx_desc_lim.nb_mtu_seg_max;
+ uint16_t buffer_size = data_size + RTE_PKTMBUF_HEADROOM;
+
+ if (buffer_size > mbuf_data_size[0]) {
+ mbuf_data_size[0] = buffer_size;
+ TESTPMD_LOG(WARNING,
+ "Configured mbuf size of the first segment %hu\n",
+ mbuf_data_size[0]);
+ }
}
}
}
@@ -2735,6 +2757,7 @@ start_port(portid_t pid)
pi);
return -1;
}
+
/* configure port */
diag = eth_dev_configure_mp(pi, nb_rxq + nb_hairpinq,
nb_txq + nb_hairpinq,
@@ -3669,44 +3692,45 @@ rxtx_port_config(struct rte_port *port)
/*
* Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned if JUMBO_FRAME offload is not set.
+ * MTU is also aligned.
*
* port->dev_info should be set before calling this function.
*
+ * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
+ * ETH_OVERHEAD". This is useful to update flags but not MTU value.
+ *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid)
+update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
uint64_t rx_offloads;
- int ret;
+ uint16_t mtu, new_mtu;
bool on;
- /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
- if (port->dev_info.max_mtu != UINT16_MAX &&
- port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
- eth_overhead = port->dev_info.max_rx_pktlen -
- port->dev_info.max_mtu;
- else
- eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ eth_overhead = get_eth_overhead(&port->dev_info);
- rx_offloads = port->dev_conf.rxmode.offloads;
+ if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
+ printf("Failed to get MTU for port %u\n", portid);
+ return -1;
+ }
- /* Default config value is 0 to use PMD specific overhead */
- if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
- port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
+ if (max_rx_pktlen == 0)
+ max_rx_pktlen = mtu + eth_overhead;
+
+ rx_offloads = port->dev_conf.rxmode.offloads;
+ new_mtu = max_rx_pktlen - eth_overhead;
- if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
+ if (new_mtu <= RTE_ETHER_MTU) {
rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
on = false;
} else {
if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
fprintf(stderr,
"Frame size (%u) is not supported by port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len,
- portid);
+ max_rx_pktlen, portid);
return -1;
}
rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -3727,19 +3751,18 @@ update_jumbo_frame_offload(portid_t portid)
}
}
- /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
- * if unset do it here
- */
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- ret = eth_dev_set_mtu_mp(portid,
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
- if (ret)
- fprintf(stderr,
- "Failed to set MTU to %u for port %u\n",
- port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
- portid);
+ if (mtu == new_mtu)
+ return 0;
+
+ if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
+ fprintf(stderr,
+ "Failed to set MTU to %u for port %u\n",
+ new_mtu, portid);
+ return -1;
}
+ port->dev_conf.rxmode.mtu = new_mtu;
+
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 39f464f1ee16..42a597596fdd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -467,6 +467,8 @@ extern uint8_t bitrate_enabled;
extern struct rte_fdir_conf fdir_conf;
+extern uint32_t max_rx_pkt_len;
+
/*
* Configuration of packet segments used to scatter received packets
* if some of split features is configured.
@@ -1043,7 +1045,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid);
+int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..5388d18125a6 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
.split_hdr_size = 0,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index f120b2e3be24..189d2430f27e 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..e7bb0497b663 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = {
static struct rte_eth_conf default_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
@@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
static struct rte_eth_conf rss_pmd_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..a3b4f52c65e6 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 7355ec305916..9dad612058c6 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -335,7 +335,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index df23a5704dca..831bc564883a 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -545,7 +545,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
up to 10240 bytes can still reach the host interface.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f883f11a8b19..79bce2784195 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -166,7 +166,7 @@ Jumbo frame
Supports Rx jumbo frames.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.max_rx_pkt_len``.
+ ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..ed6afd62703d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -141,7 +141,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
up to 15364 bytes can still reach the host interface.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b76e979f4704..e4f58c899031 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -606,9 +606,9 @@ Driver options
and each stride receives one packet. MPRQ can improve throughput for
small-packet traffic.
- When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
+ When MPRQ is enabled, MTU can be larger than the size of
user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
- configure large stride size enough to accommodate max_rx_pkt_len as long as
+ configure large stride size enough to accommodate MTU as long as
device allows. Note that this can waste system memory compared to enabling Rx
scatter and multi-segment packet.
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index b1a868b054d1..8236cc3e93e0 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -157,7 +157,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
up to 32k bytes can still reach the host interface.
diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
index 12d43ce93e28..98f23a2b2a3d 100644
--- a/doc/guides/nics/thunderx.rst
+++ b/doc/guides/nics/thunderx.rst
@@ -392,7 +392,7 @@ Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
-is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
+is fixed and cannot be changed. So, even when the ``rxmode.mtu``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e656a293cac6..0b4d03fb961f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,31 +81,6 @@ Deprecation Notices
In 19.11 PMDs will still update the field even when the offload is not
enabled.
-* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
- replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
- The new ``mtu`` field will be used to configure the initial device MTU via
- ``rte_eth_dev_configure()`` API.
- Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
- the configured ``mtu`` value,
- and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
- be used to store the user configuration request.
- Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
- ``mtu`` field will be always valid.
- When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
- value will be used.
- ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
- either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
-
- An application may need to configure device for a specific Rx packet size, like for
- cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
- can't be bigger than Rx buffer size.
- To cover these cases an application needs to know the device packet overhead to be
- able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
- ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
- the device packet overhead can be calculated as:
- ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
-
* ethdev: Announce moving from dedicated modify function for each field,
to using the general ``rte_flow_modify_field`` action.
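The overhead calculation described in the notice deleted above remains useful for an application that must keep received frames within a single Rx buffer. A minimal sketch, assuming a hypothetical 'port_id' and a target Rx buffer size 'rx_buf_size' (needs rte_ethdev.h):

    struct rte_eth_dev_info dev_info;
    uint32_t overhead;
    uint16_t mtu;

    if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
        return -1;
    /* Device L2 overhead: frame bytes that are not MTU payload. */
    overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
    /* Largest MTU whose frames still fit into rx_buf_size bytes. */
    mtu = rx_buf_size - overhead;
    rte_eth_dev_set_mtu(port_id, mtu);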
diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst
index 812aaa87b05b..6c4c04e935e4 100644
--- a/doc/guides/sample_app_ug/flow_classify.rst
+++ b/doc/guides/sample_app_ug/flow_classify.rst
@@ -162,12 +162,7 @@ Forwarding application is shown below:
:end-before: >8 End of initializing a given port.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
-
-.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
- :language: c
- :start-after: Ethernet ports configured with default settings using struct. 8<
- :end-before: >8 End of configuration of Ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
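A rough illustration of that configuration path (not the sample's actual code), using the field this series introduces and assuming a valid 'port_id':

    struct rte_eth_conf port_conf = { 0 };

    /* Request an explicit initial MTU via the new rxmode.mtu field. */
    port_conf.rxmode.mtu = RTE_ETHER_MTU;
    if (rte_eth_dev_configure(port_id, 1 /* Rx queues */, 1 /* Tx queues */,
                              &port_conf) != 0)
        return -1;

Queue setup with rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() then follows as the guide describes.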
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 2d5cd5f1c0ba..56af5cd5b383 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -65,7 +65,7 @@ The application has a number of command line options::
[--lookup LOOKUP_METHOD]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--hash-entry-num]
[--ipv6]
@@ -95,9 +95,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
index 2cf6e4556f14..486247ac2e4f 100644
--- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
+++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
@@ -236,7 +236,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
+ ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
where,
@@ -255,8 +255,6 @@ where,
* --alg=<val>: optional, ACL classify method to use, one of:
``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst
index 03e9a85aa68c..0a3e0d44ecea 100644
--- a/doc/guides/sample_app_ug/l3_forward_graph.rst
+++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
@@ -48,7 +48,7 @@ The application has a number of command line options similar to l3fwd::
[-P]
--config(port,queue,lcore)[,(port,queue,lcore)]
[--eth-dest=X,MM:MM:MM:MM:MM:MM]
- [--enable-jumbo [--max-pkt-len PKTLEN]]
+ [--max-pkt-len PKTLEN]
[--no-numa]
[--per-port-pool]
@@ -63,9 +63,7 @@ Where,
* ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet destination for port X.
-* ``--enable-jumbo:`` Optional, enables jumbo frames.
-
-* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo, maximum packet length in decimal (64-9600).
+* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
* ``--no-numa:`` Optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst
index 0495314c87d5..8817eaadbfc3 100644
--- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
+++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
@@ -88,7 +88,7 @@ The application has a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-pkt-len PKTLEN]] [--no-numa]
+ ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [--no-numa]
where,
@@ -99,8 +99,6 @@ where,
* --config (port,queue,lcore)[,(port,queue,lcore)]: determines which queues from which ports are mapped to which cores.
-* --enable-jumbo: optional, enables jumbo frames
-
* --max-pkt-len: optional, maximum packet length in decimal (64-9600)
* --no-numa: optional, disables numa awareness
diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst
index 9b09838f6448..7d1bf6eaae8c 100644
--- a/doc/guides/sample_app_ug/performance_thread.rst
+++ b/doc/guides/sample_app_ug/performance_thread.rst
@@ -59,7 +59,7 @@ The application has a number of command line options::
-p PORTMASK [-P]
--rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
--tx(lcore,thread)[,(lcore,thread)]
- [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
+ [--max-pkt-len PKTLEN] [--no-numa]
[--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
[--parse-ptype]
@@ -80,8 +80,6 @@ Where:
the lcore the thread runs on, and the id of RX thread with which it is
associated. The parameters are explained below.
-* ``--enable-jumbo``: optional, enables jumbo frames.
-
* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
* ``--no-numa``: optional, disables numa awareness.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index f7bcd7ed2a1d..6d0de6440105 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -106,12 +106,7 @@ Forwarding application is shown below:
:end-before: >8 End of main functional part of port initialization.
The Ethernet ports are configured with default settings using the
-``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
-
-.. literalinclude:: ../../../examples/skeleton/basicfwd.c
- :language: c
- :start-after: Configuration of ethernet ports. 8<
- :end-before: >8 End of configuration of ethernet ports.
+``rte_eth_dev_configure()`` function.
For this example the ports are set up with 1 RX and 1 TX queue using the
``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..3f654c071566 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return 0;
}
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6cb8bb4338de..932ec90265cf 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1059,17 +1059,18 @@ static int
avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
struct avp_dev *avp)
{
- unsigned int max_rx_pkt_len;
+ unsigned int max_rx_pktlen;
- max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
- if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
- (max_rx_pkt_len > avp->host_mbuf_size)) {
+ if (max_rx_pktlen > avp->guest_mbuf_size ||
+ max_rx_pktlen > avp->host_mbuf_size) {
/*
* If the guest MTU is greater than either the host or guest
* buffers then chained mbufs have to be enabled in the TX
* direction. It is assumed that the application will not need
- * to send packets larger than their max_rx_pkt_len (MRU).
+ * to send packets larger than their MTU.
*/
return 1;
}
@@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
avp->max_rx_pkt_len,
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
avp->host_mbuf_size,
avp->guest_mbuf_size);
@@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* function; send it truncated to avoid the performance
* hit of having to manage returning the already
* allocated buffer to the free list. This should not
- * happen since the application should have set the
- * max_rx_pkt_len based on its MTU and it should be
+ * happen since the application should not send
+ * packets larger than its MTU and it should be
* policing its own packet sizes.
*/
txq->errors++;
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index ebd5411fddf3..76cd892eec7b 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
struct axgbe_port *pdata = dev->data->dev_private;
int ret;
struct rte_eth_dev_data *dev_data = dev->data;
- uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
+ uint16_t max_pkt_len;
dev->dev_ops = &axgbe_eth_dev_ops;
@@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
+
+ max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
max_pkt_len > pdata->rx_buf_size)
dev_data->scattered_rx = 1;
@@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (frame_size > AXGBE_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
val = 1;
@@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
val = 0;
}
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 8b0806016ff0..aff53fedb980 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -175,16 +175,12 @@ static int
bnx2x_dev_configure(struct rte_eth_dev *dev)
{
struct bnx2x_softc *sc = dev->data->dev_private;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
PMD_INIT_FUNC_TRACE(sc);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- dev->data->mtu = sc->mtu;
- }
+ sc->mtu = dev->data->dev_conf.rxmode.mtu;
if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ebda74d02f3a..890197d34037 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1161,13 +1161,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
- BNXT_NUM_VLANS;
- bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
- }
+ bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
+
return 0;
resource_error:
@@ -1205,6 +1200,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
*/
static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
uint16_t buf_size;
int i;
@@ -1219,7 +1215,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
RTE_PKTMBUF_HEADROOM);
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
+ if (eth_dev->data->mtu + overhead > buf_size)
return 1;
}
return 0;
@@ -3030,6 +3026,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id,
int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
{
+ uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
uint32_t rc = 0;
@@ -3043,8 +3040,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
if (!eth_dev->data->nb_rx_queues)
return rc;
- new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
- VLAN_TAG_SIZE * BNXT_NUM_VLANS;
+ new_pkt_size = new_mtu + overhead;
/*
* Disallow any MTU change that would require scattered receive support
@@ -3071,7 +3067,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
/* Is there a change in mtu setting? */
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
+ if (eth_dev->data->mtu == new_mtu)
return rc;
for (i = 0; i < bp->nr_vnics; i++) {
@@ -3093,9 +3089,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
}
}
- if (!rc)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
-
if (bnxt_hwrm_config_host_mtu(bp))
PMD_DRV_LOG(WARNING, "Failed to configure host MTU\n");
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 0ca34c604ba8..6d8b3c245a84 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1721,8 +1721,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_VLAN_FILTER;
- slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ slave_eth_dev->data->dev_conf.rxmode.mtu =
+ bonded_eth_dev->data->dev_conf.rxmode.mtu;
if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_JUMBO_FRAME)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 966bd23c7f98..c94fc505fef1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -209,7 +209,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
}
@@ -220,18 +220,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct cnxk_eth_rxq_sp *rxq;
- uint16_t mtu;
int rc;
rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
/* Setup scatter mode if needed by jumbo */
nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
- CNXK_NIX_MAX_VTAG_ACT_SIZE;
-
- rc = cnxk_nix_mtu_set(eth_dev, mtu);
+ rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
if (rc)
plt_err("Failed to set default MTU size, rc=%d", rc);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..695d0d6fd3e2 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
goto exit;
}
- frame_size += RTE_ETHER_CRC_LEN;
-
- if (frame_size > RTE_ETHER_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index cd9aa9f84b63..458111ae5b16 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return err;
/* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
+ if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
/* set to jumbo mode if needed */
- if (new_mtu > CXGBE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- if (!err)
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
-
return err;
}
@@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
const struct rte_eth_rxconf *rx_conf __rte_unused,
struct rte_mempool *mp)
{
- unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
struct rte_eth_dev_info dev_info;
@@ -682,7 +680,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->fl.size = temp_nb_desc;
/* Set to jumbo mode if necessary */
- if (pkt_len > CXGBE_ETH_MAX_LEN)
+ if (eth_dev->data->mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..91d6bb9bbcb0 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
unsigned int mtu;
int ret;
- mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
+ mtu = pi->eth_dev->data->mtu;
conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..830f5192474d 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
u32 wr_mid;
u64 cntrl, *end;
bool v6;
- u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
+ u32 max_pkt_len;
/* Reject xmit if queue is stopped */
if (unlikely(txq->flags & EQ_STOPPED))
@@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
return 0;
}
+ max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
(unlikely(m->pkt_len > max_pkt_len)))
goto out_free;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 840257c607dd..c244c6f5a422 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > DPAA_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
@@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
struct fman_if *fif = dev->process_private;
struct __fman_if *__fif;
struct rte_intr_handle *intr_handle;
+ uint32_t max_rx_pktlen;
int speed, duplex;
int ret;
@@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- DPAA_PMD_DEBUG("enabling jumbo");
-
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- DPAA_MAX_RX_PKT_LEN)
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
- else {
- DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
- "supported is %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- DPAA_MAX_RX_PKT_LEN);
- max_len = DPAA_MAX_RX_PKT_LEN;
- }
-
- fman_if_set_maxfrm(dev->process_private, max_len);
- dev->data->mtu = max_len
- - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
+ DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
+ "supported is %d",
+ max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
+ max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
+ fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
+
if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
DPAA_PMD_DEBUG("enabling scatter mode");
fman_if_set_sg(dev->process_private, 1);
@@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
u32 flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -EINVAL;
}
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
+ VLAN_TAG_SIZE;
/* Max packet can fit in single buffer */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
+ if (max_rx_pktlen <= buffsz) {
;
} else if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- buffsz * DPAA_SGT_MAX_ENTRIES) {
- DPAA_PMD_ERR("max RxPkt size %d too big to fit "
+ if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
+ DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
"MaxSGlist %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz * DPAA_SGT_MAX_ENTRIES);
+ max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
rte_errno = EOVERFLOW;
return -rte_errno;
}
@@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- buffsz - RTE_PKTMBUF_HEADROOM);
+ max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
}
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->valid = 1;
DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif),
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f2519f0fadf4..b2a0c2dd40c5 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
int tx_l3_csum_offload = false;
int tx_l4_csum_offload = false;
int ret, tc_index;
+ uint32_t max_rx_pktlen;
PMD_INIT_FUNC_TRACE();
@@ -559,25 +560,19 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
tx_offloads, dev_tx_offloads_nodis);
}
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
- ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
- priv->token, eth_conf->rxmode.max_rx_pkt_len
- - RTE_ETHER_CRC_LEN);
- if (ret) {
- DPAA2_PMD_ERR(
- "Unable to set mtu. check config");
- return ret;
- }
- dev->data->mtu =
- dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
- VLAN_TAG_SIZE;
- DPAA2_PMD_INFO("MTU configured for the device: %d",
- dev->data->mtu);
- } else {
- return -1;
+ max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
+ if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
+ ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
+ priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
+ if (ret != 0) {
+ DPAA2_PMD_ERR("Unable to set mtu. check config");
+ return ret;
}
+ DPAA2_PMD_INFO("MTU configured for the device: %d",
+ dev->data->mtu);
+ } else {
+ return -1;
}
if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
@@ -1470,15 +1465,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (frame_size > DPAA2_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 5a3af0da9028..c9692bd7b7bc 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1816,7 +1816,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -1827,8 +1827,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
return 0;
}
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 194da6b5b3f0..9b75b5d08b3a 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2677,9 +2677,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
}
static void
@@ -2695,10 +2693,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE);
+ E1000_WRITE_REG(hw, E1000_RLPML,
+ dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
}
static int
@@ -4396,7 +4392,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (frame_size > E1000_ETH_MAX_LEN) {
+ if (mtu > RTE_ETHER_MTU) {
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= E1000_RCTL_LPE;
@@ -4407,11 +4403,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- E1000_WRITE_REG(hw, E1000_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
return 0;
}
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index e04c2b41ab42..2fc27bbbc682 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -2312,6 +2312,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
@@ -2330,9 +2331,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
rctl |= E1000_RCTL_LPE;
/*
@@ -2410,8 +2410,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
@@ -2635,15 +2634,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
uint32_t srrctl;
uint16_t buf_size;
uint16_t rctl_bsize;
+ uint32_t max_len;
uint16_t i;
int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
/* setup MTU */
- e1000_rlpml_set_vf(hw,
- (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE));
+ max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
+ e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -2700,8 +2699,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
E1000_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE) > buf_size){
+ if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG,
"forcing scatter mode");
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b628736..e2f7213acb84 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -677,26 +677,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
return rc;
}
-static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
-{
- uint32_t max_frame_len = adapter->max_mtu;
-
- if (adapter->edev_data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- max_frame_len =
- adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
-
- return max_frame_len;
-}
-
static int ena_check_valid_conf(struct ena_adapter *adapter)
{
- uint32_t max_frame_len = ena_get_mtu_conf(adapter);
+ uint32_t mtu = adapter->edev_data->mtu;
- if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_INIT_LOG(ERR,
"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
- max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return ENA_COM_UNSUPPORTED;
}
@@ -869,10 +857,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
ena_dev = &adapter->ena_dev;
ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
- if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
+ if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
PMD_DRV_LOG(ERR,
"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
- mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
+ mtu, adapter->max_mtu, ENA_MIN_MTU);
return -EINVAL;
}
@@ -1943,7 +1931,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
- dev_info->max_rx_pktlen = adapter->max_mtu;
+ dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ dev_info->min_mtu = ENA_MIN_MTU;
+ dev_info->max_mtu = adapter->max_mtu;
dev_info->max_mac_addrs = 1;
dev_info->max_rx_queues = adapter->max_num_io_queues;
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 784ed391b749..16c83914e8ce 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -681,7 +681,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (frame_size > ENETC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads &=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
@@ -691,8 +691,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
/*setting the MTU*/
enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
@@ -709,23 +707,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
uint64_t rx_offloads = eth_conf->rxmode.offloads;
uint32_t checksum = L3_CKSUM | L4_CKSUM;
+ uint32_t max_len;
PMD_INIT_FUNC_TRACE();
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- uint32_t max_len;
-
- max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
-
- enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
- ENETC_SET_MAXFRM(max_len));
- enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
- ENETC_MAC_MAXFRM_SIZE);
- enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
- 2 * ENETC_MAC_MAXFRM_SIZE);
- dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
- RTE_ETHER_CRC_LEN;
- }
+ max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
+ enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
+ enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
int config;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index a1a53248f63b..8df7332bc5e0 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -459,7 +459,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
* max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
* a hint to the driver to size receive buffers accordingly so that
* larger-than-vnic-mtu packets get truncated.. For DPDK, we let
- * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
+ * the user decide the buffer size via rxmode.mtu, basically
* ignoring vNIC mtu.
*/
device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..dfc7f5d1f94f 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
struct rq_enet_desc *rqd = rq->ring.descs;
unsigned i;
dma_addr_t dma_addr;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint16_t rq_buf_len;
if (!rq->in_use)
@@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
/*
* If *not* using scatter and the mbuf size is greater than the
- * requested max packet size (max_rx_pkt_len), then reduce the
- * posted buffer size to max_rx_pkt_len. HW still receives packets
- * larger than max_rx_pkt_len, but they will be truncated, which we
+ * requested max packet size (mtu + eth overhead), then reduce the
+ * posted buffer size to max packet size. HW still receives packets
+ * larger than max packet size, but they will be truncated, which we
* drop in the rx handler. Not ideal, but better than returning
* large packets when the user is not expecting them.
*/
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
- if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
- rq_buf_len = max_rx_pkt_len;
+ if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
+ rq_buf_len = max_rx_pktlen;
for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
mb = rte_mbuf_raw_alloc(rq->mp);
if (mb == NULL) {
@@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
unsigned int mbuf_size, mbufs_per_pkt;
unsigned int nb_sop_desc, nb_data_desc;
uint16_t min_sop, max_sop, min_data, max_data;
- uint32_t max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
/*
* Representor uses a reserved PF queue. Translate representor
@@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM);
- /* max_rx_pkt_len includes the ethernet header and CRC. */
- max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ /* max_rx_pktlen includes the ethernet header and CRC. */
+ max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
if (enic->rte_dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_SCATTER) {
dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
/* ceil((max pkt len)/mbuf_size) */
- mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
+ mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
} else {
dev_info(enic, "Scatter rx mode disabled\n");
mbufs_per_pkt = 1;
- if (max_rx_pkt_len > mbuf_size) {
+ if (max_rx_pktlen > mbuf_size) {
dev_warning(enic, "The maximum Rx packet size (%u) is"
" larger than the mbuf size (%u), and"
" scatter is disabled. Larger packets will"
" be truncated.\n",
- max_rx_pkt_len, mbuf_size);
+ max_rx_pktlen, mbuf_size);
}
}
@@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
rq_sop->data_queue_enable = 1;
rq_data->in_use = 1;
/*
- * HW does not directly support rxmode.max_rx_pkt_len. HW always
+ * HW does not directly limit Rx frames to the configured MTU. HW always
* receives packet sizes up to the "max" MTU.
* If not using scatter, we can achieve the effect of dropping
* larger packets by reducing the size of posted buffers.
* See enic_alloc_rx_queue_mbufs().
*/
- if (max_rx_pkt_len <
- enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
- dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
- " when scatter rx mode is in use.\n");
+ if (enic->rte_dev->data->mtu < enic->max_mtu) {
+ dev_warning(enic,
+ "mtu is ignored when scatter rx mode is in use.\n");
}
} else {
dev_info(enic, "Rq %u Scatter rx mode not being used\n",
@@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
if (mbufs_per_pkt > 1) {
dev_info(enic, "For max packet size %u and mbuf size %u valid"
" rx descriptor range is %u to %u\n",
- max_rx_pkt_len, mbuf_size, min_sop + min_data,
+ max_rx_pktlen, mbuf_size, min_sop + min_data,
max_sop + max_data);
}
dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
@@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
"MTU (%u) is greater than value configured in NIC (%u)\n",
new_mtu, config_mtu);
- /* Update the MTU and maximum packet length */
- eth_dev->data->mtu = new_mtu;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
- enic_mtu_to_max_rx_pktlen(new_mtu);
-
/*
* If the device has not started (enic_enable), nothing to do.
* Later, enic_enable() will set up RQs reflecting the new maximum
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index c436263c7c9c..400e77ec6200 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
FM10K_SRRCTL_LOOPBACK_SUPPRESS);
/* It adds dual VLAN length for supporting dual VLAN */
- if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
+ if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
uint32_t reg;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index cd4dad8588f3..aef8adc2e1e0 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
/* mtu size is 256~9600 */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- HINIC_MAX_JUMBO_FRAME_SIZE) {
+ if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ HINIC_MIN_FRAME_SIZE ||
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ HINIC_MAX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR,
- "Max rx pkt len out of range, get max_rx_pkt_len:%d, "
+ "Packet length out of range, get packet length:%d, "
"expect between %d and %d",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- nic_dev->mtu_size =
- HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
/* rss template */
err = hinic_config_mq_mode(dev, TRUE);
@@ -1534,7 +1534,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- uint32_t frame_size;
int ret = 0;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
@@ -1552,16 +1551,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- /* update max frame size */
- frame_size = HINIC_MTU_TO_PKTLEN(mtu);
- if (frame_size > HINIC_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index b98a46f73e8c..e1fcba9e9482 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2366,41 +2366,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
return 0;
}
-static int
-hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
-{
- struct hns3_adapter *hns = dev->data->dev_private;
- struct hns3_hw *hw = &hns->hw;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
- int ret;
-
- if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
- return 0;
-
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater than %u "
- "and no more than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- return -EINVAL;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3_dev_mtu_set(dev, mtu);
- if (ret)
- return ret;
- dev->data->mtu = mtu;
-
- return 0;
-}
-
static int
hns3_setup_dcb(struct rte_eth_dev *dev)
{
@@ -2515,8 +2480,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- ret = hns3_refresh_mtu(dev, conf);
- if (ret)
+ ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
goto cfg_err;
ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
@@ -2611,7 +2576,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
+ is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2632,7 +2597,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index e896de58a422..b10fa2d5ad8a 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
uint16_t nb_rx_q = dev->data->nb_rx_queues;
uint16_t nb_tx_q = dev->data->nb_tx_queues;
struct rte_eth_rss_conf rss_conf;
- uint32_t max_rx_pkt_len;
- uint16_t mtu;
bool gro_en;
int ret;
@@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
goto cfg_err;
}
- /*
- * If jumbo frames are enabled, MTU needs to be refreshed
- * according to the maximum RX packet length.
- */
- if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
- if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
- max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
- hns3_err(hw, "maximum Rx packet length must be greater "
- "than %u and less than %u when jumbo frame enabled.",
- (uint16_t)HNS3_DEFAULT_FRAME_LEN,
- (uint16_t)HNS3_MAX_FRAME_LEN);
- ret = -EINVAL;
- goto cfg_err;
- }
-
- mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
- ret = hns3vf_dev_mtu_set(dev, mtu);
- if (ret)
- goto cfg_err;
- dev->data->mtu = mtu;
- }
+ ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
+ if (ret != 0)
+ goto cfg_err;
ret = hns3vf_dev_configure_vlan(dev);
if (ret)
@@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 02040b84f3c4..602548a4f25b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1747,18 +1747,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
uint16_t nb_desc)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
- struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
uint16_t min_vec_bds;
/*
* HNS3 hardware network engine set scattered as default. If the driver
* is not work in scattered mode and the pkts greater than buf_size
- * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
+ * but smaller than frame size will be distributed to multiple BDs.
* Driver cannot handle this situation.
*/
- if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
- hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
+ if (!hw->data->scattered_rx && frame_size > buf_size) {
+ hns3_err(hw, "frame size is not allowed to be set greater "
"than rx_buf_len if scattered is off.");
return -EINVAL;
}
@@ -1970,7 +1970,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
}
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
- dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
+ dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
dev->data->scattered_rx = true;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index f856bbed0476..57abc2cf747d 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11437,14 +11437,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > I40E_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return ret;
}
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 3df4e3de187c..9b030198e537 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2899,8 +2899,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
}
rxq->max_pkt_len =
- RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
- rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
+ RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
+ data->mtu + I40E_ETH_OVERHEAD);
if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 18428049d805..5fc663f6bd46 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_dev_data *dev_data = dev->data;
uint16_t buf_size, max_pkt_len;
+ uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
/* Calculate the maximum packet length allowed */
max_pkt_len = RTE_MIN((uint32_t)
rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
@@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
adapter->stopped = 0;
- vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
dev->data->nb_tx_queues);
num_queue_pairs = vf->num_queue_pairs;
@@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IAVF_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return ret;
}
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 14f4fe80fef2..00d9e873e64f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ dev->data->mtu + ICE_ETH_OVERHEAD);
/* Check if the jumbo frame and maximum packet length are set
* correctly.
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 170a12759d52..878b3b1410c9 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
pf->adapter_stopped = false;
/* Set the max frame size to default value*/
- max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
- pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
+ max_frame_size = pf->dev_data->mtu ?
+ pf->dev_data->mtu + ICE_ETH_OVERHEAD :
ICE_FRAME_SIZE_MAX;
/* Set the max frame size to HW*/
@@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > ICE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
return 0;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index a20f4c751a1b..220537741d6c 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
+ uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
/* Set buffer size as the head split is disabled. */
buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
RTE_PKTMBUF_HEADROOM);
rxq->rx_hdr_len = 0;
rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
- rxq->max_pkt_len = RTE_MIN((uint32_t)
- ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
- dev_data->dev_conf.rxmode.max_rx_pkt_len);
+ rxq->max_pkt_len =
+ RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+ frame_size);
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
@@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
return -EINVAL;
}
- buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
- RTE_PKTMBUF_HEADROOM);
-
/* Check if scattered RX needs to be used. */
- if (rxq->max_pkt_len > buf_size)
+ if (frame_size > buf_size)
dev_data->scattered_rx = 1;
rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 03a77b377182..2b1f2f5a39d9 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -20,13 +20,6 @@
#define IGC_INTEL_VENDOR_ID 0x8086
-/*
- * The overhead from MTU to max frame size.
- * Considering VLAN so tag needs to be counted.
- */
-#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
- RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
-
#define IGC_FC_PAUSE_TIME 0x0680
#define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
#define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
@@ -1601,21 +1594,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* switch to jumbo mode if needed */
if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl |= IGC_RCTL_LPE;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rctl &= ~IGC_RCTL_LPE;
}
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
return 0;
}
@@ -2485,6 +2472,7 @@ static int
igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2493,23 +2481,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
- RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
+ if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
+ frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
@@ -2518,6 +2497,7 @@ static int
igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
+ uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
uint32_t ctrl_ext;
ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
@@ -2526,23 +2506,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
return 0;
- if ((dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
- goto write_ext_vlan;
-
/* Update maximum packet length */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
+ if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len +
- VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
+ frame_size, MAX_RX_JUMBO_FRAME_SIZE);
return -EINVAL;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
- IGC_WRITE_REG(hw, IGC_RLPML,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
-write_ext_vlan:
IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..b3473b5b1646 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -35,6 +35,13 @@ extern "C" {
#define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
#define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
+/*
+ * The overhead from MTU to max frame size.
+ * Considering VLAN so tag needs to be counted.
+ */
+#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \
+ RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
+
/*
* TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
* multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
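The per-driver changes in this series repeat the same pattern: the frame
length programmed to the hardware is derived as 'dev->data->mtu' plus a
per-device overhead macro, and that value is compared against the mbuf
data room to decide whether scattered Rx is needed. Below is a minimal
sketch of that check, using hypothetical names (ETH_OVERHEAD_EXAMPLE,
example_needs_scattered_rx) rather than any particular driver's symbols:

    #include <stdbool.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Hypothetical overhead: L2 header + CRC + two VLAN tags (QinQ). */
    #define ETH_OVERHEAD_EXAMPLE \
            (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 2 * RTE_VLAN_HLEN)

    /* Return true when a frame of the configured MTU cannot fit into a
     * single mbuf from 'mp', i.e. scattered Rx has to be enabled. */
    static bool
    example_needs_scattered_rx(uint16_t mtu, struct rte_mempool *mp)
    {
            uint32_t max_rx_pktlen = mtu + ETH_OVERHEAD_EXAMPLE;
            uint16_t buf_size = rte_pktmbuf_data_room_size(mp) -
                                RTE_PKTMBUF_HEADROOM;

            return max_rx_pktlen > buf_size;
    }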
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 383bf834f3b6..9b7a9d953bff 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1062,7 +1062,7 @@ igc_rx_init(struct rte_eth_dev *dev)
struct igc_rx_queue *rxq;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen;
uint32_t rctl;
uint32_t rxcsum;
uint16_t buf_size;
@@ -1080,17 +1080,17 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
rctl |= IGC_RCTL_LPE;
-
- /*
- * Set maximum packet length by default, and might be updated
- * together with enabling/disabling dual VLAN.
- */
- IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
- } else {
+ else
rctl &= ~IGC_RCTL_LPE;
- }
+
+ max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
+ /*
+ * Set maximum packet length by default, and might be updated
+ * together with enabling/disabling dual VLAN.
+ */
+ IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
/* Configure and enable each RX queue. */
rctl_bsize = 0;
@@ -1149,7 +1149,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
+ if (max_rx_pktlen > buf_size)
dev->data->scattered_rx = 1;
} else {
/*
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 344c076f309a..d5d610c80bcd 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -343,25 +343,15 @@ static int
ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
- uint32_t max_frame_size;
int err;
IONIC_PRINT_CALL();
/*
* Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
- * is done by the the API.
+ * is done by the API.
*/
- /*
- * Max frame size is MTU + Ethernet header + VLAN + QinQ
- * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
- */
- max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
-
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
- return -EINVAL;
-
err = ionic_lif_change_mtu(lif, mtu);
if (err)
return err;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 67631a5813b7..4d16a39c6b6d 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -771,7 +771,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
struct rte_mbuf *rxm, *rxm_seg;
uint32_t max_frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint64_t pkt_flags = 0;
uint32_t pkt_type;
struct ionic_rx_stats *stats = &rxq->stats;
@@ -1014,7 +1014,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
int __rte_cold
ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
{
- uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
struct ionic_rx_qcq *rxq;
int err;
@@ -1128,7 +1128,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
{
struct ionic_rx_qcq *rxq = rx_queue;
uint32_t frame_size =
- rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
struct ionic_rx_service service_cb_arg;
service_cb_arg.rx_pkts = rx_pkts;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 694435a4ae24..0d1aaa6449b9 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2791,14 +2791,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (frame_size > IPN3KE_ETH_MAX_LEN)
- dev_data->dev_conf.rxmode.offloads |=
- (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+ if (mtu > RTE_ETHER_MTU)
+ dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
- dev_data->dev_conf.rxmode.offloads &=
- (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
-
- dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4dbe049fe986..29456ab59502 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5165,7 +5165,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct ixgbe_hw *hw;
struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
- struct rte_eth_dev_data *dev_data = dev->data;
int ret;
ret = ixgbe_dev_info_get(dev, &dev_info);
@@ -5179,9 +5178,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
*/
- if (dev_data->dev_started && !dev_data->scattered_rx &&
- (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -5190,23 +5189,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (frame_size > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU) {
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
} else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
}
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
return 0;
@@ -6078,12 +6072,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
* set as 0x4.
*/
if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_JUMBO_FRAME);
+ (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
- IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
- IXGBE_MMW_SIZE_DEFAULT);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
/* Set RTTBCNRC of queue X */
IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
@@ -6355,8 +6347,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (mtu < RTE_ETHER_MIN_MTU ||
- max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
+ if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
return -EINVAL;
/* If device is started, refuse mtu that requires the support of
@@ -6364,7 +6355,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
*/
if (dev_data->dev_started && !dev_data->scattered_rx &&
(max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
- dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+ dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
PMD_INIT_LOG(ERR, "Stop port first.");
return -EINVAL;
}
@@ -6381,8 +6372,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (ixgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
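Note that, where the old code compared 'frame_size' against a driver
MAX_LEN constant, the new 'mtu > RTE_ETHER_MTU' check is numerically the
same whenever that constant equals RTE_ETHER_MTU plus the overhead used
to build 'frame_size': with plain Ethernet overhead,
RTE_ETHER_MAX_LEN (1518) = RTE_ETHER_MTU (1500) + RTE_ETHER_HDR_LEN (14)
+ RTE_ETHER_CRC_LEN (4), so 'frame_size > 1518' and 'mtu > 1500' select
jumbo mode for exactly the same values.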
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index a1529b4d5659..4ceb5bf322d8 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -573,8 +573,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* if PF has jumbo frames enabled which means legacy
* VFs are disabled.
*/
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ if (dev->data->mtu > RTE_ETHER_MTU)
break;
/* fall through */
default:
@@ -584,8 +583,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
* legacy VFs.
*/
if (max_frame > IXGBE_ETH_MAX_LEN ||
- dev->data->dev_conf.rxmode.max_rx_pkt_len >
- IXGBE_ETH_MAX_LEN)
+ dev->data->mtu > RTE_ETHER_MTU)
return -1;
break;
}
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 4d3d30b6622e..575cc8c4ffe5 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -5047,6 +5047,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
uint16_t buf_size;
uint16_t i;
struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
int rc;
PMD_INIT_FUNC_TRACE();
@@ -5082,7 +5083,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
- maxfrs |= (rx_conf->max_rx_pkt_len << 16);
+ maxfrs |= (frame_size << 16);
IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
} else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
@@ -5156,8 +5157,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
IXGBE_SRRCTL_BSIZEPKT_SHIFT);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
+ if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -5637,6 +5637,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
struct ixgbe_hw *hw;
struct ixgbe_rx_queue *rxq;
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
uint64_t bus_addr;
uint32_t srrctl, psrtype = 0;
uint16_t buf_size;
@@ -5673,10 +5674,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
* ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
* VF packets received can work in all cases.
*/
- if (ixgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ frame_size);
return -EINVAL;
}
@@ -5735,8 +5735,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
- 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
+ (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
dev->data->scattered_rx = 1;
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 41b3f63ac059..3fac28dcfcf9 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
- uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (frame_len > LIO_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
eth_dev->data->dev_conf.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
else
eth_dev->data->dev_conf.rxmode.offloads &=
~DEV_RX_OFFLOAD_JUMBO_FRAME;
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
- eth_dev->data->mtu = mtu;
-
return 0;
}
@@ -1402,8 +1398,6 @@ lio_sync_link_state_check(void *eth_dev)
static int
lio_dev_start(struct rte_eth_dev *eth_dev)
{
- uint16_t mtu;
- uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
struct lio_device *lio_dev = LIO_DEV(eth_dev);
uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
int ret = 0;
@@ -1446,15 +1440,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
goto dev_mtu_set_error;
}
- mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN);
- if (mtu < RTE_ETHER_MIN_MTU)
- mtu = RTE_ETHER_MIN_MTU;
-
- if (eth_dev->data->mtu != mtu) {
- ret = lio_dev_mtu_set(eth_dev, mtu);
- if (ret)
- goto dev_mtu_set_error;
- }
+ ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
+ if (ret != 0)
+ goto dev_mtu_set_error;
return 0;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 2b75c07fad75..1801d87334a1 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
int ret;
uint32_t crc_present;
uint64_t offloads;
+ uint32_t max_rx_pktlen;
offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
@@ -829,13 +830,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->rx_queues[idx] = rxq;
/* Enable scattered packets support for this queue if necessary. */
MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- uint32_t size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
uint32_t sges_n;
/*
@@ -847,21 +846,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Make sure sges_n did not overflow. */
size = mb_len * (1 << rxq->sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pktlen) {
rte_errno = EOVERFLOW;
ERROR("%p: too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
(void *)dev,
- 1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ 1 << sges_n, max_rx_pktlen);
goto error;
}
} else {
WARN("%p: the requested maximum Rx packet size (%u) is"
" larger than a single mbuf (%u) and scattered"
" mode has not been requested",
- (void *)dev,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ (void *)dev, max_rx_pktlen,
mb_len - RTE_PKTMBUF_HEADROOM);
}
DEBUG("%p: maximum number of segments per packet: %u",
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b68443bed509..0655965c0fb9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1327,10 +1327,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
- unsigned int max_rx_pkt_len = lro_on_queue ?
+ unsigned int max_rx_pktlen = lro_on_queue ?
dev->data->dev_conf.rxmode.max_lro_pkt_size :
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
+ RTE_ETHER_CRC_LEN;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
@@ -1369,7 +1370,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* needed to handle max size packets, replace zero length
* with the buffer length from the pool.
*/
- tail_len = max_rx_pkt_len;
+ tail_len = max_rx_pktlen;
do {
struct mlx5_eth_rxseg *hw_seg =
&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1407,7 +1408,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- tmpl->rxq.rxseg_n, max_rx_pkt_len,
+ tmpl->rxq.rxseg_n, max_rx_pktlen,
MLX5_MAX_RXQ_NSEG);
rte_errno = ENOTSUP;
goto error;
@@ -1432,7 +1433,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
" configured and no enough mbuf space(%u) to contain "
"the maximum RX packet length(%u) with head-room(%u)",
- dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ dev->data->port_id, idx, mb_len, max_rx_pktlen,
RTE_PKTMBUF_HEADROOM);
rte_errno = ENOSPC;
goto error;
@@ -1451,7 +1452,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
* following conditions are met:
* - MPRQ is enabled.
* - The number of descs is more than the number of strides.
- * - max_rx_pkt_len plus overhead is less than the max size
+ * - max_rx_pktlen plus overhead is less than the max size
* of a stride or mprq_stride_size is specified by a user.
* Need to make sure that there are enough strides to encap
* the maximum packet size in case mprq_stride_size is set.
@@ -1475,7 +1476,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
!!(offloads & DEV_RX_OFFLOAD_SCATTER);
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
config->mprq.max_memcpy_len);
- max_lro_size = RTE_MIN(max_rx_pkt_len,
+ max_lro_size = RTE_MIN(max_rx_pktlen,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
@@ -1484,9 +1485,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (tmpl->rxq.rxseg_n == 1) {
- MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+ MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
tmpl->rxq.sges_n = 0;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int sges_n;
@@ -1508,13 +1509,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u, the maximum"
" supported are %u", dev->data->port_id,
- 1 << sges_n, max_rx_pkt_len,
+ 1 << sges_n, max_rx_pktlen,
1u << MLX5_MAX_LOG_RQ_SEGS);
rte_errno = ENOTSUP;
goto error;
}
tmpl->rxq.sges_n = sges_n;
- max_lro_size = max_rx_pkt_len;
+ max_lro_size = max_rx_pktlen;
}
if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 90f246636916..2a0288087357 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_NETA_ETH_HDRS_LEN;
-
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
priv->multiseg = 1;
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
/* It is OK. New MTU will be set later on mvneta_dev_start */
return 0;
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index 2d61930382cb..9836bb071a82 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mvneta_priv *priv = dev->data->dev_private;
struct mvneta_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
- if (frame_size < max_rx_pkt_len) {
+ if (frame_size < max_rx_pktlen) {
MVNETA_LOG(ERR,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 65d011300a97..44761b695a8d 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
return -EINVAL;
}
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- MRVL_PP2_ETH_HDRS_LEN;
- if (dev->data->mtu > priv->max_mtu) {
- MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
- dev->data->mtu,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- priv->max_mtu);
- return -EINVAL;
- }
+ if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+ MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+ dev->data->dev_conf.rxmode.mtu,
+ priv->max_mtu);
+ return -EINVAL;
}
if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- dev->data->mtu = mtu;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
if (!priv->ppio)
return 0;
@@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
struct mrvl_priv *priv = dev->data->dev_private;
struct mrvl_rxq *rxq;
uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
- uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
int ret, tc, inq;
uint64_t offloads;
@@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return -EFAULT;
}
- frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
- MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
- if (frame_size < max_rx_pkt_len) {
+ frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+ if (frame_size < max_rx_pktlen) {
MRVL_LOG(WARNING,
"Mbuf size must be increased to %u bytes to hold up "
"to %u bytes of data.",
- buf_size + max_rx_pkt_len - frame_size,
- max_rx_pkt_len);
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
- MRVL_LOG(INFO, "Setting max rx pkt len to %u",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pktlen + buf_size - frame_size,
+ max_rx_pktlen);
+ dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+ MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
}
if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 4395a09c597d..928b4983a07a 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
}
if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = rxmode->max_rx_pkt_len;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
/* switch to jumbo mode if needed */
- if ((uint32_t)mtu > RTE_ETHER_MTU)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
/* writing to configuration space */
- nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
hw->mtu = mtu;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 096b5f6ae3da..67c7d8929eb2 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > OCCTX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
/* Setup scatter mode if needed by jumbo */
- if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (data->mtu > buffsz) {
nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
evdev_priv->rx_offload_flags = nic->rx_offload_flags;
evdev_priv->tx_offload_flags = nic->tx_offload_flags;
- /* Setup MTU based on max_rx_pkt_len */
- nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+ /* Setup MTU */
+ nic->mtu = data->mtu;
return 0;
}
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
octeontx_recheck_rx_offloads(rxq);
}
- /* Setting up the mtu based on max_rx_pkt_len */
+ /* Setting up the mtu */
ret = octeontx_dev_mtu_set(dev, nic->mtu);
if (ret) {
octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index e0eb2b030788..9c5d748e8575 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -913,7 +913,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
mbp_priv = rte_mempool_get_priv(rxq->pool);
buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
- if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+ if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 3a763f691ba4..3c591c8fbaa0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (frame_size > NIX_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* Update max_rx_pkt_len */
- data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
return rc;
}
@@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
{
struct rte_eth_dev_data *data = eth_dev->data;
struct otx2_eth_rxq *rxq;
- uint16_t mtu;
int rc;
rxq = data->rx_queues[0];
@@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
/* Setup scatter mode if needed by jumbo */
otx2_nix_enable_mseg_on_jumbo(rxq);
- /* Setup MTU based on max_rx_pkt_len */
- mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
- rc = otx2_nix_mtu_set(eth_dev, mtu);
+ rc = otx2_nix_mtu_set(eth_dev, data->mtu);
if (rc)
otx2_err("Failed to set default MTU size %d", rc);
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index d6a69449073b..4cc002ee8fab 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -670,16 +670,11 @@ pfe_link_up(struct rte_eth_dev *dev)
static int
pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- int ret;
struct pfe_eth_priv_s *priv = dev->data->dev_private;
uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
/*TODO Support VLAN*/
- ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
- if (!ret)
- dev->data->mtu = mtu;
-
- return ret;
+ return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
}
/* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index fd8c62a1826b..a1cf913dc8ed 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
return -ENOMEM;
}
- /* If jumbo enabled adjust MTU */
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- eth_dev->data->mtu =
- eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
- RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
eth_dev->data->scattered_rx = 1;
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
- uint32_t max_rx_pkt_len;
uint32_t frame_size;
uint16_t bufsz;
bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
DP_ERR(edev, "Error during getting ethernet device info\n");
return rc;
}
- max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
- frame_size = max_rx_pkt_len;
+
+ frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (frame_size > QEDE_ETH_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->dev_started = 1;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
return 0;
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 35cde561ba59..c2263787b4ec 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
struct qede_rx_queue *rxq;
- uint16_t max_rx_pkt_len;
+ uint16_t max_rx_pktlen;
uint16_t bufsz;
int rc;
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
dev->data->rx_queues[qid] = NULL;
}
- max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+ max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
/* Fix up RX buffer size */
bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
/* cache align the mbuf size to simplfy rx_buf_size calculation */
bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
- (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+ (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
if (!dev->data->scattered_rx) {
DP_INFO(edev, "Forcing scatter-gather mode\n");
dev->data->scattered_rx = 1;
}
}
- rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+ rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
if (rc < 0)
return rc;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8ec56a9ed57d..d3b12675e5bf 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1142,15 +1142,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
/*
* The driver does not use it, but other PMDs update jumbo frame
- * flag and max_rx_pkt_len when MTU is set.
+ * flag when MTU is set.
*/
if (mtu > RTE_ETHER_MTU) {
struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
- dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index 7a3f59a1123d..5320d8903dac 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
{
const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
struct sfc_port *port = &sa->port;
- const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
sfc_log_init(sa, "entry");
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- port->pdu = rxmode->max_rx_pkt_len;
- else
- port->pdu = EFX_MAC_PDU(dev_data->mtu);
+ port->pdu = EFX_MAC_PDU(dev_data->mtu);
return 0;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d03..e4f1ad45219e 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct pmd_internals *pmd = dev->data->dev_private;
struct ifreq ifr = { .ifr_mtu = mtu };
- int err = 0;
- err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
- if (!err)
- dev->data->mtu = mtu;
-
- return err;
+ return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
}
static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 2103f96d5eeb..e1b9e276af90 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (frame_size > NIC_HW_L2_MAX_LEN)
+ if (mtu > RTE_ETHER_MTU)
rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
- /* Update max_rx_pkt_len */
- rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
nic->mtu = mtu;
for (i = 0; i < nic->sqs_count; i++)
@@ -1723,16 +1721,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
}
/* Setup scatter mode if needed by jumbo */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * VLAN_TAG_SIZE > buffsz)
+ if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
dev->data->scattered_rx = 1;
if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
dev->data->scattered_rx = 1;
- /* Setup MTU based on max_rx_pkt_len or default */
- mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
- dev->data->dev_conf.rxmode.max_rx_pkt_len
- - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+ /* Setup MTU */
+ mtu = dev->data->mtu;
if (nicvf_dev_set_mtu(dev, mtu)) {
PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index dc822d69f742..45afe872bde0 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,8 +3482,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 528f11439bbd..fd65d89ffe7d 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
#define TXGBE_5TUPLE_MAX_PRI 7
#define TXGBE_5TUPLE_MIN_PRI 1
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
#define TXGBE_RSS_OFFLOAD_ALL ( \
ETH_RSS_IPV4 | \
ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a88770..43dc0ed39b75 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
if (txgbevf_rlpml_set_vf(hw, max_frame))
return -EINVAL;
- /* update max frame size */
- dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
return 0;
}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 596142378ad9..5cd6ecc2a399 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4326,13 +4326,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
- } else {
- wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
- TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
- }
+ wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+ TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
/*
* If loopback mode is configured, set LPBK bit.
@@ -4394,8 +4389,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
/* It adds dual VLAN length for supporting dual VLAN */
- if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
- 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+ if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+ 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
dev->data->scattered_rx = 1;
if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4847,9 +4842,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
* VF packets received can work in all cases.
*/
if (txgbevf_rlpml_set_vf(hw,
- (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
+ (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ dev->data->mtu + TXGBE_ETH_OVERHEAD);
return -EINVAL;
}
@@ -4911,7 +4906,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
/* It adds dual VLAN length for supporting dual VLAN */
- (rxmode->max_rx_pkt_len +
+ (dev->data->mtu + TXGBE_ETH_OVERHEAD +
2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index aff791fbd0c0..a28f9607277e 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -924,7 +924,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
hw->max_rx_pkt_len = frame_size;
- dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
return 0;
}
@@ -2107,14 +2106,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
return ret;
}
- if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
+ if (rxmode->mtu > hw->max_mtu)
req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
- if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
- else
- hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
+ hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM))
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index adbd40808396..68e3c13730ad 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 7adaa93cad5c..6352a715c0d9 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index d0f40a1fb4bc..8c4a8feec0c2 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 5ed0dc73ec60..e26be8edf28f 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ab8c6d6a0dad..476b147bdfcc 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
static const struct rte_eth_conf port_conf_default = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index 65c1d85cf2fb..8a43f6ac0f92 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -59,14 +59,6 @@ static struct{
} parm_config;
const char cb_port_delim[] = ":";
-/* Ethernet ports configured with default settings using struct. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of Ethernet ports. */
-
/* Creation of flow classifier object. 8< */
struct flow_classifier {
struct rte_flow_classifier *cls;
@@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
static inline int
port_init(uint8_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
struct rte_ether_addr addr;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
@@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index ff36aa7f1e7b..ccfee585f850 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
static const struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index a7f40970f27f..754fee5a5780 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -146,7 +146,8 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_SCATTER |
@@ -918,9 +919,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
@@ -963,8 +964,7 @@ main(int argc, char **argv)
}
/* set the mtu to the maximum received packet size */
- ret = rte_eth_dev_set_mtu(portid,
- local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD);
+ ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu);
if (ret < 0) {
printf("\n");
rte_exit(EXIT_FAILURE, "Set MTU failed: "
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..9ba02e687adb 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index d611c7d01609..39e12fea47f4 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -162,7 +162,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
DEV_RX_OFFLOAD_JUMBO_FRAME),
@@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
/* mbufs stored int the gragment table. 8< */
nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM;
- nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
+ nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
+ + BUF_SIZE - 1) / BUF_SIZE;
nb_mbuf *= 2; /* ipv4 and ipv6 */
nb_mbuf += nb_rxd + nb_txd;
@@ -1054,9 +1056,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7b01872c6f9f..a5dfca5a9a4b 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
static void
port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
{
- uint32_t frame_size;
struct rte_eth_dev_info dev_info;
struct rte_eth_txconf *txconf;
uint16_t nb_tx_queue, nb_rx_queue;
@@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- frame_size = MTU_TO_FRAMELEN(mtu_size);
- if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+ if (mtu_size > RTE_ETHER_MTU)
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+ local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index d10de30ddbae..e28035998e6c 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -110,7 +110,8 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+ .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
@@ -715,9 +716,9 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
- local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
- dev_info.max_rx_pktlen,
- local_port_conf.rxmode.max_rx_pkt_len);
+ local_port_conf.rxmode.mtu = RTE_MIN(
+ dev_info.max_mtu,
+ local_port_conf.rxmode.mtu);
/* get the lcore_id for this port */
while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 2a993a0ca460..62f6e42a9437 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
memcpy(&conf, &port_conf, sizeof(conf));
/* Set new MTU */
- if (new_mtu > RTE_ETHER_MAX_LEN)
+ if (new_mtu > RTE_ETHER_MTU)
conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
else
conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- /* mtu + length of header + length of FCS = max pkt length */
- conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
- KNI_ENET_FCS_SIZE;
+ conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
if (ret < 0) {
RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 9b3e324efb23..d9cf00c9dfc7 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
/* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
/*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
/* Configure the Ethernet device. */
retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c2ffbdd50636..c646f1748ca7 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -217,7 +217,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 19f32809aa9d..9040be5ed9b6 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 60545f305934..67e6356acff6 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
/* ethernet addresses of ports */
@@ -201,8 +202,8 @@ enum {
OPT_CONFIG_NUM = 256,
#define OPT_NONUMA "no-numa"
OPT_NONUMA_NUM,
-#define OPT_ENBJMO "enable-jumbo"
- OPT_ENBJMO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_RULE_IPV4 "rule_ipv4"
OPT_RULE_IPV4_NUM,
#define OPT_RULE_IPV6 "rule_ipv6"
@@ -1620,26 +1621,21 @@ print_usage(const char *prgname)
usage_acl_alg(alg, sizeof(alg));
printf("%s [EAL options] -- -p PORTMASK -P"
- "--"OPT_RULE_IPV4"=FILE"
- "--"OPT_RULE_IPV6"=FILE"
+ " --"OPT_RULE_IPV4"=FILE"
+ " --"OPT_RULE_IPV6"=FILE"
" [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
- " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
+ " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
- " --"OPT_CONFIG": (port,queue,lcore): "
- "rx queues configuration\n"
+ " -P: enable promiscuous mode\n"
+ " --"OPT_CONFIG" (port,queue,lcore): rx queues configuration\n"
" --"OPT_NONUMA": optional, disable numa awareness\n"
- " --"OPT_ENBJMO": enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
- " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
- "file. "
+ " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in decimal (64-9600)\n"
+ " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file. "
"Each rule occupy one line. "
"2 kinds of rules are supported. "
"One is ACL entry at while line leads with character '%c', "
- "another is route entry at while line leads with "
- "character '%c'.\n"
- " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
- "entries file.\n"
+ "another is route entry at while line leads with character '%c'.\n"
+ " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries file.\n"
" --"OPT_ALG": ACL classify method to use, one of: %s\n",
prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
}
@@ -1760,14 +1756,14 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
- {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
- {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
- {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
- {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
- {OPT_ALG, 1, NULL, OPT_ALG_NUM },
- {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
- {NULL, 0, 0, 0 }
+ {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
+ {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
+ {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
+ {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
+ {OPT_ALG, 1, NULL, OPT_ALG_NUM },
+ {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
+ {NULL, 0, 0, 0 }
};
argvopt = argv;
@@ -1806,43 +1802,11 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case OPT_ENBJMO_NUM:
- {
- struct option lenopts = {
- "max-pkt-len",
- required_argument,
- 0,
- 0
- };
-
- printf("jumbo frame is enabled\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, then use the
- * default value RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length "
- "to %u\n",
- (unsigned int)
- port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
case OPT_RULE_IPV4_NUM:
parm_config.rule_ipv4_name = optarg;
break;
@@ -2010,6 +1974,43 @@ set_default_dest_mac(void)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -2083,6 +2084,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..46568eba9c01 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.rx_adv_conf = {
@@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
@@ -259,7 +260,7 @@ print_usage(const char *prgname)
" [-P]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--per-port-pool]\n\n"
@@ -268,9 +269,7 @@ print_usage(const char *prgname)
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for "
"port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --per-port-pool: Use separate buffer pool per port\n\n",
prgname);
@@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask */
#define CMD_LINE_OPT_CONFIG "config"
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
enum {
/* Long options mapped to a short option */
@@ -416,7 +415,7 @@ enum {
CMD_LINE_OPT_CONFIG_NUM,
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
};
@@ -424,7 +423,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
{NULL, 0, 0, 0},
};
@@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
numa_on = 0;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr, "Invalid maximum "
- "packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
}
@@ -722,6 +701,43 @@ graph_main_loop(void *conf)
}
/* >8 End of main processing loop. */
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -807,6 +823,13 @@ main(int argc, char **argv)
nb_rx_queue, n_tx_queue);
rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 73a3ab5bc0eb..03c0b8bb15b8 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -251,7 +251,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
}
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
@@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
" [--config (port,queue,lcore)[,(port,queue,lcore]]"
" [--high-perf-cores CORELIST"
" [--perf-config (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
- " -P : enable promiscuous mode\n"
+ " -P: enable promiscuous mode\n"
" --config (port,queue,lcore): rx queues configuration\n"
" --high-perf-cores CORELIST: list of high performance cores\n"
" --perf-config: similar as config, cores specified as indices"
" for bins containing high or regular performance cores\n"
" --no-numa: optional, disable numa awareness\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --parse-ptype: parse packet type by software\n"
" --legacy: use legacy interrupt-based scaling\n"
" --empty-poll: enable empty poll detection"
@@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
#define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
#define CMD_LINE_OPT_TELEMETRY "telemetry"
#define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
/* Parse the argument given in the command line of the application */
static int
@@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
{"perf-config", 1, 0, 0},
{"high-perf-cores", 1, 0, 0},
{"no-numa", 0, 0, 0},
- {"enable-jumbo", 0, 0, 0},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
{CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
{CMD_LINE_OPT_LEGACY, 0, 0, 0},
@@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
}
if (!strncmp(lgopts[option_index].name,
- "enable-jumbo", 12)) {
- struct option lenopts =
- {"max-pkt-len", required_argument, \
- 0, 0};
-
- printf("jumbo frame is enabled \n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /**
- * if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (0 == getopt_long(argc, argvopt, "",
- &lenopts, &option_index)) {
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) ||
- (ret > MAX_JUMBO_PKT_LEN)){
- printf("invalid packet "
- "length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame "
- "max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ CMD_LINE_OPT_MAX_PKT_LEN,
+ sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ printf("Custom frame size is configured\n");
+ max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
/* Power library initialized in the main routine. 8< */
int
main(int argc, char **argv)
@@ -2622,6 +2634,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..66d76e87cb25 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
static uint8_t lkp_per_socket[NB_SOCKETS];
@@ -326,7 +327,7 @@ print_usage(const char *prgname)
" [--lookup]"
" --config (port,queue,lcore)[,(port,queue,lcore)]"
" [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]"
+ " [--max-pkt-len PKTLEN]"
" [--no-numa]"
" [--hash-entry-num]"
" [--ipv6]"
@@ -344,9 +345,7 @@ print_usage(const char *prgname)
" Accepted: em (Exact Match), lpm (Longest Prefix Match), fib (Forwarding Information Base)\n"
" --config (port,queue,lcore): Rx queue configuration\n"
" --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for port X\n"
- " --enable-jumbo: Enable jumbo frames\n"
- " --max-pkt-len: Under the premise of enabling jumbo,\n"
- " maximum packet length in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --no-numa: Disable numa awareness\n"
" --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
" --ipv6: Set if running ipv6 packets\n"
@@ -566,7 +565,7 @@ static const char short_options[] =
#define CMD_LINE_OPT_ETH_DEST "eth-dest"
#define CMD_LINE_OPT_NO_NUMA "no-numa"
#define CMD_LINE_OPT_IPV6 "ipv6"
-#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
+#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
#define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
#define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
@@ -584,7 +583,7 @@ enum {
CMD_LINE_OPT_ETH_DEST_NUM,
CMD_LINE_OPT_NO_NUMA_NUM,
CMD_LINE_OPT_IPV6_NUM,
- CMD_LINE_OPT_ENABLE_JUMBO_NUM,
+ CMD_LINE_OPT_MAX_PKT_LEN_NUM,
CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
CMD_LINE_OPT_PARSE_PTYPE_NUM,
CMD_LINE_OPT_PARSE_PER_PORT_POOL,
@@ -599,7 +598,7 @@ static const struct option lgopts[] = {
{CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
{CMD_LINE_OPT_NO_NUMA, 0, 0, CMD_LINE_OPT_NO_NUMA_NUM},
{CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
- {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0, CMD_LINE_OPT_ENABLE_JUMBO_NUM},
+ {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, CMD_LINE_OPT_MAX_PKT_LEN_NUM},
{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
@@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
ipv6 = 1;
break;
- case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
- const struct option lenopts = {
- "max-pkt-len", required_argument, 0, 0
- };
-
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /*
- * if no max-pkt-len set, use the default
- * value RTE_ETHER_MAX_LEN.
- */
- if (getopt_long(argc, argvopt, "",
- &lenopts, &option_index) == 0) {
- ret = parse_max_pkt_len(optarg);
- if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
- fprintf(stderr,
- "invalid maximum packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
+ case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
return 0;
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
static void
l3fwd_poll_resource_setup(void)
{
@@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2905199743a7..2db1b5fc154f 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
@@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
},
};
+static uint32_t max_pkt_len;
+
static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
@@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK -P"
" [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
" [--tx (lcore,thread)[,(lcore,thread]]"
- " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
+ " [--max-pkt-len PKTLEN]"
" [--parse-ptype]\n\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -P : enable promiscuous mode\n"
@@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
" --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
" --no-numa: optional, disable numa awareness\n"
" --ipv6: optional, specify it if running ipv6 packets\n"
- " --enable-jumbo: enable jumbo frame"
- " which max packet len is PKTLEN in decimal (64-9600)\n"
+ " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
" --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
" --no-lthreads: turn off lthread model\n"
" --parse-ptype: set to use software to analyze packet type\n\n",
@@ -2877,8 +2877,8 @@ enum {
OPT_NO_NUMA_NUM,
#define OPT_IPV6 "ipv6"
OPT_IPV6_NUM,
-#define OPT_ENABLE_JUMBO "enable-jumbo"
- OPT_ENABLE_JUMBO_NUM,
+#define OPT_MAX_PKT_LEN "max-pkt-len"
+ OPT_MAX_PKT_LEN_NUM,
#define OPT_HASH_ENTRY_NUM "hash-entry-num"
OPT_HASH_ENTRY_NUM_NUM,
#define OPT_NO_LTHREADS "no-lthreads"
@@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
{OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
{OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
{OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
- {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
+ {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
{OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
{OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
@@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
parse_ptype_on = 1;
break;
- case OPT_ENABLE_JUMBO_NUM:
- {
- struct option lenopts = {"max-pkt-len",
- required_argument, 0, 0};
-
- printf("jumbo frame is enabled - disabling simple TX path\n");
- port_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MULTI_SEGS;
-
- /* if no max-pkt-len set, use the default value
- * RTE_ETHER_MAX_LEN
- */
- if (getopt_long(argc, argvopt, "", &lenopts,
- &option_index) == 0) {
-
- ret = parse_max_pkt_len(optarg);
- if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
- printf("invalid packet length\n");
- print_usage(prgname);
- return -1;
- }
- port_conf.rxmode.max_rx_pkt_len = ret;
- }
- printf("set jumbo frame max packet length to %u\n",
- (unsigned int)port_conf.rxmode.max_rx_pkt_len);
+ case OPT_MAX_PKT_LEN_NUM:
+ max_pkt_len = parse_max_pkt_len(optarg);
break;
- }
+
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
case OPT_HASH_ENTRY_NUM_NUM:
ret = parse_hash_entry_number(optarg);
@@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
}
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
+static int
+config_port_max_pkt_len(struct rte_eth_conf *conf,
+ struct rte_eth_dev_info *dev_info)
+{
+ uint32_t overhead_len;
+
+ if (max_pkt_len == 0)
+ return 0;
+
+ if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
+ return -1;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ conf->rxmode.mtu = max_pkt_len - overhead_len;
+
+ if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+ conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
+ return 0;
+}
+
int
main(int argc, char **argv)
{
@@ -3577,6 +3589,12 @@ main(int argc, char **argv)
"Error during getting device (port %u) info: %s\n",
portid, strerror(-ret));
+ ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
+ if (ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Invalid max packet length: %u (port %u)\n",
+ max_pkt_len, portid);
+
if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
local_port_conf.txmode.offloads |=
DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
index f0b6e271a5f3..3dd33407ea41 100755
--- a/examples/performance-thread/l3fwd-thread/test.sh
+++ b/examples/performance-thread/l3fwd-thread/test.sh
@@ -11,7 +11,7 @@ case "$1" in
echo "1.1 1 L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -23,7 +23,7 @@ case "$1" in
echo "1.2 1 L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -34,7 +34,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=8)"
./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -45,7 +45,7 @@ case "$1" in
echo "1.3 1 L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -61,7 +61,7 @@ case "$1" in
echo "2.1 N L-core per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(1,0)" \
--stat-lcore 2 \
@@ -73,7 +73,7 @@ case "$1" in
echo "2.2 N L-core per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,1,1)" \
--tx="(2,0)(3,1)" \
--stat-lcore 4 \
@@ -84,7 +84,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=8)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
--tx="(4,0)(5,1)(6,2)(7,3)" \
--stat-lcore 8 \
@@ -95,7 +95,7 @@ case "$1" in
echo "2.3 N L-core per pcore (N=16)"
./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
--tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
--stat-lcore 16 \
@@ -111,7 +111,7 @@ case "$1" in
echo "3.1 N L-threads per pcore (N=2)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,0)" \
--tx="(0,0)" \
--stat-lcore 1
@@ -121,7 +121,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=4)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(1,0,0,1)" \
--tx="(0,0)(0,1)" \
--stat-lcore 1
@@ -131,7 +131,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=8)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
--tx="(0,0)(0,1)(0,2)(0,3)" \
--stat-lcore 1
@@ -141,7 +141,7 @@ case "$1" in
echo "3.2 N L-threads per pcore (N=16)"
./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
- --enable-jumbo --max-pkt-len 1500 \
+ --max-pkt-len 1500 \
--rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
--tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
--stat-lcore 1
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..4f20dfc4be06 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
.link_speeds = 0,
.rxmode = {
.mq_mode = ETH_MQ_RX_NONE,
- .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+ .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
.split_hdr_size = 0, /* Header split buffer size */
},
.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index d94eca0353d7..229a277032cb 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
uint8_t ptp_enabled_port_nb;
static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static const struct rte_ether_addr ether_multicast = {
.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
};
@@ -178,7 +172,7 @@ static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
struct rte_eth_dev_info dev_info;
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1;
const uint16_t tx_rings = 1;
int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..c32d2e12e633 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
static struct rte_eth_conf port_conf = {
.rxmode = {
.mq_mode = ETH_MQ_RX_RSS,
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
.offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
static struct rte_eth_conf port_conf = {
.rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
.split_hdr_size = 0,
},
.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index ab6fa7d56c5d..6845c396b8d9 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
static const char usage[] =
"%s EAL_ARGS -- [-t]\n";
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static struct {
uint64_t total_cycles;
uint64_t total_queue_cycles;
@@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae9bbee8d820..fd7207aee758 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,14 +17,6 @@
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32
-/* Configuration of ethernet ports. 8< */
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-/* >8 End of configuration of ethernet ports. */
-
/* basicfwd.c: Basic DPDK skeleton forwarding example. */
/*
@@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
uint16_t nb_rxd = RX_RING_SIZE;
uint16_t nb_txd = TX_RING_SIZE;
@@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index b24fd82a6e71..427b882831bf 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -44,6 +44,7 @@
#define BURST_RX_RETRIES 4 /* Number of retries on RX. */
#define JUMBO_FRAME_MAX_SIZE 0x2600
+#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
/* State of virtio device. */
#define DEVICE_MAC_LEARNING 0
@@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
if (ret) {
vmdq_conf_default.rxmode.offloads |=
DEV_RX_OFFLOAD_JUMBO_FRAME;
- vmdq_conf_default.rxmode.max_rx_pkt_len
- = JUMBO_FRAME_MAX_SIZE;
+ vmdq_conf_default.rxmode.mtu = MAX_MTU;
}
break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index e59fb7d3478b..e19d79a40802 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@
static uint32_t enabled_port_mask;
static volatile bool force_quit;
-/****************/
-static const struct rte_eth_conf port_conf_default = {
- .rxmode = {
- .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
- },
-};
-
static inline int
port_init(uint16_t port, struct rte_mempool *mbuf_pool)
{
- struct rte_eth_conf port_conf = port_conf_default;
+ struct rte_eth_conf port_conf;
const uint16_t rx_rings = 1, tx_rings = 1;
int retval;
uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
if (!rte_eth_dev_is_valid_port(port))
return -1;
+ memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
retval = rte_eth_dev_info_get(port, &dev_info);
if (retval != 0) {
printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 66f905c822e2..8d1ccf6f732c 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1315,6 +1315,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
return ret;
}
+static uint32_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+ uint32_t overhead_len;
+
+ if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+ overhead_len = max_rx_pktlen - max_mtu;
+ else
+ overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+ return overhead_len;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1322,7 +1335,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
- uint16_t overhead_len;
+ uint32_t max_rx_pktlen;
+ uint32_t overhead_len;
int diag;
int ret;
uint16_t old_mtu;
@@ -1372,11 +1386,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
/* Get the real Ethernet overhead length */
- if (dev_info.max_mtu != UINT16_MAX &&
- dev_info.max_rx_pktlen > dev_info.max_mtu)
- overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
- else
- overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
@@ -1445,49 +1456,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
/*
- * If jumbo frames are enabled, check that the maximum RX packet
- * length is supported by the configured device.
+ * Check that the maximum RX packet length is supported by the
+ * configured device.
*/
- if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
- port_id, dev_conf->rxmode.max_rx_pkt_len,
- (unsigned int)RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
- goto rollback;
- }
+ if (dev_conf->rxmode.mtu == 0)
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+ if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+ port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+ ret = -EINVAL;
+ goto rollback;
+ } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+ port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+ ret = -EINVAL;
+ goto rollback;
+ }
- /* Scale the MTU size to adapt max_rx_pkt_len */
- dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
- overhead_len;
- } else {
- uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
- if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
- pktlen > RTE_ETHER_MTU + overhead_len)
+ if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+ if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+ dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
/* Use default value */
- dev->data->dev_conf.rxmode.max_rx_pkt_len =
- RTE_ETHER_MTU + overhead_len;
+ dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
}
+ dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
/*
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
if (dev_conf->rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
goto rollback;
@@ -2146,13 +2153,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
* If LRO is enabled, check that the maximum aggregated packet
* size is supported by the configured device.
*/
+ /* Get the real Ethernet overhead length */
if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint32_t overhead_len;
+ uint32_t max_rx_pktlen;
+ int ret;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->mtu + overhead_len;
if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
- dev->data->dev_conf.rxmode.max_lro_pkt_size =
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
- int ret = eth_dev_check_lro_pkt_size(port_id,
+ dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+ ret = eth_dev_check_lro_pkt_size(port_id,
dev->data->dev_conf.rxmode.max_lro_pkt_size,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
+ max_rx_pktlen,
dev_info.max_lro_pkt_size);
if (ret != 0)
return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index b7db29405e03..e82019218a91 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
- uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
+ uint32_t mtu; /**< Requested MTU. */
/** Maximum allowed size of LRO aggregated packet. */
uint32_t max_lro_pkt_size;
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(nb_tx_q);
rte_trace_point_emit_u32(dev_conf->link_speeds);
rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
- rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+ rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
rte_trace_point_emit_u64(dev_conf->txmode.offloads);
--
2.31.1
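For readers following the example changes above: the per-application
config_port_max_pkt_len()/eth_dev_get_overhead_len() helpers added in this
patch all implement the same conversion from a requested maximum frame
length to the new 'rxmode.mtu' field. A condensed sketch of that pattern is
below; the function and parameter names here are illustrative only (the
real examples use the value parsed from the new --max-pkt-len option) and
the sketch is not part of the patch itself.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/*
 * Illustrative sketch only: condensed form of the config_port_max_pkt_len()
 * helpers added to the examples above. 'wanted_max_pkt_len' stands in for
 * the value parsed from the new --max-pkt-len option.
 */
static int
sketch_config_port_mtu(struct rte_eth_conf *conf,
		const struct rte_eth_dev_info *dev_info,
		uint32_t wanted_max_pkt_len)
{
	uint32_t overhead_len;

	if (wanted_max_pkt_len == 0)
		return 0; /* keep the default RTE_ETHER_MTU */

	/* Per-device Ethernet overhead: header + CRC, plus VLAN/QinQ if any */
	if (dev_info->max_mtu != UINT16_MAX &&
			dev_info->max_rx_pktlen > dev_info->max_mtu)
		overhead_len = dev_info->max_rx_pktlen - dev_info->max_mtu;
	else
		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	if (wanted_max_pkt_len < RTE_ETHER_MIN_LEN ||
			wanted_max_pkt_len > dev_info->max_rx_pktlen)
		return -1;

	/* rte_eth_dev_configure() now takes an MTU, not a frame length */
	conf->rxmode.mtu = wanted_max_pkt_len - overhead_len;

	if (conf->rxmode.mtu > RTE_ETHER_MTU) {
		conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
		conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
	}

	return 0;
}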
* [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set Ferruh Yigit
` (4 subsequent siblings)
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Qi Zhang, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Heinrich Kuhn, Harman Kalra,
Jerin Jacob, Rasesh Mody, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Thomas Monjalon
Cc: Ferruh Yigit, dev, Konstantin Ananyev, Huisong Li
Setting an MTU bigger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload to get it.
When the jumbo frame offload is not enabled by the application but an
MTU bigger than RTE_ETHER_MTU is requested, there are two options:
either fail or enable the jumbo frame offload implicitly.
Many drivers choose to enable the jumbo frame offload implicitly, since
setting a big MTU value already implies it, and this increases
usability.
This patch moves that logic from the drivers to the library, both to
reduce the duplicated code in the drivers and to make the behaviour
more visible.
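Concretely, the per-driver blocks removed below all reduce to the same flag
update; after this patch the library performs it once, roughly as in the
sketch below. The helper name is illustrative only, the real change is in
rte_eth_dev_set_mtu() in the lib/ethdev hunk at the end of this mail.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/*
 * Illustrative helper, not part of the patch: the offload-flag update that
 * rte_eth_dev_set_mtu() now performs once the driver's mtu_set callback
 * has succeeded, instead of every driver doing it in its own callback.
 */
static void
sketch_sync_jumbo_offload(uint64_t *rx_offloads, uint16_t mtu)
{
	if (mtu > RTE_ETHER_MTU)
		*rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
	else
		*rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
}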
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 9 ++-------
drivers/net/bnxt/bnxt_ethdev.c | 9 ++-------
drivers/net/cnxk/cnxk_ethdev_ops.c | 5 -----
drivers/net/cxgbe/cxgbe_ethdev.c | 8 --------
drivers/net/dpaa/dpaa_ethdev.c | 7 -------
drivers/net/dpaa2/dpaa2_ethdev.c | 7 -------
drivers/net/e1000/em_ethdev.c | 9 ++-------
drivers/net/e1000/igb_ethdev.c | 9 ++-------
drivers/net/enetc/enetc_ethdev.c | 7 -------
drivers/net/hinic/hinic_pmd_ethdev.c | 7 -------
drivers/net/hns3/hns3_ethdev.c | 8 --------
drivers/net/hns3/hns3_ethdev_vf.c | 6 ------
drivers/net/i40e/i40e_ethdev.c | 5 -----
drivers/net/iavf/iavf_ethdev.c | 7 -------
drivers/net/ice/ice_ethdev.c | 5 -----
drivers/net/igc/igc_ethdev.c | 9 ++-------
drivers/net/ipn3ke/ipn3ke_representor.c | 5 -----
drivers/net/ixgbe/ixgbe_ethdev.c | 7 ++-----
drivers/net/liquidio/lio_ethdev.c | 7 -------
drivers/net/nfp/nfp_common.c | 6 ------
drivers/net/octeontx/octeontx_ethdev.c | 5 -----
drivers/net/octeontx2/otx2_ethdev_ops.c | 5 -----
drivers/net/qede/qede_ethdev.c | 4 ----
drivers/net/sfc/sfc_ethdev.c | 9 ---------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 6 ------
lib/ethdev/rte_ethdev.c | 18 +++++++++++++++++-
27 files changed, 29 insertions(+), 166 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76cd892eec7b..2dc5fa245bd8 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
val = 1;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
val = 0;
- }
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 890197d34037..6a66ed824a47 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3056,15 +3056,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
return -EINVAL;
}
- if (new_mtu > RTE_ETHER_MTU) {
+ if (new_mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* Is there a change in mtu setting? */
if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
plt_err("Failed to max Rx frame length, rc=%d", rc);
goto exit;
}
-
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 458111ae5b16..cdecf6b512ef 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
return -EINVAL;
- /* set to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c244c6f5a422..f24ec55bee8b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
fman_if_set_maxfrm(dev->process_private, frame_size);
return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index b2a0c2dd40c5..02e1647d1f42 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1465,13 +1465,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index c9692bd7b7bc..de4267bf5995 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1816,15 +1816,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 9b75b5d08b3a..72bdd1087cdf 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4392,15 +4392,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rctl = E1000_READ_REG(hw, E1000_RCTL);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~E1000_RCTL_LPE;
- }
E1000_WRITE_REG(hw, E1000_RCTL, rctl);
E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 16c83914e8ce..52c89aa03840 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -681,13 +681,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads &=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index aef8adc2e1e0..5d6700c18303 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1551,13 +1551,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nic_dev->mtu_size = mtu;
return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index e1fcba9e9482..8cf6a98c5690 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2566,7 +2566,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct hns3_adapter *hns = dev->data->dev_private;
uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
struct hns3_hw *hw = &hns->hw;
- bool is_jumbo_frame;
int ret;
if (dev->data->dev_started) {
@@ -2576,7 +2575,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rte_spinlock_lock(&hw->lock);
- is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
/*
@@ -2591,12 +2589,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return ret;
}
- if (is_jumbo_frame)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index b10fa2d5ad8a..7e016917769b 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -908,12 +908,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
rte_spinlock_unlock(&hw->lock);
return ret;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
rte_spinlock_unlock(&hw->lock);
return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 57abc2cf747d..208e60ed8c62 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11437,11 +11437,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5fc663f6bd46..1df4cf17ab92 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1473,13 +1473,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 878b3b1410c9..4929fc7d3a1c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3992,11 +3992,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2b1f2f5a39d9..c36f0c879ef9 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1591,15 +1591,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
}
rctl = IGC_READ_REG(hw, IGC_RCTL);
-
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
rctl &= ~IGC_RCTL_LPE;
- }
IGC_WRITE_REG(hw, IGC_RCTL, rctl);
IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 0d1aaa6449b9..6bf139c85dea 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2791,11 +2791,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (rpst->i40e_pf_eth) {
ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 29456ab59502..4fbc70b4ca74 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5189,13 +5189,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
/* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU) {
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (mtu > RTE_ETHER_MTU)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 3fac28dcfcf9..5e3b2aa7a316 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -1;
}
- if (mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return 0;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 928b4983a07a..d7bd5883b107 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -962,12 +962,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EBUSY;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
/* writing to configuration space */
nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 67c7d8929eb2..c5dbcc45d86b 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 3c591c8fbaa0..fa6d4030b827 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -59,11 +59,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (rc)
return rc;
- if (mtu > RTE_ETHER_MTU)
- dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a1cf913dc8ed..7b12794405a1 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
fp->rxq->rx_buf_size = rc;
}
}
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
if (!dev->data->dev_started && restart) {
qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index d3b12675e5bf..de0fac899f77 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1140,15 +1140,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
}
}
- /*
- * The driver does not use it, but other PMDs update jumbo frame
- * flag when MTU is set.
- */
- if (mtu > RTE_ETHER_MTU) {
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
sfc_adapter_unlock(sa);
sfc_log_init(sa, "done");
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index e1b9e276af90..1974957b3930 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
struct nicvf *nic = nicvf_pmd_priv(dev);
uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD;
size_t i;
- struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
PMD_INIT_FUNC_TRACE();
@@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
return -EINVAL;
- if (mtu > RTE_ETHER_MTU)
- rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (nicvf_mbox_update_hw_max_frs(nic, mtu))
return -EINVAL;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 45afe872bde0..cc1d4a623818 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,12 +3482,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->mode)
wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
TXGBE_FRAME_SIZE_MAX);
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 8d1ccf6f732c..850208b640dd 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3647,6 +3647,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
+ int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3665,12 +3666,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+
+ if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ is_jumbo_frame_capable = 1;
}
+ if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
+ return -EINVAL;
+
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (!ret)
+ if (ret == 0) {
dev->data->mtu = mtu;
+ /* switch to jumbo mode if needed */
+ if (mtu > RTE_ETHER_MTU)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ }
+
return eth_err(port_id, ret);
}
--
2.31.1
* [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
` (3 subsequent siblings)
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Gagandeep Singh, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
Harman Kalra, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
Rasesh Mody, Devendra Singh Rawat, Maciej Czekaj, Jiawen Wu,
Jian Wang, Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev, Konstantin Ananyev
Move the requested MTU value check to the API to avoid duplicating the
same range check in each driver.
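As a minimal sketch (not part of the patch) of the check that now lives in
'rte_eth_dev_set_mtu()': the MTU must fit the device limits reported in
'rte_eth_dev_info', and the resulting frame must not exceed 'max_rx_pktlen'.
The overhead derivation mirrors what the library helper does; the function
name and structure here are illustrative only.

```c
#include <errno.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Illustrative sketch of the consolidated MTU range check. */
static int
mtu_range_check_sketch(uint16_t mtu, const struct rte_eth_dev_info *di)
{
	uint32_t overhead_len, frame_size;

	if (mtu < di->min_mtu || mtu > di->max_mtu)
		return -EINVAL;

	/* Ethernet overhead the device accounts for (header, CRC, VLAN tags),
	 * derived from the reported limits when both are meaningful. */
	if (di->max_mtu != UINT16_MAX && di->max_rx_pktlen > di->max_mtu)
		overhead_len = di->max_rx_pktlen - di->max_mtu;
	else
		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	frame_size = mtu + overhead_len;
	if (mtu < RTE_ETHER_MIN_MTU || frame_size > di->max_rx_pktlen)
		return -EINVAL;

	return 0;
}
```

With this done once in the library, the per-driver checks removed below become
redundant, which is why most hunks in this patch are pure deletions.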
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
drivers/net/axgbe/axgbe_ethdev.c | 15 ++++-----------
drivers/net/bnxt/bnxt_ethdev.c | 2 +-
drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------
drivers/net/dpaa/dpaa_ethdev.c | 2 --
drivers/net/dpaa2/dpaa2_ethdev.c | 4 ----
drivers/net/e1000/em_ethdev.c | 10 ----------
drivers/net/e1000/igb_ethdev.c | 11 -----------
drivers/net/enetc/enetc_ethdev.c | 4 ----
drivers/net/hinic/hinic_pmd_ethdev.c | 8 +-------
drivers/net/i40e/i40e_ethdev.c | 17 ++++-------------
drivers/net/iavf/iavf_ethdev.c | 10 ++--------
drivers/net/ice/ice_ethdev.c | 14 +++-----------
drivers/net/igc/igc_ethdev.c | 5 -----
drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------
drivers/net/liquidio/lio_ethdev.c | 10 ----------
drivers/net/nfp/nfp_common.c | 4 ----
drivers/net/octeontx/octeontx_ethdev.c | 4 ----
drivers/net/octeontx2/otx2_ethdev_ops.c | 4 ----
drivers/net/qede/qede_ethdev.c | 12 ------------
drivers/net/thunderx/nicvf_ethdev.c | 6 ------
drivers/net/txgbe/txgbe_ethdev.c | 10 ----------
lib/ethdev/rte_ethdev.c | 9 +++++++++
22 files changed, 25 insertions(+), 155 deletions(-)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 2dc5fa245bd8..d302329525d0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct axgbe_port *pdata = dev->data->dev_private;
- uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- unsigned int val = 0;
- axgbe_dev_info_get(dev, &dev_info);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ unsigned int val;
+
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
dev->data->port_id);
return -EBUSY;
}
- if (mtu > RTE_ETHER_MTU)
- val = 1;
- else
- val = 0;
+ val = mtu > RTE_ETHER_MTU ? 1 : 0;
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
+
return 0;
}
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6a66ed824a47..f3cd756447cf 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3029,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
struct bnxt *bp = eth_dev->data->dev_private;
uint32_t new_pkt_size;
- uint32_t rc = 0;
+ uint32_t rc;
uint32_t i;
rc = is_bnxt_in_error(bp);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index cdecf6b512ef..32a01009107d 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct port_info *pi = eth_dev->data->dev_private;
struct adapter *adapter = pi->adapter;
- struct rte_eth_dev_info dev_info;
- int err;
uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- err = cxgbe_dev_info_get(eth_dev, &dev_info);
- if (err != 0)
- return err;
-
- /* Must accommodate at least RTE_ETHER_MIN_MTU */
- if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
- return -EINVAL;
-
- err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
+ return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
-1, -1, true);
- return err;
}
/*
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f24ec55bee8b..c117115066b0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN)
- return -EINVAL;
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 02e1647d1f42..3d1df34aa852 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1461,10 +1461,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
return -EINVAL;
}
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
- return -EINVAL;
-
/* Set the Max Rx frame length as 'mtu' +
* Maximum Ethernet header length
*/
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index de4267bf5995..e8d55ddf3349 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1786,22 +1786,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev,
static int
eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev_info dev_info;
struct e1000_hw *hw;
uint32_t frame_size;
uint32_t rctl;
- int ret;
-
- ret = eth_em_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
frame_size = mtu + E1000_ETH_OVERHEAD;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 72bdd1087cdf..dbe811a1ad2f 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4359,9 +4359,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
uint32_t rctl;
struct e1000_hw *hw;
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + E1000_ETH_OVERHEAD;
- int ret;
hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -4370,15 +4368,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (hw->mac.type == e1000_82571)
return -ENOTSUP;
#endif
- ret = eth_igb_infos_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 52c89aa03840..ca83fbd0a3d8 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -666,10 +666,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
struct enetc_hw *enetc_hw = &hw->hw;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
- /* check that mtu is within the allowed range */
- if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE)
- return -EINVAL;
-
/*
* Refuse mtu that requires the support of scattered packets
* when this feature has not been enabled before.
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 5d6700c18303..9a974dff580e 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1534,17 +1534,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
- int ret = 0;
+ int ret;
PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu));
- if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) {
- PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
- mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE);
- return -EINVAL;
- }
-
ret = hinic_set_port_mtu(nic_dev->hwdev, mtu);
if (ret) {
PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 208e60ed8c62..cf3f20e79f1a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11419,25 +11419,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev,
}
static int
-i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + I40E_ETH_OVERHEAD;
- int ret = 0;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
- return ret;
+ return 0;
}
/* Restore ethertype filter */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 1df4cf17ab92..654337187842 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1459,21 +1459,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
}
static int
-iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD;
- int ret = 0;
-
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port must be stopped before configuration");
return -EBUSY;
}
- return ret;
+ return 0;
}
static int
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4929fc7d3a1c..3a1bcc4e3eb7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3974,21 +3974,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev)
}
static int
-ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct rte_eth_dev_data *dev_data = pf->dev_data;
- uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX)
- return -EINVAL;
-
/* mtu setting is forbidden if port is start */
- if (dev_data->dev_started) {
+ if (dev->data->dev_started != 0) {
PMD_DRV_LOG(ERR,
"port %d must be stopped before configuration",
- dev_data->port_id);
+ dev->data->port_id);
return -EBUSY;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index c36f0c879ef9..2a1ed90b641b 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1575,11 +1575,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN)
frame_size += VLAN_TAG_SIZE;
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > MAX_RX_JUMBO_FRAME_SIZE)
- return -EINVAL;
-
/*
* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 6bf139c85dea..0438c3f08c24 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2768,12 +2768,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
int ret = 0;
struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev);
struct rte_eth_dev_data *dev_data = ethdev->data;
- uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD;
-
- /* check if mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU ||
- frame_size > IPN3KE_MAC_FRAME_SIZE_MAX)
- return -EINVAL;
/* mtu setting is forbidden if port is start */
/* make sure NIC port is stopped */
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 5e3b2aa7a316..0fc3f0ab66a9 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -434,7 +434,6 @@ static int
lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
{
struct lio_device *lio_dev = LIO_DEV(eth_dev);
- uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
struct lio_dev_ctrl_cmd ctrl_cmd;
struct lio_ctrl_pkt ctrl_pkt;
@@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
return -EINVAL;
}
- /* check if VF MTU is within allowed range.
- * New value should not exceed PF MTU.
- */
- if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) {
- lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n",
- RTE_ETHER_MIN_MTU, pf_mtu);
- return -EINVAL;
- }
-
/* flush added to prevent cmd failure
* incase the queue is full
*/
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index d7bd5883b107..dc906872192f 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -951,10 +951,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu)
- return -EINVAL;
-
/* mtu setting is forbidden if port is started */
if (dev->data->dev_started) {
PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index c5dbcc45d86b..f578123ed00b 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
struct rte_eth_dev_data *data = eth_dev->data;
int rc = 0;
- /* Check if MTU is within the allowed range */
- if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index fa6d4030b827..22a8af5cba45 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -20,10 +20,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
if (dev->configured && otx2_ethdev_is_ptp_en(dev))
frame_size += NIX_TIMESYNC_RX_OFFSET;
- /* Check if MTU is within the allowed range */
- if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS)
- return -EINVAL;
-
buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/* Refuse MTU that requires the support of scattered packets
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 7b12794405a1..663cb1460f4f 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
- struct rte_eth_dev_info dev_info = {0};
struct qede_fastpath *fp;
uint32_t frame_size;
uint16_t bufsz;
@@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
int i, rc;
PMD_INIT_FUNC_TRACE(edev);
- rc = qede_dev_info_get(dev, &dev_info);
- if (rc != 0) {
- DP_ERR(edev, "Error during getting ethernet device info\n");
- return rc;
- }
frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
- DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
- mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
- QEDE_ETH_OVERHEAD);
- return -EINVAL;
- }
if (!dev->data->scattered_rx &&
frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n",
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 1974957b3930..328d6d56d921 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
PMD_INIT_FUNC_TRACE();
- if (frame_size > NIC_HW_MAX_FRS)
- return -EINVAL;
-
- if (frame_size < NIC_HW_MIN_FRS)
- return -EINVAL;
-
buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
/*
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index cc1d4a623818..7b46ffb68635 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3459,18 +3459,8 @@ static int
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
- struct rte_eth_dev_info dev_info;
uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
struct rte_eth_dev_data *dev_data = dev->data;
- int ret;
-
- ret = txgbe_dev_info_get(dev, &dev_info);
- if (ret != 0)
- return ret;
-
- /* check that mtu is within the allowed range */
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
/* If device is started, refuse mtu that requires the support of
* scattered packets when this feature has not been enabled before.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 850208b640dd..4e32b6d964f6 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -3660,6 +3660,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
+ uint16_t overhead_len;
+ uint32_t frame_size;
+
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
@@ -3667,6 +3670,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
return -EINVAL;
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ frame_size = mtu + overhead_len;
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
+ return -EINVAL;
+
if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
is_jumbo_frame_capable = 1;
}
--
2.31.1
* [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-21 0:43 ` Thomas Monjalon
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks Ferruh Yigit
` (2 subsequent siblings)
5 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Andrew Rybchenko,
Maciej Czekaj, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Yong Wang, Konstantin Ananyev, Radu Nicolau, Akhil Goyal,
David Hunt, John McNamara, Thomas Monjalon
Cc: Ferruh Yigit, dev, Huisong Li
Removing the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
Instead of drivers announcing this capability, the application can deduce
the capability by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.
And instead of the application setting this flag explicitly to enable jumbo
frames, the driver can deduce it by comparing the requested 'mtu' to
'RTE_ETHER_MTU'.
Removing this additional configuration step simplifies the API.
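A rough application-side sketch of the new flow (only the ethdev symbols are
from the API; 'port_id', 'wanted_mtu' and the function name are illustrative):
jumbo support is inferred from the reported limits and the configured MTU
rather than from an offload flag.

```c
#include <errno.h>
#include <rte_ethdev.h>

/* Sketch, not from the patch: request a jumbo MTU without any offload flag. */
static int
configure_jumbo_sketch(uint16_t port_id, uint16_t wanted_mtu)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* Capability check replaces DEV_RX_OFFLOAD_JUMBO_FRAME. */
	if (wanted_mtu > dev_info.max_mtu)
		return -EINVAL;

	/* The MTU field alone carries the request; inside a PMD, jumbo mode
	 * is now simply: dev->data->mtu > RTE_ETHER_MTU. */
	port_conf.rxmode.mtu = wanted_mtu;
	return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}
```

This matches the driver hunks below, where each PMD keys its jumbo handling
off 'dev->data->mtu' instead of the removed offload bit.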
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
---
app/test-eventdev/test_pipeline_common.c | 2 -
app/test-pmd/cmdline.c | 2 +-
app/test-pmd/config.c | 25 +---------
app/test-pmd/testpmd.c | 48 +------------------
app/test-pmd/testpmd.h | 2 +-
doc/guides/howto/debug_troubleshoot.rst | 2 -
doc/guides/nics/bnxt.rst | 1 -
doc/guides/nics/features.rst | 3 +-
drivers/net/atlantic/atl_ethdev.c | 1 -
drivers/net/axgbe/axgbe_ethdev.c | 1 -
drivers/net/bnx2x/bnx2x_ethdev.c | 1 -
drivers/net/bnxt/bnxt.h | 1 -
drivers/net/bnxt/bnxt_ethdev.c | 10 +---
drivers/net/bonding/rte_eth_bond_pmd.c | 8 ----
drivers/net/cnxk/cnxk_ethdev.h | 6 +--
drivers/net/cnxk/cnxk_ethdev_ops.c | 1 -
drivers/net/cxgbe/cxgbe.h | 1 -
drivers/net/cxgbe/cxgbe_ethdev.c | 8 ----
drivers/net/cxgbe/sge.c | 5 +-
drivers/net/dpaa/dpaa_ethdev.c | 2 -
drivers/net/dpaa2/dpaa2_ethdev.c | 2 -
drivers/net/e1000/e1000_ethdev.h | 4 +-
drivers/net/e1000/em_ethdev.c | 4 +-
drivers/net/e1000/em_rxtx.c | 19 +++-----
drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 1 -
drivers/net/enetc/enetc_ethdev.c | 3 +-
drivers/net/enic/enic_res.c | 1 -
drivers/net/failsafe/failsafe_ops.c | 2 -
drivers/net/fm10k/fm10k_ethdev.c | 1 -
drivers/net/hinic/hinic_pmd_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev.c | 1 -
drivers/net/hns3/hns3_ethdev_vf.c | 1 -
drivers/net/i40e/i40e_ethdev.c | 1 -
drivers/net/i40e/i40e_rxtx.c | 2 +-
drivers/net/iavf/iavf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_ethdev.c | 3 +-
drivers/net/ice/ice_dcf_vf_representor.c | 1 -
drivers/net/ice/ice_ethdev.c | 1 -
drivers/net/ice/ice_rxtx.c | 3 +-
drivers/net/igc/igc_ethdev.h | 1 -
drivers/net/igc/igc_txrx.c | 2 +-
drivers/net/ionic/ionic_ethdev.c | 1 -
drivers/net/ipn3ke/ipn3ke_representor.c | 3 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +-
drivers/net/ixgbe/ixgbe_pf.c | 9 +---
drivers/net/ixgbe/ixgbe_rxtx.c | 3 +-
drivers/net/mlx4/mlx4_rxq.c | 1 -
drivers/net/mlx5/mlx5_rxq.c | 1 -
drivers/net/mvneta/mvneta_ethdev.h | 3 +-
drivers/net/mvpp2/mrvl_ethdev.c | 1 -
drivers/net/nfp/nfp_common.c | 6 +--
drivers/net/octeontx/octeontx_ethdev.h | 1 -
drivers/net/octeontx2/otx2_ethdev.h | 1 -
drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 ---
drivers/net/qede/qede_ethdev.c | 1 -
drivers/net/sfc/sfc_rx.c | 2 -
drivers/net/thunderx/nicvf_ethdev.h | 1 -
drivers/net/txgbe/txgbe_rxtx.c | 1 -
drivers/net/virtio/virtio_ethdev.c | 1 -
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 -
examples/ip_fragmentation/main.c | 3 +-
examples/ip_reassembly/main.c | 3 +-
examples/ipsec-secgw/ipsec-secgw.c | 2 -
examples/ipv4_multicast/main.c | 1 -
examples/kni/main.c | 5 --
examples/l3fwd-acl/main.c | 4 +-
examples/l3fwd-graph/main.c | 4 +-
examples/l3fwd-power/main.c | 4 +-
examples/l3fwd/main.c | 4 +-
.../performance-thread/l3fwd-thread/main.c | 4 +-
examples/vhost/main.c | 5 +-
lib/ethdev/rte_ethdev.c | 26 +---------
lib/ethdev/rte_ethdev.h | 1 -
75 files changed, 48 insertions(+), 259 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 5fcea74b4d43..2775e72c580d 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN;
- if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
- port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
t->internal_port = 1;
RTE_ETH_FOREACH_DEV(i) {
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index f777cc453836..88354ccab9d4 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1911,7 +1911,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
return;
}
- update_jumbo_frame_offload(port_id, res->value);
+ update_mtu_from_frame_size(port_id, res->value);
}
init_port_config();
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 333d3dd62259..bdcd826490d1 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1206,40 +1206,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v)
void
port_mtu_set(portid_t port_id, uint16_t mtu)
{
+ struct rte_port *port = &ports[port_id];
int diag;
- struct rte_port *rte_port = &ports[port_id];
- struct rte_eth_dev_info dev_info;
- int ret;
if (port_id_is_invalid(port_id, ENABLED_WARN))
return;
- ret = eth_dev_info_get_print_err(port_id, &dev_info);
- if (ret != 0)
- return;
-
- if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) {
- fprintf(stderr,
- "Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n",
- mtu, dev_info.min_mtu, dev_info.max_mtu);
- return;
- }
diag = rte_eth_dev_set_mtu(port_id, mtu);
if (diag != 0) {
fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
return;
}
- rte_port->dev_conf.rxmode.mtu = mtu;
-
- if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
- if (mtu > RTE_ETHER_MTU)
- rte_port->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- rte_port->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
+ port->dev_conf.rxmode.mtu = mtu;
}
/* Generic flow management functions. */
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 50d0ec4fe3db..de7a8c295527 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1572,12 +1572,6 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
if (ret != 0)
rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
- ret = update_jumbo_frame_offload(pid, 0);
- if (ret != 0)
- fprintf(stderr,
- "Updating jumbo frame offload failed for port %u\n",
- pid);
-
if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
port->dev_conf.txmode.offloads &=
~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
@@ -3691,24 +3685,18 @@ rxtx_port_config(struct rte_port *port)
}
/*
- * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
- * MTU is also aligned.
+ * Helper function to set MTU from frame size
*
* port->dev_info should be set before calling this function.
*
- * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
- * ETH_OVERHEAD". This is useful to update flags but not MTU value.
- *
* return 0 on success, negative on error
*/
int
-update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
+update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen)
{
struct rte_port *port = &ports[portid];
uint32_t eth_overhead;
- uint64_t rx_offloads;
uint16_t mtu, new_mtu;
- bool on;
eth_overhead = get_eth_overhead(&port->dev_info);
@@ -3717,40 +3705,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
return -1;
}
- if (max_rx_pktlen == 0)
- max_rx_pktlen = mtu + eth_overhead;
-
- rx_offloads = port->dev_conf.rxmode.offloads;
new_mtu = max_rx_pktlen - eth_overhead;
- if (new_mtu <= RTE_ETHER_MTU) {
- rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = false;
- } else {
- if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- fprintf(stderr,
- "Frame size (%u) is not supported by port %u\n",
- max_rx_pktlen, portid);
- return -1;
- }
- rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- on = true;
- }
-
- if (rx_offloads != port->dev_conf.rxmode.offloads) {
- uint16_t qid;
-
- port->dev_conf.rxmode.offloads = rx_offloads;
-
- /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
- for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
- if (on)
- port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
- }
-
if (mtu == new_mtu)
return 0;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 42a597596fdd..dd8f27a296b6 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1045,7 +1045,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
__rte_unused void *user_param);
void add_tx_dynf_callback(portid_t portid);
void remove_tx_dynf_callback(portid_t portid);
-int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
+int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen);
/*
* Work-around of a compilation error with ICC on invocations of the
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..df69fa8bcc24 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`.
* Identify if port Speed and Duplex is matching to desired values with
``rte_eth_link_get``.
- * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
-
* Check promiscuous mode if the drops do not occur for unique MAC address
with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index f2f5eff48dd4..aa6032889a55 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -885,7 +885,6 @@ processing. This improved performance is derived from a number of optimizations:
DEV_RX_OFFLOAD_VLAN_STRIP
DEV_RX_OFFLOAD_KEEP_CRC
- DEV_RX_OFFLOAD_JUMBO_FRAME
DEV_RX_OFFLOAD_IPV4_CKSUM
DEV_RX_OFFLOAD_UDP_CKSUM
DEV_RX_OFFLOAD_TCP_CKSUM
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 79bce2784195..8dd421ca013b 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -165,8 +165,7 @@ Jumbo frame
Supports Rx jumbo frames.
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
- ``dev_conf.rxmode.mtu``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f654c071566..5a198f53fce7 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = {
| DEV_RX_OFFLOAD_IPV4_CKSUM \
| DEV_RX_OFFLOAD_UDP_CKSUM \
| DEV_RX_OFFLOAD_TCP_CKSUM \
- | DEV_RX_OFFLOAD_JUMBO_FRAME \
| DEV_RX_OFFLOAD_MACSEC_STRIP \
| DEV_RX_OFFLOAD_VLAN_FILTER)
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index d302329525d0..0250256830ac 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index aff53fedb980..567ea2382864 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN;
dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS;
dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
- dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 5121d05da65f..6743cf92b0e6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -595,7 +595,6 @@ struct bnxt_rep_info {
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_VLAN_EXTEND | \
DEV_RX_OFFLOAD_TCP_LRO | \
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f3cd756447cf..f385723a9f65 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -736,15 +736,10 @@ static int bnxt_start_nic(struct bnxt *bp)
unsigned int i, j;
int rc;
- if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
- bp->eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (bp->eth_dev->data->mtu > RTE_ETHER_MTU)
bp->flags |= BNXT_FLAG_JUMBO;
- } else {
- bp->eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
bp->flags &= ~BNXT_FLAG_JUMBO;
- }
/* THOR does not support ring groups.
* But we will use the array to save RSS context IDs.
@@ -1254,7 +1249,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
if (eth_dev->data->dev_conf.rxmode.offloads &
~(DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 6d8b3c245a84..8d038ba6b6c4 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1724,14 +1724,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
slave_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME)
- slave_eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- slave_eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index ff21b977b70d..2304af6ffa8b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -78,9 +78,9 @@
#define CNXK_NIX_RX_OFFLOAD_CAPA \
(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \
- DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)
+ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \
+ DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_SECURITY)
#define RSS_IPV4_ENABLE \
(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 349896f6a1bf..d0924df76152 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_SECURITY, " Security,"},
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..37625c5bfb69 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,7 +51,6 @@
DEV_RX_OFFLOAD_IPV4_CKSUM | \
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 32a01009107d..f77b2976002c 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -660,14 +660,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
rxq->rspq.size = temp_nb_desc;
rxq->fl.size = temp_nb_desc;
- /* Set to jumbo mode if necessary */
- if (eth_dev->data->mtu > RTE_ETHER_MTU)
- eth_dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- eth_dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
&rxq->fl, NULL,
is_pf4(adapter) ?
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 830f5192474d..21b8fe61c9a7 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_mbuf *buf_bulk[n];
int ret, i;
struct rte_pktmbuf_pool_private *mbp_priv;
- u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
- DEV_RX_OFFLOAD_JUMBO_FRAME;
/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
- if (jumbo_en &&
- ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
+ if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)
buf_size_idx = RX_LARGE_MTU_BUF;
ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c117115066b0..c79cdb8d8ad7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,7 +54,6 @@
/* Supported Rx offloads */
static uint64_t dev_rx_offloads_sup =
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER;
/* Rx offloads which cannot be disabled */
@@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
uint64_t flags;
const char *output;
} rx_offload_map[] = {
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 3d1df34aa852..a0270e78520e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup =
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TIMESTAMP;
/* Rx offloads which cannot be disabled */
@@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
- {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 050852be79ce..93bee734ae5d 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -465,8 +465,8 @@ void eth_em_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
void em_dev_clear_queues(struct rte_eth_dev *dev);
void em_dev_free_queues(struct rte_eth_dev *dev);
-uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev);
-uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev);
+uint64_t em_get_rx_port_offloads_capa(void);
+uint64_t em_get_rx_queue_offloads_capa(void);
int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index e8d55ddf3349..73152dec6ed1 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1081,8 +1081,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = 1;
dev_info->max_tx_queues = 1;
- dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev);
- dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) |
+ dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa();
+ dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() |
dev_info->rx_queue_offload_capa;
dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev);
dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) |
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 506b4159a2ec..344149c19147 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -1364,12 +1364,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq)
}
uint64_t
-em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_port_offloads_capa(void)
{
uint64_t rx_offload_capa;
- uint32_t max_rx_pktlen;
-
- max_rx_pktlen = em_get_max_pktlen(dev);
rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -1379,14 +1376,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER;
- if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
- rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
return rx_offload_capa;
}
uint64_t
-em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
+em_get_rx_queue_offloads_capa(void)
{
uint64_t rx_queue_offload_capa;
@@ -1395,7 +1390,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
* capability be same to per port queue offloading capability
* for better convenience.
*/
- rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev);
+ rx_queue_offload_capa = em_get_rx_port_offloads_capa();
return rx_queue_offload_capa;
}
@@ -1826,7 +1821,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
* to avoid splitting packets that don't fit into
* one buffer.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+ if (dev->data->mtu > RTE_ETHER_MTU ||
rctl_bsize < RTE_ETHER_MAX_LEN) {
if (!dev->data->scattered_rx)
PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1861,14 +1856,14 @@ eth_em_rx_init(struct rte_eth_dev *dev)
if ((hw->mac.type == e1000_ich9lan ||
hw->mac.type == e1000_pch2lan ||
hw->mac.type == e1000_ich10lan) &&
- rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ dev->data->mtu > RTE_ETHER_MTU) {
u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
}
if (hw->mac.type == e1000_pch2lan) {
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
else
e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
@@ -1895,7 +1890,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
/*
* Configure support of jumbo frames, if any.
*/
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= E1000_RCTL_LPE;
else
rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 2fc27bbbc682..a1d5eecc14a1 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1645,7 +1645,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -2332,7 +2331,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
* Configure support of jumbo frames, if any.
*/
max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
rctl |= E1000_RCTL_LPE;
/*
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e2f7213acb84..3fde099ab42c 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1916,7 +1916,6 @@ static int ena_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM;
- rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
/* Inform framework about available features */
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index ca83fbd0a3d8..1b567f01eae0 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
(DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME);
+ DEV_RX_OFFLOAD_KEEP_CRC);
return 0;
}
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 0493e096d031..c5777772a09e 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic)
DEV_TX_OFFLOAD_TCP_TSO;
enic->rx_offload_capa =
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index d0030af0610b..29de39910c6e 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1183,7 +1183,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
@@ -1201,7 +1200,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
DEV_RX_OFFLOAD_SECURITY |
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 400e77ec6200..66f4a5c6df2c 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_HEADER_SPLIT |
DEV_RX_OFFLOAD_RSS_HASH);
}
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9a974dff580e..c2374ebb6759 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -738,7 +738,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 8cf6a98c5690..693048f58704 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2686,7 +2686,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 7e016917769b..54dbd4b798f2 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -944,7 +944,6 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_TCP_LRO);
info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index cf3f20e79f1a..0a4db0891d4a 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3730,7 +3730,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 9b030198e537..554b1142c136 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -2901,7 +2901,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
rxq->max_pkt_len =
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
data->mtu + I40E_ETH_OVERHEAD);
- if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 654337187842..611f1f7722b0 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -588,7 +588,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
max_pkt_len > IAVF_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -968,7 +968,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 00d9e873e64f..b8a537cb8556 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -72,7 +72,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
/* Check if the jumbo frame and maximum packet length are set
* correctly.
*/
- if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (max_pkt_len <= ICE_ETH_MAX_LEN ||
max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -681,7 +681,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index c5335ac3cc8f..44fb38dbe7b1 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -149,7 +149,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_EXTEND |
DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3a1bcc4e3eb7..2e7273cd1e93 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3676,7 +3676,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->rx_offload_capa =
DEV_RX_OFFLOAD_VLAN_STRIP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 220537741d6c..ff362c21d9f5 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -267,7 +267,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
struct ice_rlan_ctx rx_ctx;
enum ice_status err;
uint16_t buf_size;
- struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
uint32_t rxdid = ICE_RXDID_COMMS_OVS;
uint32_t regval;
struct ice_adapter *ad = rxq->vsi->adapter;
@@ -282,7 +281,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
frame_size);
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev_data->mtu > RTE_ETHER_MTU) {
if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_SCTP_CKSUM | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_KEEP_CRC | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 9b7a9d953bff..56132e8c6cd6 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1080,7 +1080,7 @@ igc_rx_init(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
/* Configure support of jumbo frames, if any. */
- if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
+ if (dev->data->mtu > RTE_ETHER_MTU)
rctl |= IGC_RCTL_LPE;
else
rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index d5d610c80bcd..f94a1fed0a38 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_IPV4_CKSUM |
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 0438c3f08c24..063a9c6a6f7f 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
DEV_RX_OFFLOAD_VLAN_EXTEND |
- DEV_RX_OFFLOAD_VLAN_FILTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ DEV_RX_OFFLOAD_VLAN_FILTER;
dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4fbc70b4ca74..46c95425adfb 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6040,7 +6040,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct rte_eth_rxmode *rxmode;
uint32_t rf_dec, rf_int;
uint32_t bcnrc_val;
uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6062,14 +6061,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
bcnrc_val = 0;
}
- rxmode = &dev->data->dev_conf.rxmode;
/*
* Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
* register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
* set as 0x4.
*/
- if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
- (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+ if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
else
IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 4ceb5bf322d8..295e5a39b245 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -597,15 +597,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
if (max_frs < max_frame) {
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- if (max_frame > IXGBE_ETH_MAX_LEN) {
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (max_frame > IXGBE_ETH_MAX_LEN)
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
- } else {
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+ else
hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
- }
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 575cc8c4ffe5..b263dfe1d574 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3036,7 +3036,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_RSS_HASH;
@@ -5079,7 +5078,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
/*
* Configure jumbo frame support, if any.
*/
- if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+ if (dev->data->mtu > RTE_ETHER_MTU) {
hlreg0 |= IXGBE_HLREG0_JUMBOEN;
maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 1801d87334a1..ee2d2b75e59a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
{
uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH;
if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0655965c0fb9..d8d7e481dea0 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
struct mlx5_dev_config *config = &priv->config;
uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
DEV_RX_OFFLOAD_TIMESTAMP |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_RSS_HASH);
if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
#define MRVL_NETA_MRU_TO_MTU(mru) ((mru) - MRVL_NETA_HDRS_LEN)
/** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
- DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
/** Tx offloads capabilities */
#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 44761b695a8d..a6458d2ce9b5 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
/** Port Rx offload capabilities */
#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_CHECKSUM)
/** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index dc906872192f..0003fd54dde5 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -369,8 +369,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
}
- if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
- hw->mtu = dev->data->mtu;
+ hw->mtu = dev->data->mtu;
if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -757,9 +756,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
};
- /* All NFP devices support jumbo frames */
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_VLAN_FILTER)
#define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7a8d19d5411a..4557a0ee1945 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -148,7 +148,6 @@
DEV_RX_OFFLOAD_SCTP_CKSUM | \
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
DEV_RX_OFFLOAD_SCATTER | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index eed0e05a8fc1..698d22e22685 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
- devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
- devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+ devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
droq_pkt->l3_len = hdr_lens.l3_len;
droq_pkt->l4_len = hdr_lens.l4_len;
- if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
- !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
- rte_pktmbuf_free(droq_pkt);
- goto oq_read_fail;
- }
-
if (droq_pkt->nb_segs > 1 &&
!(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 663cb1460f4f..27f6932dc74e 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_RX_OFFLOAD_TCP_LRO |
DEV_RX_OFFLOAD_KEEP_CRC |
DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_VLAN_STRIP |
DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 2c779c4fbc2e..c60ef17a922a 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -941,8 +941,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
{
uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
- caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
return caps & sfc_rx_get_offload_mask(sa);
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
#define NICVF_RX_OFFLOAD_CAPA ( \
DEV_RX_OFFLOAD_CHECKSUM | \
DEV_RX_OFFLOAD_VLAN_STRIP | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 5cd6ecc2a399..7e18dcce0a86 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1974,7 +1974,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
DEV_RX_OFFLOAD_UDP_CKSUM |
DEV_RX_OFFLOAD_TCP_CKSUM |
DEV_RX_OFFLOAD_KEEP_CRC |
- DEV_RX_OFFLOAD_JUMBO_FRAME |
DEV_RX_OFFLOAD_VLAN_FILTER |
DEV_RX_OFFLOAD_RSS_HASH |
DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index a28f9607277e..047d3f43a3cf 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2547,7 +2547,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
host_features = VIRTIO_OPS(hw)->get_features(hw);
dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
- dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index cfffc94c4895..a19895af1f17 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -54,7 +54,6 @@
DEV_RX_OFFLOAD_UDP_CKSUM | \
DEV_RX_OFFLOAD_TCP_CKSUM | \
DEV_RX_OFFLOAD_TCP_LRO | \
- DEV_RX_OFFLOAD_JUMBO_FRAME | \
DEV_RX_OFFLOAD_RSS_HASH)
int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 754fee5a5780..8644454a9aef 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -150,8 +150,7 @@ static struct rte_eth_conf port_conf = {
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_SCATTER |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ DEV_RX_OFFLOAD_SCATTER),
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 39e12fea47f4..4caa9ac3cafa 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -165,8 +165,7 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
- DEV_RX_OFFLOAD_JUMBO_FRAME),
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM,
},
.rx_adv_conf = {
.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a5dfca5a9a4b..5f5ec260f315 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2209,8 +2209,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
nb_rx_queue, nb_tx_queue);
- if (mtu_size > RTE_ETHER_MTU)
- local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
local_port_conf.rxmode.mtu = mtu_size;
if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index e28035998e6c..87538dccc879 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -113,7 +113,6 @@ static struct rte_eth_conf port_conf = {
.mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
RTE_ETHER_CRC_LEN,
.split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
},
.txmode = {
.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 62f6e42a9437..1790ec024072 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
}
memcpy(&conf, &port_conf, sizeof(conf));
- /* Set new MTU */
- if (new_mtu > RTE_ETHER_MTU)
- conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
conf.rxmode.mtu = new_mtu;
ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 67e6356acff6..1890c88a5b01 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -2003,10 +2003,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 46568eba9c01..05385807e83e 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -730,10 +730,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 03c0b8bb15b8..6aa1b66ecfcc 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -2509,10 +2509,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 66d76e87cb25..f27c76bb7a73 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -987,10 +987,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2db1b5fc154f..5de5df997ee9 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -3493,10 +3493,8 @@ config_port_max_pkt_len(struct rte_eth_conf *conf,
dev_info->max_mtu);
conf->rxmode.mtu = max_pkt_len - overhead_len;
- if (conf->rxmode.mtu > RTE_ETHER_MTU) {
+ if (conf->rxmode.mtu > RTE_ETHER_MTU)
conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
- conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 427b882831bf..999809e6ed41 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -631,11 +631,8 @@ us_vhost_parse_args(int argc, char **argv)
return -1;
}
mergeable = !!ret;
- if (ret) {
- vmdq_conf_default.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (ret)
vmdq_conf_default.rxmode.mtu = MAX_MTU;
- }
break;
case OPT_STATS_NUM:
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4e32b6d964f6..982c1bbc8679 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -121,7 +121,6 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
- RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1476,13 +1475,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
- if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
- dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
- /* Use default value */
- dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- }
-
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
/*
@@ -3647,7 +3639,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
int ret;
struct rte_eth_dev_info dev_info;
struct rte_eth_dev *dev;
- int is_jumbo_frame_capable = 0;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
@@ -3675,27 +3666,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
frame_size = mtu + overhead_len;
if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
return -EINVAL;
-
- if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
- is_jumbo_frame_capable = 1;
}
- if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
- return -EINVAL;
-
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
- if (ret == 0) {
+ if (ret == 0)
dev->data->mtu = mtu;
- /* switch to jumbo mode if needed */
- if (mtu > RTE_ETHER_MTU)
- dev->data->dev_conf.rxmode.offloads |=
- DEV_RX_OFFLOAD_JUMBO_FRAME;
- else
- dev->data->dev_conf.rxmode.offloads &=
- ~DEV_RX_OFFLOAD_JUMBO_FRAME;
- }
-
return eth_err(port_id, ret);
}
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index e82019218a91..16c2c16831cb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1356,7 +1356,6 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
--
2.31.1
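The hunks above converge on a single idiom: the removed
DEV_RX_OFFLOAD_JUMBO_FRAME test is replaced by a direct comparison of the
stored MTU against the standard Ethernet MTU. A standalone sketch of that
comparison, illustration only and not part of the patch, with a local
constant mirroring RTE_ETHER_MTU and a struct standing in for the relevant
part of the port data:

#include <stdbool.h>
#include <stdint.h>

#define ETHER_MTU 1500u                /* mirrors RTE_ETHER_MTU */

/* Stand-in for the per-port data a PMD reads in its Rx init path. */
struct port_data {
        uint16_t mtu;                  /* what dev->data->mtu holds */
};

/* Jumbo handling is derived from the MTU alone: anything larger than the
 * standard Ethernet MTU means the LPE/JUMBOEN style bit must be set. */
static bool
needs_jumbo(const struct port_data *data)
{
        return data->mtu > ETHER_MTU;
}

int main(void)
{
        struct port_data p = { .mtu = 9000 };

        return needs_jumbo(&p) ? 0 : 1;        /* 9000 > 1500: jumbo */
}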
* [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
` (2 preceding siblings ...)
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-18 17:31 ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko
Cc: Ferruh Yigit, dev, Huisong Li, Konstantin Ananyev
Both 'rte_eth_dev_configure()' & 'rte_eth_dev_set_mtu()' set the MTU but
have slightly different checks; for example, one checks the minimum MTU
against RTE_ETHER_MIN_MTU while the other uses RTE_ETHER_MIN_LEN.
The checks are moved into a common function to unify them. This also has
the benefit of producing common error logs.
The default 'dev_info->min_mtu' (the one set by ethdev if the driver
doesn't provide one) is changed to ('RTE_ETHER_MIN_LEN' - overhead).
Previously it was 'RTE_ETHER_MIN_MTU', which is the minimum MTU for IPv4
packets. Since the intention is to provide the minimum MTU corresponding
to the minimum frame size, the new default value suits better.
Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
lib/ethdev/rte_ethdev.c | 91 +++++++++++++++++++++++++----------------
lib/ethdev/rte_ethdev.h | 2 +-
2 files changed, 57 insertions(+), 36 deletions(-)
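As an illustration of the unified check (a standalone sketch, separate from
the patch, using local mirrors of the rte_ether.h constants rather than the
library symbols): both paths now validate the requested MTU against the
device range and then compare mtu + overhead against the frame-size bounds,
and the new ethdev default for 'min_mtu' works out to RTE_ETHER_MIN_LEN (64)
minus RTE_ETHER_HDR_LEN (14) and RTE_ETHER_CRC_LEN (4), i.e. 46 bytes.
validate_mtu() below is a simplified stand-in for eth_dev_validate_mtu(),
not the library function itself:

#include <stdint.h>
#include <stdio.h>

#define ETHER_MIN_LEN 64        /* mirrors RTE_ETHER_MIN_LEN */
#define ETHER_HDR_LEN 14        /* mirrors RTE_ETHER_HDR_LEN */
#define ETHER_CRC_LEN  4        /* mirrors RTE_ETHER_CRC_LEN */

/* Simplified local stand-in for eth_dev_validate_mtu(): 0 on success. */
static int
validate_mtu(uint16_t mtu, uint16_t min_mtu, uint16_t max_mtu,
             uint32_t overhead_len, uint32_t max_rx_pktlen)
{
        uint32_t frame_size = (uint32_t)mtu + overhead_len;

        if (mtu < min_mtu || mtu > max_mtu)
                return -1;              /* outside device MTU range */
        if (frame_size < ETHER_MIN_LEN)
                return -1;              /* below minimum frame size */
        if (frame_size > max_rx_pktlen)
                return -1;              /* above device frame limit */
        return 0;
}

int main(void)
{
        /* New default when the driver reports nothing: the MTU of a
         * minimum-size frame, 64 - 14 - 4 = 46 bytes. */
        uint16_t min_mtu = ETHER_MIN_LEN - ETHER_HDR_LEN - ETHER_CRC_LEN;
        uint32_t overhead = ETHER_HDR_LEN + ETHER_CRC_LEN;

        printf("default min_mtu = %d\n", min_mtu);              /* 46 */
        /* 1500 + 18 = 1518 fits a 1518-byte device limit ... */
        printf("1500 accepted: %d\n",
               validate_mtu(1500, min_mtu, UINT16_MAX, overhead, 1518) == 0);
        /* ... while 9000 + 18 = 9018 exceeds it and is rejected. */
        printf("9000 rejected: %d\n",
               validate_mtu(9000, min_mtu, UINT16_MAX, overhead, 1518) != 0);
        return 0;
}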
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 982c1bbc8679..3b8ef9ef22e7 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1327,6 +1327,47 @@ eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
return overhead_len;
}
+/* rte_eth_dev_info_get() should be called prior to this function */
+static int
+eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info,
+ uint16_t mtu)
+{
+ uint32_t overhead_len;
+ uint32_t frame_size;
+
+ if (mtu < dev_info->min_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) < device min MTU (%u) for port_id %u\n",
+ mtu, dev_info->min_mtu, port_id);
+ return -EINVAL;
+ }
+ if (mtu > dev_info->max_mtu) {
+ RTE_ETHDEV_LOG(ERR,
+ "MTU (%u) > device max MTU (%u) for port_id %u\n",
+ mtu, dev_info->max_mtu, port_id);
+ return -EINVAL;
+ }
+
+ overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
+ dev_info->max_mtu);
+ frame_size = mtu + overhead_len;
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) < min frame size (%u) for port_id %u\n",
+ frame_size, RTE_ETHER_MIN_LEN, port_id);
+ return -EINVAL;
+ }
+
+ if (frame_size > dev_info->max_rx_pktlen) {
+ RTE_ETHDEV_LOG(ERR,
+ "Frame size (%u) > device max frame size (%u) for port_id %u\n",
+ frame_size, dev_info->max_rx_pktlen, port_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int
rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
@@ -1334,8 +1375,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
struct rte_eth_conf orig_conf;
- uint32_t max_rx_pktlen;
- uint32_t overhead_len;
int diag;
int ret;
uint16_t old_mtu;
@@ -1384,10 +1423,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
if (ret != 0)
goto rollback;
- /* Get the real Ethernet overhead length */
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
-
/* If number of queues specified by application for both Rx and Tx is
* zero, use driver preferred values. This cannot be done individually
* as it is valid for either Tx or Rx (but not both) to be zero.
@@ -1454,26 +1489,13 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
- /*
- * Check that the maximum RX packet length is supported by the
- * configured device.
- */
if (dev_conf->rxmode.mtu == 0)
dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
- max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
- if (max_rx_pktlen > dev_info.max_rx_pktlen) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
- port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
- ret = -EINVAL;
- goto rollback;
- } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
- RTE_ETHDEV_LOG(ERR,
- "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
- port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
- ret = -EINVAL;
+
+ ret = eth_dev_validate_mtu(port_id, &dev_info,
+ dev->data->dev_conf.rxmode.mtu);
+ if (ret != 0)
goto rollback;
- }
dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
@@ -1482,6 +1504,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
* size is supported by the configured device.
*/
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+ uint32_t max_rx_pktlen;
+ uint32_t overhead_len;
+
+ overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+ dev_info.max_mtu);
+ max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
if (dev_conf->rxmode.max_lro_pkt_size == 0)
dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
ret = eth_dev_check_lro_pkt_size(port_id,
@@ -3400,7 +3428,8 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
dev_info->rx_desc_lim = lim;
dev_info->tx_desc_lim = lim;
dev_info->device = dev->device;
- dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
+ RTE_ETHER_CRC_LEN;
dev_info->max_mtu = UINT16_MAX;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
@@ -3651,21 +3680,13 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
* which relies on dev->dev_ops->dev_infos_get.
*/
if (*dev->dev_ops->dev_infos_get != NULL) {
- uint16_t overhead_len;
- uint32_t frame_size;
-
ret = rte_eth_dev_info_get(port_id, &dev_info);
if (ret != 0)
return ret;
- if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
- return -EINVAL;
-
- overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
- dev_info.max_mtu);
- frame_size = mtu + overhead_len;
- if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
- return -EINVAL;
+ ret = eth_dev_validate_mtu(port_id, &dev_info, mtu);
+ if (ret != 0)
+ return ret;
}
ret = (*dev->dev_ops->mtu_set)(dev, mtu);
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 16c2c16831cb..69766eaae2d4 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3050,7 +3050,7 @@ int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma,
* };
*
* device = dev->device
- * min_mtu = RTE_ETHER_MIN_MTU
+ * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
* max_mtu = UINT16_MAX
*
* The following fields will be populated if support for dev_infos_get()
--
2.31.1
* [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
` (3 preceding siblings ...)
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks Ferruh Yigit
@ 2021-10-18 13:48 ` Ferruh Yigit
2021-10-18 17:31 ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
5 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 13:48 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Ferruh Yigit, dev
Remove 'max-pkt-len' parameter.
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
examples/ip_reassembly/main.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4caa9ac3cafa..4f0e12e62447 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -516,7 +516,6 @@ static void
print_usage(const char *prgname)
{
printf("%s [EAL options] -- -p PORTMASK [-q NQ]"
- " [--max-pkt-len PKTLEN]"
" [--maxflows=<flows>] [--flowttl=<ttl>[(s|ms)]]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of RX queues per lcore\n"
@@ -618,7 +617,6 @@ parse_args(int argc, char **argv)
int option_index;
char *prgname = argv[0];
static struct option lgopts[] = {
- {"max-pkt-len", 1, 0, 0},
{"maxflows", 1, 0, 0},
{"flowttl", 1, 0, 0},
{NULL, 0, 0, 0}
--
2.31.1
* Re: [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
` (4 preceding siblings ...)
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
@ 2021-10-18 17:31 ` Ferruh Yigit
2021-11-05 14:19 ` Xueming(Steven) Li
5 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-18 17:31 UTC (permalink / raw)
To: Jerin Jacob, Xiaoyun Li, Chas Williams, Min Hu (Connor),
Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj, Ray Kinsella,
Bernard Iremonger, Konstantin Ananyev, Kiran Kumar K,
Nithin Dabilpuram, David Hunt, John McNamara, Bruce Richardson,
Igor Russkikh, Steven Webster, Matt Peters,
Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh, John Daley,
Hyong Youb Kim, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Rosen Xu, Shijith Thotton,
Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi,
Heinrich Kuhn, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia,
Nicolas Chautru, Harry van Haaren, Cristian Dumitrescu,
Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Declan Doherty,
Pavan Nikhilesh, Kirill Rybalchenko, Jasvinder Singh,
Thomas Monjalon
Cc: dev, Huisong Li
On 10/18/2021 2:48 PM, Ferruh Yigit wrote:
> There is a confusion on setting max Rx packet length, this patch aims to
> clarify it.
>
> 'rte_eth_dev_configure()' API accepts max Rx packet size via
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> stored into '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but they work in a disconnected way, they
> store the set values in different variables which makes hard to figure
> out which one to use, also having two different method for a related
> functionality is confusing for the users.
>
> Other issues causing confusion is:
> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> Ethernet frame overhead, and this overhead may be different from
> device to device based on what device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> which adds additional confusion and some APIs and PMDs already
> discards this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> field, this adds configuration complexity for application.
>
> As solution, both APIs gets MTU as parameter, and both saves the result
> in same variable '(struct rte_eth_dev)->data->mtu'. For this
> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> from jumbo frame.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> request and it should be used only within configure function and result
> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> both application and PMD uses MTU from this variable.
>
> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> default 'RTE_ETHER_MTU' value is used.
>
> Additional clarification done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for physical Rx/Tx size limitation,
> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> size as Rx buffer size.
> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> or not. If scattered Rx is not supported by device, MTU bigger than Rx
> buffer size should fail.
>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Acked-by: Huisong Li <lihuisong@huawei.com>
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Rosen Xu <rosen.xu@intel.com>
> Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Series applied to dpdk-next-net/main, thanks.
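The scattered Rx rule restated in the quoted message reduces to a single
buffer-size comparison performed at Rx queue setup. A self-contained sketch
of that comparison (the headroom constant mirrors the usual
RTE_PKTMBUF_HEADROOM; a real PMD would query the data room size from the
mempool via rte_pktmbuf_data_room_size()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PKTMBUF_HEADROOM 128   /* mirrors the default RTE_PKTMBUF_HEADROOM */

/* If a maximum-size frame does not fit in one Rx buffer, the PMD must
 * either enable scattered Rx (chained mbufs) or reject the MTU. */
static bool
needs_scattered_rx(uint16_t mtu, uint32_t overhead_len,
                   uint16_t mbuf_data_room_size)
{
        uint32_t frame_size = (uint32_t)mtu + overhead_len;
        uint32_t rx_buf_size = (uint32_t)mbuf_data_room_size - PKTMBUF_HEADROOM;

        return frame_size > rx_buf_size;
}

int main(void)
{
        /* A typical mempool element: 2048 bytes of data room plus headroom. */
        uint16_t data_room = 2048 + PKTMBUF_HEADROOM;

        /* 1518-byte frames fit in one buffer; 9018-byte frames do not. */
        printf("mtu 1500: scattered=%d\n",
               needs_scattered_rx(1500, 18, data_room));
        printf("mtu 9000: scattered=%d\n",
               needs_scattered_rx(9000, 18, data_room));
        return 0;
}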
* Re: [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
@ 2021-10-21 0:43 ` Thomas Monjalon
2021-10-22 11:25 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Thomas Monjalon @ 2021-10-21 0:43 UTC (permalink / raw)
To: Andrew Rybchenko, Ferruh Yigit
Cc: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, dev, Huisong Li
18/10/2021 15:48, Ferruh Yigit:
> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
[...]
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -165,8 +165,7 @@ Jumbo frame
>
> Supports Rx jumbo frames.
>
> -* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.mtu``.
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
So we keep announcing the feature "Jumbo frame" in the doc for MTU-specific values?
* Re: [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag
2021-10-21 0:43 ` Thomas Monjalon
@ 2021-10-22 11:25 ` Ferruh Yigit
2021-10-22 11:29 ` Andrew Rybchenko
0 siblings, 1 reply; 112+ messages in thread
From: Ferruh Yigit @ 2021-10-22 11:25 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko
Cc: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, dev, Huisong Li
On 10/21/2021 1:43 AM, Thomas Monjalon wrote:
> 18/10/2021 15:48, Ferruh Yigit:
>> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
> [...]
>> --- a/doc/guides/nics/features.rst
>> +++ b/doc/guides/nics/features.rst
>> @@ -165,8 +165,7 @@ Jumbo frame
>>
>> Supports Rx jumbo frames.
>>
>> -* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
>> - ``dev_conf.rxmode.mtu``.
>> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
>
> So we keep announcing the feature "Jumbo frame" in the doc for MTU specific values?
>
It is there mainly because I missed removing it all. Keeping the feature so
that PMDs can document the capability is still an option, but since there is
no flag/offload for it, I am for removing the feature completely.
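Whichever way the documentation feature entry goes, the application-facing
result of this series is that jumbo support is expressed only through the
MTU. A minimal usage sketch, assuming a tree with this series applied (EAL
initialization, queue setup and device start are omitted, and the 9000-byte
value is only an example):

#include <rte_ethdev.h>

static int
configure_jumbo_port(uint16_t port_id)
{
        /* No DEV_RX_OFFLOAD_JUMBO_FRAME any more: requesting a large MTU
         * in rxmode at configure time is all that is needed. */
        struct rte_eth_conf conf = {
                .rxmode = {
                        .mtu = 9000,
                },
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret != 0)
                return ret;

        /* The MTU can also be changed later at runtime, where the PMD
         * supports it: */
        return rte_eth_dev_set_mtu(port_id, 9000);
}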
* Re: [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag
2021-10-22 11:25 ` Ferruh Yigit
@ 2021-10-22 11:29 ` Andrew Rybchenko
0 siblings, 0 replies; 112+ messages in thread
From: Andrew Rybchenko @ 2021-10-22 11:29 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon
Cc: Jerin Jacob, Xiaoyun Li, Ajit Khaparde, Somnath Kotur,
Igor Russkikh, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Rahul Lakkireddy, Hemant Agrawal, Sachin Saxena, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu,
Qiming Yang, Andrew Boyer, Rosen Xu, Matan Azrad,
Viacheslav Ovsiienko, Zyta Szpak, Liron Himi, Heinrich Kuhn,
Harman Kalra, Nalla Pradeep, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Devendra Singh Rawat, Maciej Czekaj,
Jiawen Wu, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Konstantin Ananyev, Radu Nicolau, Akhil Goyal, David Hunt,
John McNamara, dev, Huisong Li
On 10/22/21 2:25 PM, Ferruh Yigit wrote:
> On 10/21/2021 1:43 AM, Thomas Monjalon wrote:
>> 18/10/2021 15:48, Ferruh Yigit:
>>> Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.
>> [...]
>>> --- a/doc/guides/nics/features.rst
>>> +++ b/doc/guides/nics/features.rst
>>> @@ -165,8 +165,7 @@ Jumbo frame
>>> Supports Rx jumbo frames.
>>> -* **[uses] rte_eth_rxconf,rte_eth_rxmode**:
>>> ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
>>> - ``dev_conf.rxmode.mtu``.
>>> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``.
>>
>> So we keep announcing the feature "Jumbo frame" in the doc for MTU
>> specific values?
>>
>
> It is there mainly because I missed removing it all. Keeping the feature
> so that PMDs can document the capability is still an option, but since
> there is no flag/offload for it, I am for removing the feature completely.
+1
* Re: [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length
2021-10-18 17:31 ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
@ 2021-11-05 14:19 ` Xueming(Steven) Li
2021-11-05 14:39 ` Ferruh Yigit
0 siblings, 1 reply; 112+ messages in thread
From: Xueming(Steven) Li @ 2021-11-05 14:19 UTC (permalink / raw)
To: mczekaj, bruce.richardson, mdr, sthotton, Matan Azrad,
kirankumark, beilei.xing, rmody, chenbo.xia, somnath.kotur,
jiawenwu, heinrich.kuhn, hemant.agrawal, maxime.coquelin,
asomalap, andrew.rybchenko, skori, pbhagavatula, ajit.khaparde,
hkalra, shaibran, harry.van.haaren, chas3, jasvinder.singh,
cloud.wangxiaoyun, jerinj, qiming.yang, kirill.rybalchenko,
NBU-Contact-Thomas Monjalon, srinivasan, mk, xiaoyun.li,
bernard.iremonger, mw, gakhil, keith.wiles, xiao.w.wang,
xuanziyang2, nicolas.chautru, qi.z.zhang, g.singh, aboyer,
steven.webster, john.mcnamara, evgenys, humin29, johndale,
irusskikh, tomasz.kantecki, dsinghrawat, shshaikh, oulijun,
lironh, ferruh.yigit, Slava Ovsiienko, sachin.saxena, jianwang,
rahul.lakkireddy, matt.peters, skoteshwar, rosen.xu, zr,
jingjing.wu, konstantin.ananyev, radu.nicolau, yisen.zhuang,
igorch, declan.doherty, haiyue.wang, zhouguoyang, hyonkim,
ndabilpuram, cristian.dumitrescu, david.hunt
Cc: dev, lihuisong
On Mon, 2021-10-18 at 18:31 +0100, Ferruh Yigit wrote:
> On 10/18/2021 2:48 PM, Ferruh Yigit wrote:
> > There is a confusion on setting max Rx packet length, this patch aims to
> > clarify it.
> >
> > 'rte_eth_dev_configure()' API accepts max Rx packet size via
> > 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> > rte_eth_conf'.
> >
> > Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
> > stored into '(struct rte_eth_dev)->data->mtu'.
> >
> > These two APIs are related but they work in a disconnected way, they
> > store the set values in different variables which makes hard to figure
> > out which one to use, also having two different method for a related
> > functionality is confusing for the users.
> >
> > Other issues causing confusion is:
> > * maximum transmission unit (MTU) is payload of the Ethernet frame. And
> > 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
> > Ethernet frame overhead, and this overhead may be different from
> > device to device based on what device supports, like VLAN and QinQ.
> > * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
> > which adds additional confusion and some APIs and PMDs already
> > discards this documented behavior.
> > * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
> > field, this adds configuration complexity for application.
> >
> > As solution, both APIs gets MTU as parameter, and both saves the result
> > in same variable '(struct rte_eth_dev)->data->mtu'. For this
> > 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
> > from jumbo frame.
> >
> > For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
> > request and it should be used only within configure function and result
> > should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
> > both application and PMD uses MTU from this variable.
> >
> > When application doesn't provide an MTU during 'rte_eth_dev_configure()'
> > default 'RTE_ETHER_MTU' value is used.
> >
> > Additional clarification done on scattered Rx configuration, in
> > relation to MTU and Rx buffer size.
> > MTU is used to configure the device for physical Rx/Tx size limitation,
> > Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
> > size as Rx buffer size.
> > PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
> > or not. If scattered Rx is not supported by device, MTU bigger than Rx
> > buffer size should fail.
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
> > Acked-by: Huisong Li <lihuisong@huawei.com>
> > Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Rosen Xu <rosen.xu@intel.com>
> > Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
>
> Series applied to dpdk-next-net/main, thanks.
>
Hi Ferruh,
I noticed that there is no Cc to stable in this "fix" patch; do you expect
it to be part of LTS?
Thanks,
Xueming
* Re: [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length
2021-11-05 14:19 ` Xueming(Steven) Li
@ 2021-11-05 14:39 ` Ferruh Yigit
0 siblings, 0 replies; 112+ messages in thread
From: Ferruh Yigit @ 2021-11-05 14:39 UTC (permalink / raw)
To: Xueming(Steven) Li, mczekaj, bruce.richardson, mdr, sthotton,
Matan Azrad, kirankumark, beilei.xing, rmody, chenbo.xia,
somnath.kotur, jiawenwu, heinrich.kuhn, hemant.agrawal,
maxime.coquelin, asomalap, andrew.rybchenko, skori, pbhagavatula,
ajit.khaparde, hkalra, shaibran, harry.van.haaren, chas3,
jasvinder.singh, cloud.wangxiaoyun, jerinj, qiming.yang,
kirill.rybalchenko, NBU-Contact-Thomas Monjalon, srinivasan, mk,
xiaoyun.li, bernard.iremonger, mw, gakhil, keith.wiles,
xiao.w.wang, xuanziyang2, nicolas.chautru, qi.z.zhang, g.singh,
aboyer, steven.webster, john.mcnamara, evgenys, humin29,
johndale, irusskikh, tomasz.kantecki, dsinghrawat, shshaikh,
oulijun, lironh, Slava Ovsiienko, sachin.saxena, jianwang,
rahul.lakkireddy, matt.peters, skoteshwar, rosen.xu, zr,
jingjing.wu, konstantin.ananyev, radu.nicolau, yisen.zhuang,
igorch, declan.doherty, haiyue.wang, zhouguoyang, hyonkim,
ndabilpuram, cristian.dumitrescu, david.hunt
Cc: dev, lihuisong
On 11/5/2021 2:19 PM, Xueming(Steven) Li wrote:
> On Mon, 2021-10-18 at 18:31 +0100, Ferruh Yigit wrote:
>> On 10/18/2021 2:48 PM, Ferruh Yigit wrote:
>>> There is a confusion on setting max Rx packet length, this patch aims to
>>> clarify it.
>>>
>>> 'rte_eth_dev_configure()' API accepts max Rx packet size via
>>> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
>>> rte_eth_conf'.
>>>
>>> Also 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and result
>>> stored into '(struct rte_eth_dev)->data->mtu'.
>>>
>>> These two APIs are related but they work in a disconnected way, they
>>> store the set values in different variables which makes hard to figure
>>> out which one to use, also having two different method for a related
>>> functionality is confusing for the users.
>>>
>>> Other issues causing confusion is:
>>> * maximum transmission unit (MTU) is payload of the Ethernet frame. And
>>> 'max_rx_pkt_len' is the size of the Ethernet frame. Difference is
>>> Ethernet frame overhead, and this overhead may be different from
>>> device to device based on what device supports, like VLAN and QinQ.
>>> * 'max_rx_pkt_len' is only valid when application requested jumbo frame,
>>> which adds additional confusion and some APIs and PMDs already
>>> discards this documented behavior.
>>> * For the jumbo frame enabled case, 'max_rx_pkt_len' is an mandatory
>>> field, this adds configuration complexity for application.
>>>
>>> As solution, both APIs gets MTU as parameter, and both saves the result
>>> in same variable '(struct rte_eth_dev)->data->mtu'. For this
>>> 'max_rx_pkt_len' updated as 'mtu', and it is always valid independent
>>> from jumbo frame.
>>>
>>> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is user
>>> request and it should be used only within configure function and result
>>> should be stored to '(struct rte_eth_dev)->data->mtu'. After that point
>>> both application and PMD uses MTU from this variable.
>>>
>>> When application doesn't provide an MTU during 'rte_eth_dev_configure()'
>>> default 'RTE_ETHER_MTU' value is used.
>>>
>>> Additional clarification done on scattered Rx configuration, in
>>> relation to MTU and Rx buffer size.
>>> MTU is used to configure the device for physical Rx/Tx size limitation,
>>> Rx buffer is where to store Rx packets, many PMDs use mbuf data buffer
>>> size as Rx buffer size.
>>> PMDs compare MTU against Rx buffer size to decide enabling scattered Rx
>>> or not. If scattered Rx is not supported by device, MTU bigger than Rx
>>> buffer size should fail.
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>>> Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
>>> Acked-by: Huisong Li <lihuisong@huawei.com>
>>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>> Acked-by: Rosen Xu <rosen.xu@intel.com>
>>> Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
>>
>> Series applied to dpdk-next-net/main, thanks.
>>
>
> Hi Ferruh,
>
> I noticed that no cc stable in this this "fix" patch, do you expect it
> to be part of LTS?
>
Hi Xueming,
I left it out intentionally; the patch changes how the frame size / MTU is
configured, so it is not exactly a fix and I think it is not suitable for
backport.
End of thread (newest message: 2021-11-05 14:39 UTC)
Thread overview: 112+ messages
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-13 13:48 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-18 7:49 ` Xu, Rosen
2021-07-19 14:38 ` Ajit Khaparde
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-13 13:56 ` Andrew Rybchenko
2021-07-18 7:52 ` Xu, Rosen
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-13 14:07 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-21 12:39 ` Ferruh Yigit
2021-07-18 7:53 ` Xu, Rosen
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
2021-07-21 16:46 ` Ferruh Yigit
2021-07-22 1:31 ` Ajit Khaparde
2021-07-22 10:27 ` Ferruh Yigit
2021-07-22 10:38 ` Andrew Rybchenko
2021-07-18 7:45 ` Xu, Rosen
2021-07-19 3:35 ` Huisong Li
2021-07-21 15:29 ` Ferruh Yigit
2021-07-22 7:21 ` Huisong Li
2021-07-22 10:12 ` Ferruh Yigit
2021-07-22 10:15 ` Andrew Rybchenko
2021-07-22 14:43 ` Stephen Hemminger
2021-09-17 1:08 ` Min Hu (Connor)
2021-09-17 8:04 ` Ferruh Yigit
2021-09-17 8:16 ` Min Hu (Connor)
2021-09-17 8:17 ` Min Hu (Connor)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-07-23 3:29 ` Huisong Li
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-04 5:08 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-04 5:09 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
[not found] ` <CAOBf=muYkU2dwgi3iC8Q7pdSNTJsMUwWYdXj14KeN_=_mUGa0w@mail.gmail.com>
2021-10-04 7:55 ` Somnath Kotur
2021-10-05 16:48 ` Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 15:07 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
2021-10-05 16:46 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 8:39 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 8:38 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-05 22:07 ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
2021-10-06 6:08 ` Somnath Kotur
2021-10-08 8:36 ` Xu, Rosen
2021-10-10 6:30 ` Matan Azrad
2021-10-11 21:59 ` Ferruh Yigit
2021-10-12 7:03 ` Matan Azrad
2021-10-12 11:03 ` Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 17:20 ` Ananyev, Konstantin
2021-10-09 10:58 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-08 17:19 ` Ananyev, Konstantin
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 17:11 ` Ananyev, Konstantin
2021-10-09 11:09 ` lihuisong (C)
2021-10-10 5:46 ` Matan Azrad
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-08 16:51 ` Ananyev, Konstantin
2021-10-11 19:50 ` Ferruh Yigit
2021-10-09 11:43 ` lihuisong (C)
2021-10-11 20:15 ` Ferruh Yigit
2021-10-12 4:02 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-08 16:53 ` Ananyev, Konstantin
2021-10-08 15:57 ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
2021-10-11 19:47 ` Ferruh Yigit
2021-10-09 10:56 ` lihuisong (C)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-12 17:20 ` Hyong Youb Kim (hyonkim)
2021-10-13 7:16 ` Michał Krawczyk
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-12 5:58 ` Andrew Rybchenko
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-12 6:02 ` [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length Andrew Rybchenko
2021-10-12 9:42 ` Ananyev, Konstantin
2021-10-13 7:08 ` Xu, Rosen
2021-10-15 1:31 ` Hyong Youb Kim (hyonkim)
2021-10-16 0:24 ` Ferruh Yigit
2021-10-18 8:54 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-21 0:43 ` Thomas Monjalon
2021-10-22 11:25 ` Ferruh Yigit
2021-10-22 11:29 ` Andrew Rybchenko
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-18 17:31 ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-11-05 14:19 ` Xueming(Steven) Li
2021-11-05 14:39 ` Ferruh Yigit