DPDK patches and discussions
From: "Xu, Rosen" <rosen.xu@intel.com>
To: "Yigit, Ferruh" <ferruh.yigit@intel.com>,
	Jerin Jacob <jerinj@marvell.com>,
	"Li, Xiaoyun" <xiaoyun.li@intel.com>,
	Chas Williams <chas3@att.com>,
	"Min Hu (Connor)" <humin29@huawei.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	Sachin Saxena <sachin.saxena@oss.nxp.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Wang, Xiao W" <xiao.w.wang@intel.com>,
	"Matan Azrad" <matan@nvidia.com>,
	Shahaf Shuler <shahafs@nvidia.com>,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
	Harman Kalra <hkalra@marvell.com>,
	Maciej Czekaj <mczekaj@marvell.com>, Ray Kinsella <mdr@ashroe.eu>,
	"Neil Horman" <nhorman@tuxdriver.com>,
	"Iremonger, Bernard" <bernard.iremonger@intel.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
	"Mcnamara, John" <john.mcnamara@intel.com>,
	Igor Russkikh <igor.russkikh@aquantia.com>,
	Pavel Belous <pavel.belous@aquantia.com>,
	Steven Webster <steven.webster@windriver.com>,
	Matt Peters <matt.peters@windriver.com>,
	Somalapuram Amaranath <asomalap@amd.com>,
	Rasesh Mody <rmody@marvell.com>,
	Shahed Shaikh <shshaikh@marvell.com>,
	Ajit Khaparde <ajit.khaparde@broadcom.com>,
	"Somnath Kotur" <somnath.kotur@broadcom.com>,
	Nithin Dabilpuram <ndabilpuram@marvell.com>,
	Kiran Kumar K <kirankumark@marvell.com>,
	"Sunil Kumar Kori" <skori@marvell.com>,
	Satha Rao <skoteshwar@marvell.com>,
	"Rahul Lakkireddy" <rahul.lakkireddy@chelsio.com>,
	"Wang, Haiyue" <haiyue.wang@intel.com>,
	Marcin Wojtas <mw@semihalf.com>,
	Michal Krawczyk <mk@semihalf.com>,
	Guy Tzalik <gtzalik@amazon.com>,
	Evgeny Schemeilin <evgenys@amazon.com>,
	Igor Chauskin <igorch@amazon.com>,
	Gagandeep Singh <g.singh@nxp.com>,
	"Daley, John" <johndale@cisco.com>,
	Hyong Youb Kim <hyonkim@cisco.com>,
	Ziyang Xuan <xuanziyang2@huawei.com>,
	Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
	Guoyang Zhou <zhouguoyang@huawei.com>,
	"Yisen Zhuang" <yisen.zhuang@huawei.com>,
	Lijun Ou <oulijun@huawei.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>,
	"Yang, Qiming" <qiming.yang@intel.com>,
	Andrew Boyer <aboyer@pensando.io>,
	Shijith Thotton <sthotton@marvell.com>,
	Srisivasubramanian Srinivasan <srinivasan@marvell.com>,
	Zyta Szpak <zr@semihalf.com>, Liron Himi <lironh@marvell.com>,
	Heinrich Kuhn <heinrich.kuhn@netronome.com>,
	"Devendra Singh Rawat" <dsinghrawat@marvell.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	"Wiles, Keith" <keith.wiles@intel.com>,
	Jiawen Wu <jiawenwu@trustnetic.com>,
	Jian Wang <jianwang@trustnetic.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	"Xia, Chenbo" <chenbo.xia@intel.com>,
	"Chautru, Nicolas" <nicolas.chautru@intel.com>,
	"Hunt, David" <david.hunt@intel.com>,
	"Van Haaren, Harry" <harry.van.haaren@intel.com>,
	"Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>,
	"Nicolau, Radu" <radu.nicolau@intel.com>,
	Akhil Goyal <gakhil@marvell.com>,
	"Kantecki, Tomasz" <tomasz.kantecki@intel.com>,
	"Doherty, Declan" <declan.doherty@intel.com>,
	Pavan Nikhilesh <pbhagavatula@marvell.com>,
	"Rybalchenko, Kirill" <kirill.rybalchenko@intel.com>,
	"Singh, Jasvinder" <jasvinder.singh@intel.com>,
	Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length
Date: Sun, 18 Jul 2021 07:45:02 +0000
Message-ID: <BYAPR11MB29011CE99ACD0100B30323DE89E09@BYAPR11MB2901.namprd11.prod.outlook.com>
In-Reply-To: <20210709172923.3369846-1-ferruh.yigit@intel.com>

Hi,

> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Saturday, July 10, 2021 1:29
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org
> Subject: [PATCH 1/4] ethdev: fix max Rx packet length
> 
> There is confusion around setting the max Rx packet length; this
> patch aims to clarify it.
> 
> The 'rte_eth_dev_configure()' API accepts the max Rx packet size via
> the 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
> 
> Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and
> the result is stored in '(struct rte_eth_dev)->data->mtu'.
> 
> These two APIs are related but work in a disconnected way: they store
> the configured values in different variables, which makes it hard to
> figure out which one to use, and having two related methods is
> confusing for users.
> 
> Other issues causing confusion are:
> * The maximum transmission unit (MTU) is the payload of the Ethernet
>   frame, while 'max_rx_pkt_len' is the size of the whole Ethernet
>   frame. The difference is the Ethernet frame overhead, which may vary
>   from device to device based on what the device supports, like VLAN
>   and QinQ.
> * 'max_rx_pkt_len' is only valid when the application requests jumbo
>   frames, which adds further confusion, and some APIs and PMDs already
>   disregard this documented behavior.
> * For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
>   field, which adds configuration complexity for the application.
> 
> As a solution, both APIs take the MTU as parameter and save the result
> in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
> 'max_rx_pkt_len' is renamed to 'mtu', and it is always valid,
> independent of jumbo frame support.
> 
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
> user request; it should be used only within the configure function,
> and the result should be stored in '(struct rte_eth_dev)->data->mtu'.
> After that point both the application and the PMD use the MTU from
> this variable.
> 
> When the application doesn't provide an MTU during
> 'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
> 
> As additional clarification, the MTU is used to configure the device
> for the physical Rx/Tx limitation. A related issue is the size of the
> buffer used to store Rx packets: many PMDs use the mbuf data buffer
> size as the Rx buffer size, and compare the MTU against the Rx buffer
> size to decide whether to enable scattered Rx, if the PMD supports it.
> If scattered Rx is not supported by the device, an MTU bigger than the
> Rx buffer size should fail.
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
>  app/test-eventdev/test_perf_common.c          |  1 -
>  app/test-eventdev/test_pipeline_common.c      |  5 +-
>  app/test-pmd/cmdline.c                        | 45 ++++-----
>  app/test-pmd/config.c                         | 18 ++--
>  app/test-pmd/parameters.c                     |  4 +-
>  app/test-pmd/testpmd.c                        | 94 ++++++++++--------
>  app/test-pmd/testpmd.h                        |  2 +-
>  app/test/test_link_bonding.c                  |  1 -
>  app/test/test_link_bonding_mode4.c            |  1 -
>  app/test/test_link_bonding_rssconf.c          |  2 -
>  app/test/test_pmd_perf.c                      |  1 -
>  doc/guides/nics/dpaa.rst                      |  2 +-
>  doc/guides/nics/dpaa2.rst                     |  2 +-
>  doc/guides/nics/features.rst                  |  2 +-
>  doc/guides/nics/fm10k.rst                     |  2 +-
>  doc/guides/nics/mlx5.rst                      |  4 +-
>  doc/guides/nics/octeontx.rst                  |  2 +-
>  doc/guides/nics/thunderx.rst                  |  2 +-
>  doc/guides/rel_notes/deprecation.rst          | 25 -----
>  doc/guides/sample_app_ug/flow_classify.rst    |  8 +-
>  doc/guides/sample_app_ug/ioat.rst             |  1 -
>  doc/guides/sample_app_ug/ip_reassembly.rst    |  2 +-
>  doc/guides/sample_app_ug/skeleton.rst         |  8 +-
>  drivers/net/atlantic/atl_ethdev.c             |  3 -
>  drivers/net/avp/avp_ethdev.c                  | 17 ++--
>  drivers/net/axgbe/axgbe_ethdev.c              |  7 +-
>  drivers/net/bnx2x/bnx2x_ethdev.c              |  6 +-
>  drivers/net/bnxt/bnxt_ethdev.c                | 21 ++--
>  drivers/net/bonding/rte_eth_bond_pmd.c        |  4 +-
>  drivers/net/cnxk/cnxk_ethdev.c                |  9 +-
>  drivers/net/cnxk/cnxk_ethdev_ops.c            |  8 +-
>  drivers/net/cxgbe/cxgbe_ethdev.c              | 12 +--
>  drivers/net/cxgbe/cxgbe_main.c                |  3 +-
>  drivers/net/cxgbe/sge.c                       |  3 +-
>  drivers/net/dpaa/dpaa_ethdev.c                | 52 ++++------
>  drivers/net/dpaa2/dpaa2_ethdev.c              | 31 +++---
>  drivers/net/e1000/em_ethdev.c                 |  4 +-
>  drivers/net/e1000/igb_ethdev.c                | 18 +---
>  drivers/net/e1000/igb_rxtx.c                  | 16 ++-
>  drivers/net/ena/ena_ethdev.c                  | 27 ++---
>  drivers/net/enetc/enetc_ethdev.c              | 24 ++---
>  drivers/net/enic/enic_ethdev.c                |  2 +-
>  drivers/net/enic/enic_main.c                  | 42 ++++----
>  drivers/net/fm10k/fm10k_ethdev.c              |  2 +-
>  drivers/net/hinic/hinic_pmd_ethdev.c          | 20 ++--
>  drivers/net/hns3/hns3_ethdev.c                | 28 ++----
>  drivers/net/hns3/hns3_ethdev_vf.c             | 38 +++----
>  drivers/net/hns3/hns3_rxtx.c                  | 10 +-
>  drivers/net/i40e/i40e_ethdev.c                | 10 +-
>  drivers/net/i40e/i40e_ethdev_vf.c             | 14 +--
>  drivers/net/i40e/i40e_rxtx.c                  |  4 +-
>  drivers/net/iavf/iavf_ethdev.c                |  9 +-
>  drivers/net/ice/ice_dcf_ethdev.c              |  5 +-
>  drivers/net/ice/ice_ethdev.c                  | 14 +--
>  drivers/net/ice/ice_rxtx.c                    | 12 +--
>  drivers/net/igc/igc_ethdev.c                  | 51 +++-------
>  drivers/net/igc/igc_ethdev.h                  |  7 ++
>  drivers/net/igc/igc_txrx.c                    | 22 ++---
>  drivers/net/ionic/ionic_ethdev.c              | 12 +--
>  drivers/net/ionic/ionic_rxtx.c                |  6 +-
>  drivers/net/ipn3ke/ipn3ke_representor.c       | 10 +-
>  drivers/net/ixgbe/ixgbe_ethdev.c              | 35 +++----
>  drivers/net/ixgbe/ixgbe_pf.c                  |  6 +-
>  drivers/net/ixgbe/ixgbe_rxtx.c                | 15 ++-
>  drivers/net/liquidio/lio_ethdev.c             | 20 +---
>  drivers/net/mlx4/mlx4_rxq.c                   | 17 ++--
>  drivers/net/mlx5/mlx5_rxq.c                   | 25 ++---
>  drivers/net/mvneta/mvneta_ethdev.c            |  7 --
>  drivers/net/mvneta/mvneta_rxtx.c              | 13 ++-
>  drivers/net/mvpp2/mrvl_ethdev.c               | 34 +++----
>  drivers/net/nfp/nfp_net.c                     |  9 +-
>  drivers/net/octeontx/octeontx_ethdev.c        | 12 +--
>  drivers/net/octeontx2/otx2_ethdev.c           |  2 +-
>  drivers/net/octeontx2/otx2_ethdev_ops.c       | 11 +--
>  drivers/net/pfe/pfe_ethdev.c                  |  7 +-
>  drivers/net/qede/qede_ethdev.c                | 16 +--
>  drivers/net/qede/qede_rxtx.c                  |  8 +-
>  drivers/net/sfc/sfc_ethdev.c                  |  4 +-
>  drivers/net/sfc/sfc_port.c                    |  6 +-
>  drivers/net/tap/rte_eth_tap.c                 |  7 +-
>  drivers/net/thunderx/nicvf_ethdev.c           | 13 +--
>  drivers/net/txgbe/txgbe_ethdev.c              |  7 +-
>  drivers/net/txgbe/txgbe_ethdev.h              |  4 +
>  drivers/net/txgbe/txgbe_ethdev_vf.c           |  2 -
>  drivers/net/txgbe/txgbe_rxtx.c                | 19 ++--
>  drivers/net/virtio/virtio_ethdev.c            |  4 +-
>  examples/bbdev_app/main.c                     |  1 -
>  examples/bond/main.c                          |  1 -
>  examples/distributor/main.c                   |  1 -
>  .../pipeline_worker_generic.c                 |  1 -
>  .../eventdev_pipeline/pipeline_worker_tx.c    |  1 -
>  examples/flow_classify/flow_classify.c        | 10 +-
>  examples/ioat/ioatfwd.c                       |  1 -
>  examples/ip_fragmentation/main.c              | 11 +--
>  examples/ip_pipeline/link.c                   |  2 +-
>  examples/ip_reassembly/main.c                 | 11 ++-
>  examples/ipsec-secgw/ipsec-secgw.c            |  7 +-
>  examples/ipv4_multicast/main.c                |  8 +-
>  examples/kni/main.c                           |  6 +-
>  examples/l2fwd-cat/l2fwd-cat.c                |  8 +-
>  examples/l2fwd-crypto/main.c                  |  1 -
>  examples/l2fwd-event/l2fwd_common.c           |  1 -
>  examples/l3fwd-acl/main.c                     | 11 +--
>  examples/l3fwd-graph/main.c                   |  4 +-
>  examples/l3fwd-power/main.c                   | 11 ++-
>  examples/l3fwd/main.c                         |  4 +-
>  .../performance-thread/l3fwd-thread/main.c    |  7 +-
>  examples/pipeline/obj.c                       |  2 +-
>  examples/ptpclient/ptpclient.c                | 10 +-
>  examples/qos_meter/main.c                     |  1 -
>  examples/qos_sched/init.c                     |  1 -
>  examples/rxtx_callbacks/main.c                | 10 +-
>  examples/skeleton/basicfwd.c                  | 10 +-
>  examples/vhost/main.c                         |  4 +-
>  examples/vm_power_manager/main.c              | 11 +--
>  lib/ethdev/rte_ethdev.c                       | 98 +++++++++++--------
>  lib/ethdev/rte_ethdev.h                       |  2 +-
>  lib/ethdev/rte_ethdev_trace.h                 |  2 +-
>  118 files changed, 531 insertions(+), 848 deletions(-)
> 
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
>  	struct rte_eth_conf port_conf = {
>  		.rxmode = {
>  			.mq_mode = ETH_MQ_RX_RSS,
> -			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  			.split_hdr_size = 0,
>  		},
>  		.rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
>  		return -EINVAL;
>  	}
> 
> -	port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> -	if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> +	port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> +		RTE_ETHER_CRC_LEN;
> +	if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
>  		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
>  	t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 8468018cf35d..8bdc042f6e8e 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result,
>  				__rte_unused void *data)
>  {
>  	struct cmd_config_max_pkt_len_result *res = parsed_result;
> -	uint32_t max_rx_pkt_len_backup = 0;
> -	portid_t pid;
> +	portid_t port_id;
>  	int ret;
> 
> +	if (strcmp(res->name, "max-pkt-len")) {
> +		printf("Unknown parameter\n");
> +		return;
> +	}
> +
>  	if (!all_ports_stopped()) {
>  		printf("Please stop all ports first\n");
>  		return;
>  	}
> 
> -	RTE_ETH_FOREACH_DEV(pid) {
> -		struct rte_port *port = &ports[pid];
> -
> -		if (!strcmp(res->name, "max-pkt-len")) {
> -			if (res->value < RTE_ETHER_MIN_LEN) {
> -				printf("max-pkt-len can not be less than %d\n",
> -						RTE_ETHER_MIN_LEN);
> -				return;
> -			}
> -			if (res->value == port->dev_conf.rxmode.max_rx_pkt_len)
> -				return;
> -
> -			ret = eth_dev_info_get_print_err(pid, &port->dev_info);
> -			if (ret != 0) {
> -				printf("rte_eth_dev_info_get() failed for port %u\n",
> -					pid);
> -				return;
> -			}
> +	RTE_ETH_FOREACH_DEV(port_id) {
> +		struct rte_port *port = &ports[port_id];
> 
> -			max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len;
> +		if (res->value < RTE_ETHER_MIN_LEN) {
> +			printf("max-pkt-len can not be less than %d\n",
> +					RTE_ETHER_MIN_LEN);
> +			return;
> +		}
> 
> -			port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> -			if (update_jumbo_frame_offload(pid) != 0)
> -				port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup;
> -		} else {
> -			printf("Unknown parameter\n");
> +		ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> +		if (ret != 0) {
> +			printf("rte_eth_dev_info_get() failed for port %u\n",
> +				port_id);
>  			return;
>  		}
> +
> +		update_jumbo_frame_offload(port_id, res->value);
>  	}
> 
>  	init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 04ae0feb5852..a87265d7638b 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
>  	int diag;
>  	struct rte_port *rte_port = &ports[port_id];
>  	struct rte_eth_dev_info dev_info;
> -	uint16_t eth_overhead;
>  	int ret;
> 
>  	if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
>  		return;
>  	}
>  	diag = rte_eth_dev_set_mtu(port_id, mtu);
> -	if (diag)
> +	if (diag) {
>  		printf("Set MTU failed. diag=%d\n", diag);
> -	else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		/*
> -		 * Ether overhead in driver is equal to the difference of
> -		 * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
> -		 * device supports jumbo frame.
> -		 */
> -		eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> +		return;
> +	}
> +
> +	rte_port->dev_conf.rxmode.mtu = mtu;
> +
> +	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>  		if (mtu > RTE_ETHER_MTU) {
>  			rte_port->dev_conf.rxmode.offloads |=
>  					DEV_RX_OFFLOAD_JUMBO_FRAME;
> -			rte_port->dev_conf.rxmode.max_rx_pkt_len =
> -						mtu + eth_overhead;
>  		} else
>  			rte_port->dev_conf.rxmode.offloads &=
>  					~DEV_RX_OFFLOAD_JUMBO_FRAME;
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 5e69d2aa8cfe..8e8556d74a4a 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv)
>  			if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
>  				n = atoi(optarg);
>  				if (n >= RTE_ETHER_MIN_LEN)
> -					rx_mode.max_rx_pkt_len = (uint32_t) n;
> +					rx_mode.mtu = (uint32_t) n -
> +						(RTE_ETHER_HDR_LEN +
> +						 RTE_ETHER_CRC_LEN);
>  				else
>  					rte_exit(EXIT_FAILURE,
>  						 "Invalid max-pkt-len=%d - should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 1cdd3cdd12b6..2c79cae05664 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1;
>  /*
>   * Ethernet device configuration.
>   */
> -struct rte_eth_rxmode rx_mode = {
> -	/* Default maximum frame length.
> -	 * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> -	 * in init_config().
> -	 */
> -	.max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
> 
>  struct rte_eth_txmode tx_mode = {
>  	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1417,6 +1411,20 @@ check_nb_hairpinq(queueid_t hairpinq)
>  	return 0;
>  }
> 
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> +	uint32_t eth_overhead;
> +
> +	if (dev_info->max_mtu != UINT16_MAX &&
> +	    dev_info->max_rx_pktlen > dev_info->max_mtu)
> +		eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu;
> +	else
> +		eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> +	return eth_overhead;
> +}
> +
>  static void
>  init_config(void)
>  {
> @@ -1465,7 +1473,7 @@ init_config(void)
>  			rte_exit(EXIT_FAILURE,
>  				 "rte_eth_dev_info_get() failed\n");
> 
> -		ret = update_jumbo_frame_offload(pid);
> +		ret = update_jumbo_frame_offload(pid, 0);
>  		if (ret != 0)
>  			printf("Updating jumbo frame offload failed for port %u\n",
>  				pid);
> @@ -1512,14 +1520,19 @@ init_config(void)
>  		 */
>  		if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
>  				port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> -			data_size = rx_mode.max_rx_pkt_len /
> -				port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> +			uint32_t eth_overhead = get_eth_overhead(&port->dev_info);
> +			uint16_t mtu;
> 
> -			if ((data_size + RTE_PKTMBUF_HEADROOM) >
> +			if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> +				data_size = mtu + eth_overhead /
> +					port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> +
> +				if ((data_size + RTE_PKTMBUF_HEADROOM) >
>  							mbuf_data_size[0]) {
> -				mbuf_data_size[0] = data_size +
> -						 RTE_PKTMBUF_HEADROOM;
> -				warning = 1;
> +					mbuf_data_size[0] = data_size +
> +							RTE_PKTMBUF_HEADROOM;
> +					warning = 1;
> +				}
>  			}
>  		}
>  	}
> @@ -3352,43 +3365,44 @@ rxtx_port_config(struct rte_port *port)
> 
>  /*
>   * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload,
> - * MTU is also aligned if JUMBO_FRAME offload is not set.
> + * MTU is also aligned.
>   *
>   * port->dev_info should be set before calling this function.
>   *
> + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> + *
>   * return 0 on success, negative on error
>   */
>  int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
>  {
>  	struct rte_port *port = &ports[portid];
>  	uint32_t eth_overhead;
>  	uint64_t rx_offloads;
> -	int ret;
> +	uint16_t mtu, new_mtu;
>  	bool on;
> 
> -	/* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> -	if (port->dev_info.max_mtu != UINT16_MAX &&
> -	    port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> -		eth_overhead = port->dev_info.max_rx_pktlen -
> -				port->dev_info.max_mtu;
> -	else
> -		eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +	eth_overhead = get_eth_overhead(&port->dev_info);
> 
> -	rx_offloads = port->dev_conf.rxmode.offloads;
> +	if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> +		printf("Failed to get MTU for port %u\n", portid);
> +		return -1;
> +	}
> +
> +	if (max_rx_pktlen == 0)
> +		max_rx_pktlen = mtu + eth_overhead;
> 
> -	/* Default config value is 0 to use PMD specific overhead */
> -	if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> -		port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
> +	rx_offloads = port->dev_conf.rxmode.offloads;
> +	new_mtu = max_rx_pktlen - eth_overhead;
> 
> -	if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
> +	if (new_mtu <= RTE_ETHER_MTU) {
>  		rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		on = false;
>  	} else {
>  		if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
>  			printf("Frame size (%u) is not supported by port %u\n",
> -				port->dev_conf.rxmode.max_rx_pkt_len,
> -				portid);
> +				max_rx_pktlen, portid);
>  			return -1;
>  		}
>  		rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3409,18 +3423,16 @@ update_jumbo_frame_offload(portid_t portid)
>  		}
>  	}
> 
> -	/* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> -	 * if unset do it here
> -	 */
> -	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> -		ret = rte_eth_dev_set_mtu(portid,
> -				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
> -		if (ret)
> -			printf("Failed to set MTU to %u for port %u\n",
> -				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead,
> -				portid);
> +	if (mtu == new_mtu)
> +		return 0;
> +
> +	if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) {
> +		printf("Failed to set MTU to %u for port %u\n", new_mtu, portid);
> +		return -1;
>  	}
> 
> +	port->dev_conf.rxmode.mtu = new_mtu;
> +
>  	return 0;
>  }
> 
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index d61a055bdd1b..42143f85924f 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue,
>  			 __rte_unused void *user_param);
>  void add_tx_dynf_callback(portid_t portid);
>  void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
> 
>  /*
>   * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
>  	.rxmode = {
>  		.mq_mode = ETH_MQ_RX_NONE,
>  		.split_hdr_size = 0,
> -		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  	},
>  	.txmode = {
>  		.mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
>  static struct link_bonding_unittest_params test_params  = {
>  static struct rte_eth_conf default_pmd_conf = {
>  	.rxmode = {
>  		.mq_mode = ETH_MQ_RX_NONE,
> -		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  		.split_hdr_size = 0,
>  	},
>  	.txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params
> test_params  = {
>  static struct rte_eth_conf default_pmd_conf = {
>  	.rxmode = {
>  		.mq_mode = ETH_MQ_RX_NONE,
> -		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  		.split_hdr_size = 0,
>  	},
>  	.txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
>  static struct rte_eth_conf rss_pmd_conf = {
>  	.rxmode = {
>  		.mq_mode = ETH_MQ_RX_RSS,
> -		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  		.split_hdr_size = 0,
>  	},
>  	.txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr
> ports_eth_addr[RTE_MAX_ETHPORTS];
>  static struct rte_eth_conf port_conf = {
>  	.rxmode = {
>  		.mq_mode = ETH_MQ_RX_NONE,
> -		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
>  		.split_hdr_size = 0,
>  	},
>  	.txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 917482dbe2a5..b8d43aa90098 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
>  ~~~~~~~~~~~~~~~~~~~~~
> 
>  The DPAA SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
>  member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
>  up to 10240 bytes can still reach the host interface.
> 
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index 6470f1c05ac8..ce16e1047df2 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -551,7 +551,7 @@ Maximum packet length
>  ~~~~~~~~~~~~~~~~~~~~~
> 
>  The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
>  member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
>  up to 10240 bytes can still reach the host interface.
> 
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 403c2b03a386..c98242f3b72f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
>  Supports Rx jumbo frames.
> 
>  * **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> -  ``dev_conf.rxmode.max_rx_pkt_len``.
> +  ``dev_conf.rxmode.mtu``.
>  * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
>  * **[related] API**: ``rte_eth_dev_set_mtu()``.
> 
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
>  ~~~~~~~~~~~~~~~~~~~~~
> 
>  The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
>  member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
>  up to 15364 bytes can still reach the host interface.
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 83299646ddb1..338734826a7a 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -584,9 +584,9 @@ Driver options
>    and each stride receives one packet. MPRQ can improve throughput for
>    small-packet traffic.
> 
> -  When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> +  When MPRQ is enabled, the MTU can be larger than the size of the
>    user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled.
> PMD will
> -  configure large stride size enough to accommodate max_rx_pkt_len as long as
> +  configure a stride size large enough to accommodate the MTU as long as the
>  device allows. Note that this can waste system memory compared to enabling Rx
>    scatter and multi-segment packet.
> 
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
>  ~~~~~~~~~~~~~~~~~~~~~
> 
>  The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
>  member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
>  up to 32k bytes can still reach the host interface.
> 
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
>  ~~~~~~~~~~~~~~~~~~~~~
> 
>  The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
> -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
>  member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
>  up to 9200 bytes can still reach the host interface.
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd723..86da47d8f9c6 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -56,31 +56,6 @@ Deprecation Notices
>    In 19.11 PMDs will still update the field even when the offload is not
>    enabled.
> 
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be
> -  replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> -  The new ``mtu`` field will be used to configure the initial device MTU via
> -  ``rte_eth_dev_configure()`` API.
> -  Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> -  The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store
> -  the configured ``mtu`` value,
> -  and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> -  be used to store the user configuration request.
> -  Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled,
> -  ``mtu`` field will be always valid.
> -  When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU``
> -  value will be used.
> -  ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully,
> -  either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> -  An application may need to configure device for a specific Rx packet size, like for
> -  cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
> -  can't be bigger than Rx buffer size.
> -  To cover these cases an application needs to know the device packet overhead to be
> -  able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> -  ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> -  the device packet overhead can be calculated as:
> -  ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu``
> -
>  * ethdev: ``rx_descriptor_done`` dev_ops and
> ``rte_eth_rx_descriptor_done``
>    will be removed in 21.11.
>    Existing ``rte_eth_rx_descriptor_status`` and
> ``rte_eth_tx_descriptor_status``
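
The overhead formula from the deprecation notice being removed in this hunk (``max_rx_pktlen`` minus ``max_mtu``) can be sketched as below; the dev_info numbers are hypothetical, not taken from any particular driver:

```c
#include <assert.h>
#include <stdint.h>

/*
 * A device's L2 overhead is the gap between its maximum frame length and
 * its maximum MTU, as the removed deprecation notice described.
 */
static uint32_t nic_l2_overhead(uint32_t max_rx_pktlen, uint32_t max_mtu)
{
	return max_rx_pktlen - max_mtu;
}

/* MTU an application should request so frames fit a given Rx buffer. */
static uint32_t mtu_for_buf(uint32_t buf_size, uint32_t overhead)
{
	return buf_size - overhead;
}
```

This is the same computation several drivers below now perform, e.g. bnxt's ``BNXT_MAX_PKT_LEN - BNXT_MAX_MTU``.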
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst
> b/doc/guides/sample_app_ug/flow_classify.rst
> index 01915971ae83..2cc36a688af3 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -325,13 +325,7 @@ Forwarding application is shown below:
>      }
> 
>  The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. code-block:: c
> -
> -    static const struct rte_eth_conf port_conf_default = {
> -        .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -    };
> +``rte_eth_dev_configure()`` function.
> 
>  For this example the ports are set up with 1 RX and 1 TX queue using the
>  ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/doc/guides/sample_app_ug/ioat.rst
> b/doc/guides/sample_app_ug/ioat.rst
> index 7eb557f91c7a..c5c06261e395 100644
> --- a/doc/guides/sample_app_ug/ioat.rst
> +++ b/doc/guides/sample_app_ug/ioat.rst
> @@ -162,7 +162,6 @@ multiple CBDMA channels per port:
>      static const struct rte_eth_conf port_conf = {
>          .rxmode = {
>              .mq_mode        = ETH_MQ_RX_RSS,
> -            .max_rx_pkt_len = RTE_ETHER_MAX_LEN
>          },
>          .rx_adv_conf = {
>              .rss_conf = {
> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst
> b/doc/guides/sample_app_ug/ip_reassembly.rst
> index e72c8492e972..2090b23fdd1c 100644
> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
> @@ -175,7 +175,7 @@ each RX queue uses its own mempool.
>  .. code-block:: c
> 
>      nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS;
> -    nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE;
> +    nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE;
>      nb_mbuf *= 2; /* ipv4 and ipv6 */
>      nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT;
>      nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF);
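
The updated ``nb_mbuf`` estimate above rounds the MTU-derived frame size up to whole Rx buffers. A small sketch of just that term, with a hypothetical BUF_SIZE standing in for the sample app's macro:

```c
#include <assert.h>
#include <stdint.h>

#define BUF_SIZE          2048 /* hypothetical per-segment buffer size */
#define RTE_ETHER_HDR_LEN 14
#define RTE_ETHER_CRC_LEN 4

/* Buffers needed per frame: round up, since big frames are scattered. */
static uint32_t bufs_per_frame(uint32_t mtu)
{
	return (mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1)
		/ BUF_SIZE;
}
```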
> diff --git a/doc/guides/sample_app_ug/skeleton.rst
> b/doc/guides/sample_app_ug/skeleton.rst
> index 263d8debc81b..a88cb8f14a4b 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -157,13 +157,7 @@ Forwarding application is shown below:
>      }
> 
>  The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. code-block:: c
> -
> -    static const struct rte_eth_conf port_conf_default = {
> -        .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -    };
> +``rte_eth_dev_configure()`` function.
> 
>  For this example the ports are set up with 1 RX and 1 TX queue using the
>  ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
>  		return -EINVAL;
> 
> -	/* update max frame size */
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  	return 0;
>  }
> 
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..2554f5fdf59a 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
>  avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
>  			 struct avp_dev *avp)
>  {
> -	unsigned int max_rx_pkt_len;
> +	unsigned int max_rx_pktlen;
> 
> -	max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
> 
> -	if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> -	    (max_rx_pkt_len > avp->host_mbuf_size)) {
> +	if ((max_rx_pktlen > avp->guest_mbuf_size) ||
> +	    (max_rx_pktlen > avp->host_mbuf_size)) {
>  		/*
>  		 * If the guest MTU is greater than either the host or guest
>  		 * buffers then chained mbufs have to be enabled in the TX
>  		 * direction.  It is assumed that the application will not need
> -		 * to send packets larger than their max_rx_pkt_len (MRU).
> +		 * to send packets larger than their MTU.
>  		 */
>  		return 1;
>  	}
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> 
>  	PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n",
>  		    avp->max_rx_pkt_len,
> -		    eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> +		    eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN,
>  		    avp->host_mbuf_size,
>  		    avp->guest_mbuf_size);
> 
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts, uint16_t nb_pkts)
>  			 * function; send it truncated to avoid the
> performance
>  			 * hit of having to manage returning the already
>  			 * allocated buffer to the free list.  This should not
> -			 * happen since the application should have set the
> -			 * max_rx_pkt_len based on its MTU and it should be
> +			 * happen since the application should not send
> +			 * packets larger than its MTU and it should be
>  			 * policing its own packet sizes.
>  			 */
>  			txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>  	struct axgbe_port *pdata = dev->data->dev_private;
>  	int ret;
>  	struct rte_eth_dev_data *dev_data = dev->data;
> -	uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len;
> +	uint16_t max_pkt_len;
> 
>  	dev->dev_ops = &axgbe_eth_dev_ops;
> 
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> 
>  	rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
>  	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> +	max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>  	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
>  				max_pkt_len > pdata->rx_buf_size)
>  		dev_data->scattered_rx = 1;
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  				dev->data->port_id);
>  		return -EBUSY;
>  	}
> -	if (frame_size > AXGBE_ETH_MAX_LEN) {
> +	if (mtu > RTE_ETHER_MTU) {
>  		dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  		val = 0;
>  	}
>  	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
>  	return 0;
>  }
> 
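
The same pattern repeats across the drivers in this series: the jumbo-frame offload is keyed off the MTU itself (``mtu > RTE_ETHER_MTU``) instead of per-driver frame-length macros such as AXGBE_ETH_MAX_LEN. A trivial sketch of the decision:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RTE_ETHER_MTU 1500 /* standard Ethernet MTU, as in rte_ether.h */

/* Jumbo handling is needed exactly when the MTU exceeds plain Ethernet. */
static bool needs_jumbo(uint16_t mtu)
{
	return mtu > RTE_ETHER_MTU;
}
```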
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
>  bnx2x_dev_configure(struct rte_eth_dev *dev)
>  {
>  	struct bnx2x_softc *sc = dev->data->dev_private;
> -	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> 
>  	int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
> 
>  	PMD_INIT_FUNC_TRACE(sc);
> 
> -	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -		dev->data->mtu = sc->mtu;
> -	}
> +	sc->mtu = dev->data->dev_conf.rxmode.mtu;
> 
>  	if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
>  		PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c
> b/drivers/net/bnxt/bnxt_ethdev.c
> index c9536f79267d..335505a106d5 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1128,13 +1128,8 @@ static int bnxt_dev_configure_op(struct
> rte_eth_dev *eth_dev)
>  		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>  	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
> 
> -	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		eth_dev->data->mtu =
> -			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> -			RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
> -			BNXT_NUM_VLANS;
> -		bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> -	}
> +	bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
>  	return 0;
> 
>  resource_error:
> @@ -1172,6 +1167,7 @@ void bnxt_print_link_info(struct rte_eth_dev
> *eth_dev)
>   */
>  static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
>  {
> +	uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
>  	uint16_t buf_size;
>  	int i;
> 
> @@ -1186,7 +1182,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev
> *eth_dev)
> 
>  		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
>  				      RTE_PKTMBUF_HEADROOM);
> -		if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size)
> +		if (eth_dev->data->mtu + overhead > buf_size)
>  			return 1;
>  	}
>  	return 0;
> @@ -2992,6 +2988,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev,
> __rte_unused uint16_t queue_id,
> 
>  int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
>  {
> +	uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
>  	struct bnxt *bp = eth_dev->data->dev_private;
>  	uint32_t new_pkt_size;
>  	uint32_t rc = 0;
> @@ -3005,8 +3002,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
>  	if (!eth_dev->data->nb_rx_queues)
>  		return rc;
> 
> -	new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> -		       VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> +	new_pkt_size = new_mtu + overhead;
> 
>  	/*
>  	 * Disallow any MTU change that would require scattered receive
> support
> @@ -3033,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
>  	}
> 
>  	/* Is there a change in mtu setting? */
> -	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size)
> +	if (eth_dev->data->mtu == new_mtu)
>  		return rc;
> 
>  	for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3055,9 +3051,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
>  		}
>  	}
> 
> -	if (!rc)
> -		eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size;
> -
>  	PMD_DRV_LOG(INFO, "New MTU is %d\n", new_mtu);
> 
>  	return rc;
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b01ef003e65c..b2a1833e3f91 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
>  		slave_eth_dev->data->dev_conf.rxmode.offloads &=
>  				~DEV_RX_OFFLOAD_VLAN_FILTER;
> 
> -	slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> -			bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	slave_eth_dev->data->dev_conf.rxmode.mtu =
> +			bonded_eth_dev->data->dev_conf.rxmode.mtu;
> 
>  	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
>  			DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c
> b/drivers/net/cnxk/cnxk_ethdev.c
> index 7adab4605819..da6c5e8f242f 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp
> *rxq)
>  	mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
>  	buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
> 
> -	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> +	if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) {
>  		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
>  		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
>  	}
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
>  {
>  	struct rte_eth_dev_data *data = eth_dev->data;
>  	struct cnxk_eth_rxq_sp *rxq;
> -	uint16_t mtu;
>  	int rc;
> 
>  	rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
>  	/* Setup scatter mode if needed by jumbo */
>  	nix_enable_mseg_on_jumbo(rxq);
> 
> -	/* Setup MTU based on max_rx_pkt_len */
> -	mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD +
> -				CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> -	rc = cnxk_nix_mtu_set(eth_dev, mtu);
> +	rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
>  	if (rc)
>  		plt_err("Failed to set default MTU size, rc=%d", rc);
> 
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
>  		goto exit;
>  	}
> 
> -	frame_size += RTE_ETHER_CRC_LEN;
> -
> -	if (frame_size > RTE_ETHER_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
>  		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> -	/* Update max_rx_pkt_len */
> -	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  exit:
>  	return rc;
>  }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
>  		return err;
> 
>  	/* Must accommodate at least RTE_ETHER_MIN_MTU */
> -	if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
> +	if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
>  		return -EINVAL;
> 
>  	/* set to jumbo mode if needed */
> -	if (new_mtu > CXGBE_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		eth_dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> 
>  	err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
>  			    -1, -1, true);
> -	if (!err)
> -		eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu;
> -
>  	return err;
>  }
> 
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
>  			     const struct rte_eth_rxconf *rx_conf
> __rte_unused,
>  			     struct rte_mempool *mp)
>  {
> -	unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
>  	struct port_info *pi = eth_dev->data->dev_private;
>  	struct adapter *adapter = pi->adapter;
>  	struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
>  		rxq->fl.size = temp_nb_desc;
> 
>  	/* Set to jumbo mode if necessary */
> -	if (pkt_len > CXGBE_ETH_MAX_LEN)
> +	if (eth_dev->data->mtu > RTE_ETHER_MTU)
>  		eth_dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c
> b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
>  	unsigned int mtu;
>  	int ret;
> 
> -	mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> -	      (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> +	mtu = pi->eth_dev->data->mtu;
> 
>  	conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
> 
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
>  	u32 wr_mid;
>  	u64 cntrl, *end;
>  	bool v6;
> -	u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> +	u32 max_pkt_len;
> 
>  	/* Reject xmit if queue is stopped */
>  	if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
>  		return 0;
>  	}
> 
> +	max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
>  	if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
>  	    (unlikely(m->pkt_len > max_pkt_len)))
>  		goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 27d670f843d2..56703e3a39e8 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
>  		return -EINVAL;
>  	}
> 
> -	if (frame_size > DPAA_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->data->dev_conf.rxmode.offloads |=
> 
> 	DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
> 
> 	~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  	fman_if_set_maxfrm(dev->process_private, frame_size);
> 
>  	return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
>  	struct fman_if *fif = dev->process_private;
>  	struct __fman_if *__fif;
>  	struct rte_intr_handle *intr_handle;
> +	uint32_t max_rx_pktlen;
>  	int speed, duplex;
>  	int ret;
> 
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
>  		tx_offloads, dev_tx_offloads_nodis);
>  	}
> 
> -	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		uint32_t max_len;
> -
> -		DPAA_PMD_DEBUG("enabling jumbo");
> -
> -		if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> -		    DPAA_MAX_RX_PKT_LEN)
> -			max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -		else {
> -			DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> -				"supported is %d",
> -				dev->data->dev_conf.rxmode.max_rx_pkt_len,
> -				DPAA_MAX_RX_PKT_LEN);
> -			max_len = DPAA_MAX_RX_PKT_LEN;
> -		}
> -
> -		fman_if_set_maxfrm(dev->process_private, max_len);
> -		dev->data->mtu = max_len
> -			- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
> +	max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> +			RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> +	if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> +		DPAA_PMD_INFO("enabling jumbo override conf max len=%d "
> +			"supported is %d",
> +			max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> +		max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
>  	}
> 
> +	fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
>  	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
>  		DPAA_PMD_DEBUG("enabling scatter mode");
>  		fman_if_set_sg(dev->process_private, 1);
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>  	u32 flags = 0;
>  	int ret;
>  	u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> +	uint32_t max_rx_pktlen;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>  		return -EINVAL;
>  	}
> 
> +	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> +		VLAN_TAG_SIZE;
>  	/* Max packet can fit in single buffer */
> -	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> +	if (max_rx_pktlen <= buffsz) {
>  		;
>  	} else if (dev->data->dev_conf.rxmode.offloads &
>  			DEV_RX_OFFLOAD_SCATTER) {
> -		if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> -			buffsz * DPAA_SGT_MAX_ENTRIES) {
> -			DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> +		if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> +			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
>  				"MaxSGlist %d",
> -				dev->data->dev_conf.rxmode.max_rx_pkt_len,
> -				buffsz * DPAA_SGT_MAX_ENTRIES);
> +				max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
>  			rte_errno = EOVERFLOW;
>  			return -rte_errno;
>  		}
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>  		DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
>  		     " larger than a single mbuf (%u) and scattered"
>  		     " mode has not been requested",
> -		     dev->data->dev_conf.rxmode.max_rx_pkt_len,
> -		     buffsz - RTE_PKTMBUF_HEADROOM);
> +		     max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
>  	}
> 
>  	dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> 
>  	dpaa_intf->valid = 1;
>  	DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
> -		fman_if_get_sg_enable(fif),
> -		dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +		fman_if_get_sg_enable(fif), max_rx_pktlen);
>  	/* checking if push mode only, no error check for now */
>  	if (!rxq->is_static &&
>  	    dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index 8b803b8542dc..6213bcbf3a43 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
>  	int tx_l3_csum_offload = false;
>  	int tx_l4_csum_offload = false;
>  	int ret, tc_index;
> +	uint32_t max_rx_pktlen;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev
> *dev)
>  		tx_offloads, dev_tx_offloads_nodis);
>  	}
> 
> -	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
> -			ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> -				priv->token, eth_conf->rxmode.max_rx_pkt_len
> -				- RTE_ETHER_CRC_LEN);
> -			if (ret) {
> -				DPAA2_PMD_ERR(
> -					"Unable to set mtu. check config");
> -				return ret;
> -			}
> -			dev->data->mtu =
> -				dev->data->dev_conf.rxmode.max_rx_pkt_len -
> -				RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> -				VLAN_TAG_SIZE;
> -		} else {
> -			return -1;
> +	max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> +				RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> +	if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> +		ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> +			priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> +		if (ret) {
> +			DPAA2_PMD_ERR("Unable to set mtu. check config");
> +			return ret;
>  		}
> +	} else {
> +		return -1;
>  	}
> 
>  	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
>  		return -EINVAL;
> 
> -	if (frame_size > DPAA2_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->data->dev_conf.rxmode.offloads |=
> 
> 	DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
> 
> 	~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  	/* Set the Max Rx frame length as 'mtu' +
>  	 * Maximum Ethernet header length
>  	 */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	rctl = E1000_READ_REG(hw, E1000_RCTL);
> 
>  	/* switch to jumbo mode if needed */
> -	if (frame_size > E1000_ETH_MAX_LEN) {
> +	if (mtu > RTE_ETHER_MTU) {
>  		dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	}
>  	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
> 
> -	/* update max frame size */
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
>  	return 0;
>  }
> 
> diff --git a/drivers/net/e1000/igb_ethdev.c
> b/drivers/net/e1000/igb_ethdev.c
> index 10ee0f33415a..35b517891d67 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
>  	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
> 
>  	/* Update maximum packet length */
> -	if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> -		E1000_WRITE_REG(hw, E1000_RLPML,
> -				dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu +
> E1000_ETH_OVERHEAD);
>  }
> 
>  static void
> @@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
>  	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
> 
>  	/* Update maximum packet length */
> -	if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> -		E1000_WRITE_REG(hw, E1000_RLPML,
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len +
> -						VLAN_TAG_SIZE);
> +	E1000_WRITE_REG(hw, E1000_RLPML,
> +		dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
>  }
> 
>  static int
> @@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	rctl = E1000_READ_REG(hw, E1000_RCTL);
> 
>  	/* switch to jumbo mode if needed */
> -	if (frame_size > E1000_ETH_MAX_LEN) {
> +	if (mtu > RTE_ETHER_MTU) {
>  		dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		rctl |= E1000_RCTL_LPE;
> @@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>  	}
>  	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
> 
> -	/* update max frame size */
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> -	E1000_WRITE_REG(hw, E1000_RLPML,
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
> 
>  	return 0;
>  }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..de12997b4bdd 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
>  	uint32_t srrctl;
>  	uint16_t buf_size;
>  	uint16_t rctl_bsize;
> +	uint32_t max_len;
>  	uint16_t i;
>  	int ret;
> 
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
>  	/*
>  	 * Configure support of jumbo frames, if any.
>  	 */
> +	max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
>  	if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
>  		rctl |= E1000_RCTL_LPE;
> 
>  		/*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
>  					       E1000_SRRCTL_BSIZEPKT_SHIFT);
> 
>  			/* It adds dual VLAN length for supporting dual VLAN
> */
> -			if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> -						2 * VLAN_TAG_SIZE) > buf_size){
> +			if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
>  				if (!dev->data->scattered_rx)
>  					PMD_INIT_LOG(DEBUG,
>  						     "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
>  	uint32_t srrctl;
>  	uint16_t buf_size;
>  	uint16_t rctl_bsize;
> +	uint32_t max_len;
>  	uint16_t i;
>  	int ret;
> 
>  	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> 
>  	/* setup MTU */
> -	e1000_rlpml_set_vf(hw,
> -		(uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> -		VLAN_TAG_SIZE));
> +	max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> +	e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
> 
>  	/* Configure and enable each RX queue. */
>  	rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
>  					       E1000_SRRCTL_BSIZEPKT_SHIFT);
> 
>  			/* It adds dual VLAN length for supporting dual VLAN
> */
> -			if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> -						2 * VLAN_TAG_SIZE) > buf_size){
> +			if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){
>  				if (!dev->data->scattered_rx)
>  					PMD_INIT_LOG(DEBUG,
>  						     "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index dfe68279fa7b..e9b718786a39 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev,
>  	return rc;
>  }
> 
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> -	uint32_t max_frame_len = adapter->max_mtu;
> -
> -	if (adapter->edev_data->dev_conf.rxmode.offloads &
> -	    DEV_RX_OFFLOAD_JUMBO_FRAME)
> -		max_frame_len =
> -			adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
> -
> -	return max_frame_len;
> -}
> -
>  static int ena_check_valid_conf(struct ena_adapter *adapter)
>  {
> -	uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> +	uint32_t mtu = adapter->edev_data->mtu;
> 
> -	if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) {
> +	if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
>  		PMD_INIT_LOG(ERR, "Unsupported MTU of %d. "
>  				  "max mtu: %d, min mtu: %d",
> -			     max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
> +			     mtu, adapter->max_mtu, ENA_MIN_MTU);
>  		return ENA_COM_UNSUPPORTED;
>  	}
> 
> @@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  	ena_dev = &adapter->ena_dev;
>  	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
> 
> -	if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> +	if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
>  		PMD_DRV_LOG(ERR,
>  			"Invalid MTU setting. new_mtu: %d "
>  			"max mtu: %d min mtu: %d\n",
> -			mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> +			mtu, adapter->max_mtu, ENA_MIN_MTU);
>  		return -EINVAL;
>  	}
> 
> @@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
>  					   ETH_RSS_UDP;
> 
>  	dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> -	dev_info->max_rx_pktlen  = adapter->max_mtu;
> +	dev_info->max_rx_pktlen  = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
> +	dev_info->min_mtu = ENA_MIN_MTU;
> +	dev_info->max_mtu = adapter->max_mtu;
>  	dev_info->max_mac_addrs = 1;
> 
>  	dev_info->max_rx_queues = adapter->max_num_io_queues;
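The ena hunks above drop `ena_get_mtu_conf()` and validate `dev->data->mtu` directly, while `dev_info->max_rx_pktlen` now reports the max MTU plus Ethernet header and CRC. A sketch of both checks, with illustrative constants (not the driver's actual values):

```c
#include <stdint.h>

#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN 4
#define ENA_MIN_MTU   128   /* illustrative lower bound, not the driver's */

/* Mirrors the simplified range check in ena_check_valid_conf(). */
int ena_mtu_valid(uint32_t mtu, uint32_t max_mtu)
{
	return mtu >= ENA_MIN_MTU && mtu <= max_mtu;
}

/* Mirrors the new dev_info->max_rx_pktlen derivation above. */
uint32_t ena_max_rx_pktlen(uint32_t max_mtu)
{
	return max_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;
}
```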
> diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  		return -EINVAL;
>  	}
> 
> -	if (frame_size > ENETC_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->data->dev_conf.rxmode.offloads &=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
>  	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
> 
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  	/*setting the MTU*/
>  	enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) |
>  		      ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
>  	struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
>  	uint64_t rx_offloads = eth_conf->rxmode.offloads;
>  	uint32_t checksum = L3_CKSUM | L4_CKSUM;
> +	uint32_t max_len;
> 
>  	PMD_INIT_FUNC_TRACE();
> 
> -	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		uint32_t max_len;
> -
> -		max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> -		enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> -			      ENETC_SET_MAXFRM(max_len));
> -		enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> -			      ENETC_MAC_MAXFRM_SIZE);
> -		enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> -			      2 * ENETC_MAC_MAXFRM_SIZE);
> -		dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
> -			RTE_ETHER_CRC_LEN;
> -	}
> +	max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
> +	enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> +	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> +	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
> 
>  	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
>  		int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
>  	 * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
>  	 * a hint to the driver to size receive buffers accordingly so that
>  	 * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> -	 * the user decide the buffer size via rxmode.max_rx_pkt_len, basically
> +	 * the user decide the buffer size via rxmode.mtu, basically
>  	 * ignoring vNIC mtu.
>  	 */
>  	device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
>  	struct rq_enet_desc *rqd = rq->ring.descs;
>  	unsigned i;
>  	dma_addr_t dma_addr;
> -	uint32_t max_rx_pkt_len;
> +	uint32_t max_rx_pktlen;
>  	uint16_t rq_buf_len;
> 
>  	if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq)
> 
>  	/*
>  	 * If *not* using scatter and the mbuf size is greater than the
> -	 * requested max packet size (max_rx_pkt_len), then reduce the
> -	 * posted buffer size to max_rx_pkt_len. HW still receives packets
> -	 * larger than max_rx_pkt_len, but they will be truncated, which we
> +	 * requested max packet size (mtu + eth overhead), then reduce the
> +	 * posted buffer size to max packet size. HW still receives packets
> +	 * larger than max packet size, but they will be truncated, which we
>  	 * drop in the rx handler. Not ideal, but better than returning
>  	 * large packets when the user is not expecting them.
>  	 */
> -	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>  	rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM;
> -	if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> -		rq_buf_len = max_rx_pkt_len;
> +	if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> +		rq_buf_len = max_rx_pktlen;
>  	for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
>  		mb = rte_mbuf_raw_alloc(rq->mp);
>  		if (mb == NULL) {
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
>  	unsigned int mbuf_size, mbufs_per_pkt;
>  	unsigned int nb_sop_desc, nb_data_desc;
>  	uint16_t min_sop, max_sop, min_data, max_data;
> -	uint32_t max_rx_pkt_len;
> +	uint32_t max_rx_pktlen;
> 
>  	/*
>  	 * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
> 
>  	mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
>  			       RTE_PKTMBUF_HEADROOM);
> -	/* max_rx_pkt_len includes the ethernet header and CRC. */
> -	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	/* max_rx_pktlen includes the ethernet header and CRC. */
> +	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> 
>  	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
>  	    DEV_RX_OFFLOAD_SCATTER) {
>  		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
>  		/* ceil((max pkt len)/mbuf_size) */
> -		mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
> +		mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
>  	} else {
>  		dev_info(enic, "Scatter rx mode disabled\n");
>  		mbufs_per_pkt = 1;
> -		if (max_rx_pkt_len > mbuf_size) {
> +		if (max_rx_pktlen > mbuf_size) {
>  			dev_warning(enic, "The maximum Rx packet size (%u) is"
>  				    " larger than the mbuf size (%u), and"
>  				    " scatter is disabled. Larger packets will"
>  				    " be truncated.\n",
> -				    max_rx_pkt_len, mbuf_size);
> +				    max_rx_pktlen, mbuf_size);
>  		}
>  	}
> 
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
>  		rq_sop->data_queue_enable = 1;
>  		rq_data->in_use = 1;
>  		/*
> -		 * HW does not directly support rxmode.max_rx_pkt_len. HW always
> +		 * HW does not directly support MTU. HW always
>  		 * receives packet sizes up to the "max" MTU.
>  		 * If not using scatter, we can achieve the effect of dropping
>  		 * larger packets by reducing the size of posted buffers.
>  		 * See enic_alloc_rx_queue_mbufs().
>  		 */
> -		if (max_rx_pkt_len <
> -		    enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> -			dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
> -				    " when scatter rx mode is in use.\n");
> +		if (enic->rte_dev->data->mtu < enic->max_mtu) {
> +			dev_warning(enic,
> +				"mtu is ignored when scatter rx mode is in use.\n");
>  		}
>  	} else {
>  		dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
>  	if (mbufs_per_pkt > 1) {
>  		dev_info(enic, "For max packet size %u and mbuf size %u valid"
>  			 " rx descriptor range is %u to %u\n",
> -			 max_rx_pkt_len, mbuf_size, min_sop + min_data,
> +			 max_rx_pktlen, mbuf_size, min_sop + min_data,
>  			 max_sop + max_data);
>  	}
>  	dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
>  			"MTU (%u) is greater than value configured in NIC (%u)\n",
>  			new_mtu, config_mtu);
> 
> -	/* Update the MTU and maximum packet length */
> -	eth_dev->data->mtu = new_mtu;
> -	eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> -		enic_mtu_to_max_rx_pktlen(new_mtu);
> -
>  	/*
>  	 * If the device has not started (enic_enable), nothing to do.
>  	 * Later, enic_enable() will set up RQs reflecting the new maximum
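In the enic hunks, the scatter path sizes the buffer chain as ceil(max packet length / mbuf size); the rounded-up integer division is the whole trick:

```c
#include <stdint.h>

/* ceil(max_rx_pktlen / mbuf_size) without floating point, as computed
 * in enic_alloc_rq() above. */
unsigned int mbufs_per_pkt(uint32_t max_rx_pktlen, unsigned int mbuf_size)
{
	return (max_rx_pktlen + mbuf_size - 1) / mbuf_size;
}
```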
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
>  				FM10K_SRRCTL_LOOPBACK_SUPPRESS);
> 
>  		/* It adds dual VLAN length for supporting dual VLAN */
> -		if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> +		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
>  				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
>  			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
>  			uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
> index 946465779f2e..c737ef8d06d8 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
>  		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> 
>  	/* mtu size is 256~9600 */
> -	if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
> -	    dev->data->dev_conf.rxmode.max_rx_pkt_len >
> -	    HINIC_MAX_JUMBO_FRAME_SIZE) {
> +	if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> +			HINIC_MIN_FRAME_SIZE ||
> +	    HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> +			HINIC_MAX_JUMBO_FRAME_SIZE) {
>  		PMD_DRV_LOG(ERR,
> -			"Max rx pkt len out of range, get max_rx_pkt_len:%d, "
> +			"Packet length out of range, get packet length:%d, "
>  			"expect between %d and %d",
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len,
> +			HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
>  			HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE);
>  		return -EINVAL;
>  	}
> 
> -	nic_dev->mtu_size =
> -		HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
> 
>  	/* rss template */
>  	err = hinic_config_mq_mode(dev, TRUE);
> @@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
>  static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>  {
>  	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> -	uint32_t frame_size;
>  	int ret = 0;
> 
>  	PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
> @@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
>  		return ret;
>  	}
> 
> -	/* update max frame size */
> -	frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> -	if (frame_size > HINIC_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->data->dev_conf.rxmode.offloads |=
>  			DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
>  			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
>  	nic_dev->mtu_size = mtu;
> 
>  	return ret;
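Several drivers in this series (hinic above; i40e, iavf, and ice below) now derive the jumbo-frame offload flag from the MTU itself rather than from a frame-size constant. The common pattern, sketched with an illustrative flag bit (not DPDK's actual bit position):

```c
#include <stdint.h>

#define RTE_ETHER_MTU          1500
#define RX_OFFLOAD_JUMBO_FRAME (1ULL << 3)  /* illustrative bit, not DPDK's */

/* Set or clear the jumbo offload purely from the requested MTU,
 * as the mtu_set paths above now do. */
uint64_t update_jumbo_offload(uint64_t offloads, uint16_t mtu)
{
	if (mtu > RTE_ETHER_MTU)
		offloads |= RX_OFFLOAD_JUMBO_FRAME;
	else
		offloads &= ~RX_OFFLOAD_JUMBO_FRAME;
	return offloads;
}
```

The design gain is that the flag can no longer disagree with the stored MTU, which was possible when both were updated independently.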
> diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
> index e51512560e15..8bccdeddb2f7 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
>  {
>  	struct hns3_adapter *hns = dev->data->dev_private;
>  	struct hns3_hw *hw = &hns->hw;
> -	uint32_t max_rx_pkt_len;
> -	uint16_t mtu;
> -	int ret;
> -
> -	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> -		return 0;
> +	uint32_t max_rx_pktlen;
> 
> -	/*
> -	 * If jumbo frames are enabled, MTU needs to be refreshed
> -	 * according to the maximum RX packet length.
> -	 */
> -	max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> -	if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> -	    max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> +	max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> +	if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> +	    max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
>  		hns3_err(hw, "maximum Rx packet length must be greater than %u "
>  			 "and no more than %u when jumbo frame enabled.",
>  			 (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> @@ -2400,13 +2391,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
>  		return -EINVAL;
>  	}
> 
> -	mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> -	ret = hns3_dev_mtu_set(dev, mtu);
> -	if (ret)
> -		return ret;
> -	dev->data->mtu = mtu;
> -
> -	return 0;
> +	return hns3_dev_mtu_set(dev, conf->rxmode.mtu);
>  }
> 
>  static int
> @@ -2622,7 +2607,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  	}
> 
>  	rte_spinlock_lock(&hw->lock);
> -	is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false;
> +	is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
>  	frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
> 
>  	/*
> @@ -2643,7 +2628,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
>  			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
>  	rte_spinlock_unlock(&hw->lock);
> 
>  	return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
> index e582503f529b..ca839fa55fa0 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>  	uint16_t nb_rx_q = dev->data->nb_rx_queues;
>  	uint16_t nb_tx_q = dev->data->nb_tx_queues;
>  	struct rte_eth_rss_conf rss_conf;
> -	uint32_t max_rx_pkt_len;
> -	uint16_t mtu;
> +	uint32_t max_rx_pktlen;
>  	bool gro_en;
>  	int ret;
> 
> @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
>  			goto cfg_err;
>  	}
> 
> -	/*
> -	 * If jumbo frames are enabled, MTU needs to be refreshed
> -	 * according to the maximum RX packet length.
> -	 */
> -	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> -		if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> -		    max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> -			hns3_err(hw, "maximum Rx packet length must be greater "
> -				 "than %u and less than %u when jumbo frame enabled.",
> -				 (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> -				 (uint16_t)HNS3_MAX_FRAME_LEN);
> -			ret = -EINVAL;
> -			goto cfg_err;
> -		}
> -
> -		mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> -		ret = hns3vf_dev_mtu_set(dev, mtu);
> -		if (ret)
> -			goto cfg_err;
> -		dev->data->mtu = mtu;
> +	max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD;
> +	if (max_rx_pktlen > HNS3_MAX_FRAME_LEN ||
> +	    max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) {
> +		hns3_err(hw, "maximum Rx packet length must be greater "
> +			 "than %u and less than %u when jumbo frame enabled.",
> +			 (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> +			 (uint16_t)HNS3_MAX_FRAME_LEN);
> +		ret = -EINVAL;
> +		goto cfg_err;
>  	}
> 
> +	ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> +	if (ret)
> +		goto cfg_err;
> +
>  	ret = hns3vf_dev_configure_vlan(dev);
>  	if (ret)
>  		goto cfg_err;
> @@ -935,7 +926,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
>  			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
>  	rte_spinlock_unlock(&hw->lock);
> 
>  	return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index cb9eccf9faae..6b81688a7225 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size,
>  				uint16_t nb_desc)
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> -	struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
>  	eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> +	uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
>  	uint16_t min_vec_bds;
> 
>  	/*
>  	 * HNS3 hardware network engine set scattered as default. If the driver
>  	 * is not work in scattered mode and the pkts greater than buf_size
> -	 * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
> +	 * but smaller than frame size will be distributed to multiple BDs.
>  	 * Driver cannot handle this situation.
>  	 */
> -	if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
> -		hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
> +	if (!hw->data->scattered_rx && frame_size > buf_size) {
> +		hns3_err(hw, "frame size is not allowed to be set greater "
>  			     "than rx_buf_len if scattered is off.");
>  		return -EINVAL;
>  	}
> @@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
>  	}
> 
>  	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> -	    dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> +	    dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
>  		dev->data->scattered_rx = true;
>  }
> 
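The hns3 hunks validate the MTU-derived frame length against the device's frame-length window before applying it; the bounds check reduces to (bounds here are illustrative, not hns3's actual limits):

```c
#include <errno.h>
#include <stdint.h>

#define DEFAULT_FRAME_LEN 1518  /* illustrative: standard Ethernet frame */
#define MAX_FRAME_LEN     9728  /* illustrative jumbo ceiling */

/* -EINVAL if the requested frame length falls outside the window,
 * mirroring the checks in hns3_refresh_mtu()/hns3vf_dev_configure(). */
int check_max_rx_pktlen(uint32_t pktlen)
{
	if (pktlen > MAX_FRAME_LEN || pktlen <= DEFAULT_FRAME_LEN)
		return -EINVAL;
	return 0;
}
```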
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 7b230e2ed17a..1161f301b9ae 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  		return -EBUSY;
>  	}
> 
> -	if (frame_size > I40E_ETH_MAX_LEN)
> -		dev_data->dev_conf.rxmode.offloads |=
> -			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (mtu > RTE_ETHER_MTU)
> +		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> -		dev_data->dev_conf.rxmode.offloads &=
> -			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> -	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
>  	return ret;
>  }
> diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
> index 0cfe13b7b227..086a167ca672 100644
> --- a/drivers/net/i40e/i40e_ethdev_vf.c
> +++ b/drivers/net/i40e/i40e_ethdev_vf.c
> @@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
>  	rxq->rx_hdr_len = 0;
>  	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << I40E_RXQ_CTX_DBUFF_SHIFT));
>  	len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS;
> -	rxq->max_pkt_len = RTE_MIN(len,
> -		dev_data->dev_conf.rxmode.max_rx_pkt_len);
> +	rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD);
> 
>  	/**
>  	 * Check if the jumbo frame and maximum packet length are set correctly
> @@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
> 
>  	hw->adapter_stopped = 0;
> 
> -	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD;
>  	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
>  					dev->data->nb_tx_queues);
> 
> @@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  		return -EBUSY;
>  	}
> 
> -	if (frame_size > I40E_ETH_MAX_LEN)
> -		dev_data->dev_conf.rxmode.offloads |=
> -			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (mtu > RTE_ETHER_MTU)
> +		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> -		dev_data->dev_conf.rxmode.offloads &=
> -			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
>  	return ret;
>  }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index 8d65f287f455..aa43796ef1af 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2904,8 +2904,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
>  	}
> 
>  	rxq->max_pkt_len =
> -		RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> -			rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> +		RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> +				data->mtu + I40E_ETH_OVERHEAD);
>  	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>  		if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
>  			rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
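The i40e/iavf/ice queue-setup hunks all clamp the programmed max packet length to whichever is smaller: the hardware buffer-chaining limit or the frame length implied by the MTU. A sketch of that clamp (the overhead constant is illustrative):

```c
#include <stdint.h>

#define ETH_OVERHEAD 26  /* illustrative: hdr + CRC + 2 VLAN tags */

/* min(chain capacity, MTU-derived frame), as in i40e_rx_queue_config()
 * and the iavf/ice init paths above. */
uint32_t rx_max_pkt_len(uint32_t chain_len, uint32_t buf_len, uint16_t mtu)
{
	uint32_t hw_limit = chain_len * buf_len;
	uint32_t wanted = (uint32_t)mtu + ETH_OVERHEAD;
	return hw_limit < wanted ? hw_limit : wanted;
}
```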
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 41382c6d669b..13c2329d85a7 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
>  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>  	struct rte_eth_dev_data *dev_data = dev->data;
>  	uint16_t buf_size, max_pkt_len, len;
> +	uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
> 
>  	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
> 
>  	/* Calculate the maximum packet length allowed */
>  	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
> -	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	max_pkt_len = RTE_MIN(len, frame_size);
> 
>  	/* Check if the jumbo frame and maximum packet length are set
>  	 * correctly.
> @@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
> 
>  	adapter->stopped = 0;
> 
> -	vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
>  	vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
>  				      dev->data->nb_tx_queues);
>  	num_queue_pairs = vf->num_queue_pairs;
> @@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  		return -EBUSY;
>  	}
> 
> -	if (frame_size > IAVF_ETH_MAX_LEN)
> +	if (mtu > RTE_ETHER_MTU)
>  		dev->data->dev_conf.rxmode.offloads |=
>  				DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
>  		dev->data->dev_conf.rxmode.offloads &=
>  				~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
>  	return ret;
>  }
> 
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> index 69fe6e63d1d3..34b6c9b2a7ed 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
>  	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
>  	rxq->rx_hdr_len = 0;
>  	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> -	max_pkt_len = RTE_MIN((uint32_t)
> -			      ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> -			      dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> +			      dev->data->mtu + ICE_ETH_OVERHEAD);
> 
>  	/* Check if the jumbo frame and maximum packet length are set
>  	 * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 63f735d1ff72..bdda6fee3f8e 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3426,8 +3426,8 @@ ice_dev_start(struct rte_eth_dev *dev)
>  	pf->adapter_stopped = false;
> 
>  	/* Set the max frame size to default value*/
> -	max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> -		pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> +	max_frame_size = pf->dev_data->mtu ?
> +		pf->dev_data->mtu + ICE_ETH_OVERHEAD :
>  		ICE_FRAME_SIZE_MAX;
> 
>  	/* Set the max frame size to HW*/
> @@ -3806,14 +3806,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
>  		return -EBUSY;
>  	}
> 
> -	if (frame_size > ICE_ETH_MAX_LEN)
> -		dev_data->dev_conf.rxmode.offloads |=
> -			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	if (mtu > RTE_ETHER_MTU)
> +		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> -		dev_data->dev_conf.rxmode.offloads &=
> -			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> -	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
>  	return 0;
>  }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 3f6e7359844b..a3de4172e2bc 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -262,15 +262,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
>  	struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
>  	uint32_t rxdid = ICE_RXDID_COMMS_OVS;
>  	uint32_t regval;
> +	uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
> 
>  	/* Set buffer size as the head split is disabled. */
>  	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
>  			      RTE_PKTMBUF_HEADROOM);
>  	rxq->rx_hdr_len = 0;
>  	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
> -	rxq->max_pkt_len = RTE_MIN((uint32_t)
> -				   ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> -				   dev_data->dev_conf.rxmode.max_rx_pkt_len);
> +	rxq->max_pkt_len =
> +		RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> +			frame_size);
> 
>  	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
>  		if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -361,11 +362,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
>  		return -EINVAL;
>  	}
> 
> -	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> -			      RTE_PKTMBUF_HEADROOM);
> -
>  	/* Check if scattered RX needs to be used. */
> -	if (rxq->max_pkt_len > buf_size)
> +	if (frame_size > buf_size)
>  		dev_data->scattered_rx = 1;
> 
>  	rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
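ice_program_hw_rx_queue() above now flags scattered Rx whenever the MTU-derived frame exceeds one mbuf's data room; the decision reduces to (overhead value illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define ICE_ETH_OVERHEAD 26  /* illustrative: hdr + CRC + 2 VLAN tags */

/* True when a single Rx buffer cannot hold a full frame, as checked
 * against buf_size in the hunk above. */
bool rx_needs_scatter(uint16_t mtu, uint16_t buf_size)
{
	return (uint32_t)mtu + ICE_ETH_OVERHEAD > buf_size;
}
```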
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
> 
>  #define IGC_INTEL_VENDOR_ID		0x8086
> 
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD		(RTE_ETHER_HDR_LEN + \
> -					RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
> -
>  #define IGC_FC_PAUSE_TIME		0x0680
>  #define IGC_LINK_UPDATE_CHECK_TIMEOUT	90  /* 9s */
>  #define IGC_LINK_UPDATE_CHECK_INTERVAL	100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> 
>  	/* switch to jumbo mode if needed */
>  	if (mtu > RTE_ETHER_MTU) {
> -		dev->data->dev_conf.rxmode.offloads |=
> -			DEV_RX_OFFLOAD_JUMBO_FRAME;
> +		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		rctl |= IGC_RCTL_LPE;
>  	} else {
> -		dev->data->dev_conf.rxmode.offloads &=
> -			~DEV_RX_OFFLOAD_JUMBO_FRAME;
> +		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>  		rctl &= ~IGC_RCTL_LPE;
>  	}
>  	IGC_WRITE_REG(hw, IGC_RCTL, rctl);
> 
> -	/* update max frame size */
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> -	IGC_WRITE_REG(hw, IGC_RLPML,
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
> 
>  	return 0;
>  }
> @@ -2486,6 +2473,7 @@ static int
>  igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
>  {
>  	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> +	uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
>  	uint32_t ctrl_ext;
> 
>  	ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
>  	if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
>  		return 0;
> 
> -	if ((dev->data->dev_conf.rxmode.offloads &
> -			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> -		goto write_ext_vlan;
> -
>  	/* Update maximum packet length */
> -	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> -		RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> +	if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
>  		PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len,
> -			VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> +			frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
>  		return -EINVAL;
>  	}
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> -	IGC_WRITE_REG(hw, IGC_RLPML,
> -		dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
> 
> -write_ext_vlan:
>  	IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN);
>  	return 0;
>  }
> @@ -2519,6 +2498,7 @@ static int
>  igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
>  {
>  	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> +	uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
>  	uint32_t ctrl_ext;
> 
>  	ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
>  	if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
>  		return 0;
> 
> -	if ((dev->data->dev_conf.rxmode.offloads &
> -			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> -		goto write_ext_vlan;
> -
>  	/* Update maximum packet length */
> -	if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> -		MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> +	if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
>  		PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len +
> -			VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> +			frame_size, MAX_RX_JUMBO_FRAME_SIZE);
>  		return -EINVAL;
>  	}
> -	dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> -	IGC_WRITE_REG(hw, IGC_RLPML,
> -		dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
> 
> -write_ext_vlan:
>  	IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN);
>  	return 0;
>  }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
>  #define IGC_HKEY_REG_SIZE		IGC_DEFAULT_REG_SIZE
>  #define IGC_HKEY_SIZE			(IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX)
> 
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD		(RTE_ETHER_HDR_LEN + \
> +					RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
> +
>  /*
>   * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be
>   * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
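The relocated IGC_ETH_OVERHEAD above now counts two VLAN tags (QinQ) on top of header and CRC, unlike the single-tag definition removed from igc_ethdev.c. The arithmetic, reproduced with the well-known operand values:

```c
/* Stand-alone reproduction of the macro above; RTE_ETHER_HDR_LEN and
 * RTE_ETHER_CRC_LEN carry their standard Ethernet values. */
#define RTE_ETHER_HDR_LEN 14
#define RTE_ETHER_CRC_LEN 4
#define VLAN_TAG_SIZE     4
#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + \
			  VLAN_TAG_SIZE * 2)
```

So a 1500-byte MTU maps to a 1526-byte maximum frame, which is what the driver programs into IGC_RLPML in the igc_txrx.c hunk below once dual VLAN is accounted for.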
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..d80808a002f5 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
>  	struct igc_rx_queue *rxq;
>  	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
>  	uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> -	uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	uint32_t max_rx_pktlen;
>  	uint32_t rctl;
>  	uint32_t rxcsum;
>  	uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
>  	IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
> 
>  	/* Configure support of jumbo frames, if any. */
> -	if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> +	if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
>  		rctl |= IGC_RCTL_LPE;
> -
> -		/*
> -		 * Set maximum packet length by default, and might be updated
> -		 * together with enabling/disabling dual VLAN.
> -		 */
> -		IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> -	} else {
> +	else
>  		rctl &= ~IGC_RCTL_LPE;
> -	}
> +
> +	max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> +	/*
> +	 * Set maximum packet length by default, and might be updated
> +	 * together with enabling/disabling dual VLAN.
> +	 */
> +	IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
> 
>  	/* Configure and enable each RX queue. */
>  	rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
>  					IGC_SRRCTL_BSIZEPKT_SHIFT);
> 
>  			/* It adds dual VLAN length for supporting dual VLAN */
> -			if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> +			if (max_rx_pktlen > buf_size)
>  				dev->data->scattered_rx = 1;
>  		} else {
>  			/*
> diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
>  ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
>  {
>  	struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> -	uint32_t max_frame_size;
>  	int err;
> 
>  	IONIC_PRINT_CALL();
> 
>  	/*
>  	 * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> -	 * is done by the the API.
> +	 * is done by the API.
>  	 */
> 
> -	/*
> -	 * Max frame size is MTU + Ethernet header + VLAN + QinQ
> -	 * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> -	 */
> -	max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> -	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size)
> -		return -EINVAL;
> -
>  	err = ionic_lif_change_mtu(lif, mtu);
>  	if (err)
>  		return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
>  	struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
>  	struct rte_mbuf *rxm, *rxm_seg;
>  	uint32_t max_frame_size =
> -		rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +		rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
>  	uint64_t pkt_flags = 0;
>  	uint32_t pkt_type;
>  	struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
>  int __rte_cold
>  ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
>  {
> -	uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
>  	uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
>  	struct ionic_rx_qcq *rxq;
>  	int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  {
>  	struct ionic_rx_qcq *rxq = rx_queue;
>  	uint32_t frame_size =
> -		rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +		rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
>  	struct ionic_rx_service service_cb_arg;
> 
>  	service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
>  		return -EBUSY;
>  	}
> 
> -	if (frame_size > IPN3KE_ETH_MAX_LEN)
> -		dev_data->dev_conf.rxmode.offloads |=
> -			(uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> +	if (mtu > RTE_ETHER_MTU)
> +		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
>  	else
> -		dev_data->dev_conf.rxmode.offloads &=
> -			(uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> -	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> +		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> 
>  	if (rpst->i40e_pf_eth) {
>  		ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,

Reviewed-by: Rosen Xu <rosen.xu@intel.com>




Thread overview: 112+ messages
2021-07-09 17:29 Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-13 13:48   ` Andrew Rybchenko
2021-07-21 12:26     ` Ferruh Yigit
2021-07-18  7:49   ` Xu, Rosen
2021-07-19 14:38   ` Ajit Khaparde
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-13 13:56   ` Andrew Rybchenko
2021-07-18  7:52   ` Xu, Rosen
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-13 14:07   ` Andrew Rybchenko
2021-07-21 12:26     ` Ferruh Yigit
2021-07-21 12:39     ` Ferruh Yigit
2021-07-18  7:53   ` Xu, Rosen
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
2021-07-21 16:46   ` Ferruh Yigit
2021-07-22  1:31     ` Ajit Khaparde
2021-07-22 10:27       ` Ferruh Yigit
2021-07-22 10:38         ` Andrew Rybchenko
2021-07-18  7:45 ` Xu, Rosen [this message]
2021-07-19  3:35 ` Huisong Li
2021-07-21 15:29   ` Ferruh Yigit
2021-07-22  7:21     ` Huisong Li
2021-07-22 10:12       ` Ferruh Yigit
2021-07-22 10:15         ` Andrew Rybchenko
2021-07-22 14:43           ` Stephen Hemminger
2021-09-17  1:08             ` Min Hu (Connor)
2021-09-17  8:04               ` Ferruh Yigit
2021-09-17  8:16                 ` Min Hu (Connor)
2021-09-17  8:17                 ` Min Hu (Connor)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
2021-07-22 17:21   ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-22 17:21   ` [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-22 17:21   ` [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-22 17:21   ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-07-23  3:29     ` Huisong Li
2021-07-22 17:21   ` [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 14:36   ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-10-01 14:36     ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-04  5:08       ` Somnath Kotur
2021-10-01 14:36     ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-04  5:09       ` Somnath Kotur
2021-10-01 14:36     ` [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
     [not found]       ` <CAOBf=muYkU2dwgi3iC8Q7pdSNTJsMUwWYdXj14KeN_=_mUGa0w@mail.gmail.com>
2021-10-04  7:55         ` Somnath Kotur
2021-10-05 16:48           ` Ferruh Yigit
2021-10-01 14:36     ` [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-01 14:36     ` [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 15:07     ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
2021-10-05 16:46       ` Ferruh Yigit
2021-10-05 17:16     ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
2021-10-05 17:16       ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08  8:39         ` Xu, Rosen
2021-10-05 17:16       ` [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-05 17:16       ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08  8:38         ` Xu, Rosen
2021-10-05 17:16       ` [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-05 17:16       ` [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-05 22:07       ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
2021-10-06  6:08         ` Somnath Kotur
2021-10-08  8:36       ` Xu, Rosen
2021-10-10  6:30       ` Matan Azrad
2021-10-11 21:59         ` Ferruh Yigit
2021-10-12  7:03           ` Matan Azrad
2021-10-12 11:03             ` Ferruh Yigit
2021-10-07 16:56     ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
2021-10-07 16:56       ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 17:20         ` Ananyev, Konstantin
2021-10-09 10:58         ` lihuisong (C)
2021-10-07 16:56       ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-08 17:19         ` Ananyev, Konstantin
2021-10-07 16:56       ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 17:11         ` Ananyev, Konstantin
2021-10-09 11:09           ` lihuisong (C)
2021-10-10  5:46         ` Matan Azrad
2021-10-07 16:56       ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-08 16:51         ` Ananyev, Konstantin
2021-10-11 19:50           ` Ferruh Yigit
2021-10-09 11:43         ` lihuisong (C)
2021-10-11 20:15           ` Ferruh Yigit
2021-10-12  4:02             ` lihuisong (C)
2021-10-07 16:56       ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-08 16:53         ` Ananyev, Konstantin
2021-10-08 15:57       ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
2021-10-11 19:47         ` Ferruh Yigit
2021-10-09 10:56       ` lihuisong (C)
2021-10-11 23:53     ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-11 23:53       ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-11 23:53       ` [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-11 23:53       ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-12 17:20         ` Hyong Youb Kim (hyonkim)
2021-10-13  7:16         ` Michał Krawczyk
2021-10-11 23:53       ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-12  5:58         ` Andrew Rybchenko
2021-10-11 23:53       ` [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-12  6:02       ` [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length Andrew Rybchenko
2021-10-12  9:42       ` Ananyev, Konstantin
2021-10-13  7:08       ` Xu, Rosen
2021-10-15  1:31       ` Hyong Youb Kim (hyonkim)
2021-10-16  0:24       ` Ferruh Yigit
2021-10-18  8:54         ` Ferruh Yigit
2021-10-18 13:48     ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
2021-10-18 13:48       ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-18 13:48       ` [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-18 13:48       ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-21  0:43         ` Thomas Monjalon
2021-10-22 11:25           ` Ferruh Yigit
2021-10-22 11:29             ` Andrew Rybchenko
2021-10-18 13:48       ` [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-18 13:48       ` [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-18 17:31       ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-11-05 14:19         ` Xueming(Steven) Li
2021-11-05 14:39           ` Ferruh Yigit
