From: "Xu, Rosen" <rosen.xu@intel.com>
To: "Yigit, Ferruh" <ferruh.yigit@intel.com>,
Jerin Jacob <jerinj@marvell.com>,
"Li, Xiaoyun" <xiaoyun.li@intel.com>,
Chas Williams <chas3@att.com>,
"Min Hu (Connor)" <humin29@huawei.com>,
Hemant Agrawal <hemant.agrawal@nxp.com>,
Sachin Saxena <sachin.saxena@oss.nxp.com>,
"Zhang, Qi Z" <qi.z.zhang@intel.com>,
"Wang, Xiao W" <xiao.w.wang@intel.com>,
"Matan Azrad" <matan@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Harman Kalra <hkalra@marvell.com>,
Maciej Czekaj <mczekaj@marvell.com>,
"Ray Kinsella" <mdr@ashroe.eu>,
"Iremonger, Bernard" <bernard.iremonger@intel.com>,
"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
Kiran Kumar K <kirankumark@marvell.com>,
Nithin Dabilpuram <ndabilpuram@marvell.com>,
"Hunt, David" <david.hunt@intel.com>,
"Mcnamara, John" <john.mcnamara@intel.com>,
"Richardson, Bruce" <bruce.richardson@intel.com>,
Igor Russkikh <irusskikh@marvell.com>,
Steven Webster <steven.webster@windriver.com>,
"Peters, Matt" <matt.peters@windriver.com>,
Somalapuram Amaranath <asomalap@amd.com>,
Rasesh Mody <rmody@marvell.com>,
Shahed Shaikh <shshaikh@marvell.com>,
Ajit Khaparde <ajit.khaparde@broadcom.com>,
"Somnath Kotur" <somnath.kotur@broadcom.com>,
Sunil Kumar Kori <skori@marvell.com>,
Satha Rao <skoteshwar@marvell.com>,
Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
"Wang, Haiyue" <haiyue.wang@intel.com>,
Marcin Wojtas <mw@semihalf.com>,
Michal Krawczyk <mk@semihalf.com>,
"Shai Brandes" <shaibran@amazon.com>,
Evgeny Schemeilin <evgenys@amazon.com>,
"Igor Chauskin" <igorch@amazon.com>,
Gagandeep Singh <g.singh@nxp.com>,
"Daley, John" <johndale@cisco.com>,
Hyong Youb Kim <hyonkim@cisco.com>,
Ziyang Xuan <xuanziyang2@huawei.com>,
Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
Guoyang Zhou <zhouguoyang@huawei.com>,
Yisen Zhuang <yisen.zhuang@huawei.com>,
Lijun Ou <oulijun@huawei.com>,
"Xing, Beilei" <beilei.xing@intel.com>,
"Wu, Jingjing" <jingjing.wu@intel.com>,
"Yang, Qiming" <qiming.yang@intel.com>,
Andrew Boyer <aboyer@pensando.io>,
"Shijith Thotton" <sthotton@marvell.com>,
Srisivasubramanian Srinivasan <srinivasan@marvell.com>,
Zyta Szpak <zr@semihalf.com>, Liron Himi <lironh@marvell.com>,
Heinrich Kuhn <heinrich.kuhn@corigine.com>,
"Devendra Singh Rawat" <dsinghrawat@marvell.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
"Wiles, Keith" <keith.wiles@intel.com>,
Jiawen Wu <jiawenwu@trustnetic.com>,
Jian Wang <jianwang@trustnetic.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
"Xia, Chenbo" <chenbo.xia@intel.com>,
"Chautru, Nicolas" <nicolas.chautru@intel.com>,
"Van Haaren, Harry" <harry.van.haaren@intel.com>,
"Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>,
"Nicolau, Radu" <radu.nicolau@intel.com>,
Akhil Goyal <gakhil@marvell.com>,
"Kantecki, Tomasz" <tomasz.kantecki@intel.com>,
"Doherty, Declan" <declan.doherty@intel.com>,
Pavan Nikhilesh <pbhagavatula@marvell.com>,
"Rybalchenko, Kirill" <kirill.rybalchenko@intel.com>,
"Singh, Jasvinder" <jasvinder.singh@intel.com>,
Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length
Date: Fri, 8 Oct 2021 08:36:13 +0000
Message-ID: <BYAPR11MB29014AA19F2477ADB65E5C7B89B29@BYAPR11MB2901.namprd11.prod.outlook.com>
In-Reply-To: <20211005171653.3700067-1-ferruh.yigit@intel.com>
Hi,
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday, October 06, 2021 1:17
> To: Jerin Jacob <jerinj@marvell.com>; Li, Xiaoyun <xiaoyun.li@intel.com>;
> Chas Williams <chas3@att.com>; Min Hu (Connor) <humin29@huawei.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; Sachin Saxena
> <sachin.saxena@oss.nxp.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wang,
> Xiao W <xiao.w.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> Viacheslav Ovsiienko <viacheslavo@nvidia.com>; Harman Kalra
> <hkalra@marvell.com>; Maciej Czekaj <mczekaj@marvell.com>; Ray Kinsella
> <mdr@ashroe.eu>; Iremonger, Bernard <bernard.iremonger@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Kiran Kumar K
> <kirankumark@marvell.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Hunt, David <david.hunt@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Igor Russkikh <irusskikh@marvell.com>;
> Steven Webster <steven.webster@windriver.com>; Peters, Matt
> <matt.peters@windriver.com>; Somalapuram Amaranath
> <asomalap@amd.com>; Rasesh Mody <rmody@marvell.com>; Shahed
> Shaikh <shshaikh@marvell.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; Sunil Kumar Kori <skori@marvell.com>;
> Satha Rao <skoteshwar@marvell.com>; Rahul Lakkireddy
> <rahul.lakkireddy@chelsio.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> Marcin Wojtas <mw@semihalf.com>; Michal Krawczyk <mk@semihalf.com>;
> Shai Brandes <shaibran@amazon.com>; Evgeny Schemeilin
> <evgenys@amazon.com>; Igor Chauskin <igorch@amazon.com>; Gagandeep
> Singh <g.singh@nxp.com>; Daley, John <johndale@cisco.com>; Hyong Youb
> Kim <hyonkim@cisco.com>; Ziyang Xuan <xuanziyang2@huawei.com>;
> Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>; Guoyang Zhou
> <zhouguoyang@huawei.com>; Yisen Zhuang <yisen.zhuang@huawei.com>;
> Lijun Ou <oulijun@huawei.com>; Xing, Beilei <beilei.xing@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Yang, Qiming <qiming.yang@intel.com>;
> Andrew Boyer <aboyer@pensando.io>; Xu, Rosen <rosen.xu@intel.com>;
> Shijith Thotton <sthotton@marvell.com>; Srisivasubramanian Srinivasan
> <srinivasan@marvell.com>; Zyta Szpak <zr@semihalf.com>; Liron Himi
> <lironh@marvell.com>; Heinrich Kuhn <heinrich.kuhn@corigine.com>;
> Devendra Singh Rawat <dsinghrawat@marvell.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; Wiles, Keith <keith.wiles@intel.com>;
> Jiawen Wu <jiawenwu@trustnetic.com>; Jian Wang
> <jianwang@trustnetic.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; Xia, Chenbo <chenbo.xia@intel.com>;
> Chautru, Nicolas <nicolas.chautru@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; Nicolau, Radu <radu.nicolau@intel.com>;
> Akhil Goyal <gakhil@marvell.com>; Kantecki, Tomasz
> <tomasz.kantecki@intel.com>; Doherty, Declan <declan.doherty@intel.com>;
> Pavan Nikhilesh <pbhagavatula@marvell.com>; Rybalchenko, Kirill
> <kirill.rybalchenko@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Thomas Monjalon <thomas@monjalon.net>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org
> Subject: [PATCH v4 1/6] ethdev: fix max Rx packet length
>
> There is confusion about setting the max Rx packet length; this patch
> aims to clarify it.
>
> The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
> 'uint32_t max_rx_pkt_len' field of the config struct 'struct
> rte_eth_conf'.
>
> Also the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
> result is stored in '(struct rte_eth_dev)->data->mtu'.
>
> These two APIs are related but work in a disconnected way; they store
> the set values in different variables, which makes it hard to figure out
> which one to use, and having two different methods for related
> functionality is confusing for users.
>
> Other issues causing confusion are:
> * The maximum transmission unit (MTU) is the payload of the Ethernet
>   frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame.
>   The difference is the Ethernet frame overhead, which may differ from
>   device to device based on what the device supports, like VLAN and QinQ.
> * 'max_rx_pkt_len' is only valid when the application requests jumbo
>   frames, which adds further confusion, and some APIs and PMDs already
>   disregard this documented behavior.
> * When jumbo frames are enabled, 'max_rx_pkt_len' is a mandatory field,
>   which adds configuration complexity for the application.
>
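As an aside, here is a minimal sketch of the per-device overhead calculation
mentioned in the first bullet; it mirrors the get_eth_overhead() helper this
patch adds to testpmd, everything else here is illustrative only:

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_ether.h>

/* Illustrative: derive the per-device L2 overhead from dev_info, falling
 * back to plain Ethernet header + CRC when the device does not report a
 * usable max_mtu.
 */
static uint32_t
example_eth_overhead(const struct rte_eth_dev_info *dev_info)
{
	if (dev_info->max_mtu != UINT16_MAX &&
	    dev_info->max_rx_pktlen > dev_info->max_mtu)
		return dev_info->max_rx_pktlen - dev_info->max_mtu;
	return RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
}
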
> As a solution, both APIs take the MTU as a parameter, and both save the
> result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
> 'max_rx_pkt_len' is replaced by 'mtu', and it is always valid,
> independent of jumbo frames.
>
> For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
> user request; it should be used only within the configure function and
> the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
> that point both the application and the PMD use the MTU from this
> variable.
>
> When the application doesn't provide an MTU during
> 'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.
>
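To make the new flow concrete, a minimal application-side sketch under the
semantics described above (the 9000-byte value and the helper name are only
illustrative):

#include <rte_ethdev.h>

/* Illustrative: request an initial MTU at configure time, then change it
 * later through rte_eth_dev_set_mtu(); both results end up in
 * (struct rte_eth_dev)->data->mtu. Error handling trimmed.
 */
static int
example_port_init(uint16_t port_id)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			.mtu = 9000,	/* leaving this 0 means RTE_ETHER_MTU */
		},
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret != 0)
		return ret;

	return rte_eth_dev_set_mtu(port_id, RTE_ETHER_MTU);
}
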
> Additional clarification is done on scattered Rx configuration, in
> relation to MTU and Rx buffer size.
> MTU is used to configure the device for the physical Rx/Tx size
> limitation, while the Rx buffer is where Rx packets are stored; many PMDs
> use the mbuf data buffer size as the Rx buffer size.
> PMDs compare the MTU against the Rx buffer size to decide whether to
> enable scattered Rx. If scattered Rx is not supported by the device, an
> MTU bigger than the Rx buffer size should fail.
>
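And a rough sketch of the PMD-side decision described in the last paragraph;
the function and the fixed overhead are assumptions for illustration, not
taken from any specific driver:

#include <stdbool.h>
#include <stdint.h>
#include <errno.h>
#include <rte_ether.h>

/* Illustrative: decide whether scattered Rx is needed by comparing the
 * frame size implied by the MTU against the Rx buffer size.
 */
static int
example_needs_scattered_rx(uint16_t mtu, uint16_t rx_buf_size,
			   bool scatter_supported, bool *enable_scatter)
{
	uint32_t frame_size = (uint32_t)mtu + RTE_ETHER_HDR_LEN +
			      RTE_ETHER_CRC_LEN;

	*enable_scatter = false;
	if (frame_size <= rx_buf_size)
		return 0;		/* one mbuf per packet is enough */
	if (!scatter_supported)
		return -EINVAL;		/* MTU bigger than Rx buffer must fail */
	*enable_scatter = true;		/* chain mbufs across segments */
	return 0;
}
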
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> Cc: Min Hu (Connor) <humin29@huawei.com>
>
> v2:
> * Converted to explicit checks for zero/non-zero
> * fixed hns3 checks
> * fixed some sample app rxmode.mtu value
> * fixed some sample app max-pkt-len argument and updated doc for it
>
> v3:
> * rebased
>
> v4:
> * fix typos in commit logs
> ---
> app/test-eventdev/test_perf_common.c | 1 -
> app/test-eventdev/test_pipeline_common.c | 5 +-
> app/test-pmd/cmdline.c | 49 +++----
> app/test-pmd/config.c | 22 ++-
> app/test-pmd/parameters.c | 4 +-
> app/test-pmd/testpmd.c | 103 ++++++++------
> app/test-pmd/testpmd.h | 2 +-
> app/test/test_link_bonding.c | 1 -
> app/test/test_link_bonding_mode4.c | 1 -
> app/test/test_link_bonding_rssconf.c | 2 -
> app/test/test_pmd_perf.c | 1 -
> doc/guides/nics/dpaa.rst | 2 +-
> doc/guides/nics/dpaa2.rst | 2 +-
> doc/guides/nics/features.rst | 2 +-
> doc/guides/nics/fm10k.rst | 2 +-
> doc/guides/nics/mlx5.rst | 4 +-
> doc/guides/nics/octeontx.rst | 2 +-
> doc/guides/nics/thunderx.rst | 2 +-
> doc/guides/rel_notes/deprecation.rst | 25 ----
> doc/guides/sample_app_ug/flow_classify.rst | 7 +-
> doc/guides/sample_app_ug/l3_forward.rst | 6 +-
> .../sample_app_ug/l3_forward_access_ctrl.rst | 4 +-
> doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +-
> .../sample_app_ug/l3_forward_power_man.rst | 4 +-
> .../sample_app_ug/performance_thread.rst | 4 +-
> doc/guides/sample_app_ug/skeleton.rst | 7 +-
> drivers/net/atlantic/atl_ethdev.c | 3 -
> drivers/net/avp/avp_ethdev.c | 17 +--
> drivers/net/axgbe/axgbe_ethdev.c | 7 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 6 +-
> drivers/net/bnxt/bnxt_ethdev.c | 21 +--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 +-
> drivers/net/cnxk/cnxk_ethdev.c | 9 +-
> drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +-
> drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-
> drivers/net/cxgbe/cxgbe_main.c | 3 +-
> drivers/net/cxgbe/sge.c | 3 +-
> drivers/net/dpaa/dpaa_ethdev.c | 52 +++----
> drivers/net/dpaa2/dpaa2_ethdev.c | 31 ++---
> drivers/net/e1000/em_ethdev.c | 4 +-
> drivers/net/e1000/igb_ethdev.c | 18 +--
> drivers/net/e1000/igb_rxtx.c | 16 +--
> drivers/net/ena/ena_ethdev.c | 27 ++--
> drivers/net/enetc/enetc_ethdev.c | 24 +---
> drivers/net/enic/enic_ethdev.c | 2 +-
> drivers/net/enic/enic_main.c | 42 +++---
> drivers/net/fm10k/fm10k_ethdev.c | 2 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-
> drivers/net/hns3/hns3_ethdev.c | 42 +-----
> drivers/net/hns3/hns3_ethdev_vf.c | 28 +---
> drivers/net/hns3/hns3_rxtx.c | 10 +-
> drivers/net/i40e/i40e_ethdev.c | 10 +-
> drivers/net/i40e/i40e_rxtx.c | 4 +-
> drivers/net/iavf/iavf_ethdev.c | 9 +-
> drivers/net/ice/ice_dcf_ethdev.c | 5 +-
> drivers/net/ice/ice_ethdev.c | 14 +-
> drivers/net/ice/ice_rxtx.c | 12 +-
> drivers/net/igc/igc_ethdev.c | 51 ++-----
> drivers/net/igc/igc_ethdev.h | 7 +
> drivers/net/igc/igc_txrx.c | 22 +--
> drivers/net/ionic/ionic_ethdev.c | 12 +-
> drivers/net/ionic/ionic_rxtx.c | 6 +-
> drivers/net/ipn3ke/ipn3ke_representor.c | 10 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 35 ++---
> drivers/net/ixgbe/ixgbe_pf.c | 6 +-
> drivers/net/ixgbe/ixgbe_rxtx.c | 15 +-
> drivers/net/liquidio/lio_ethdev.c | 20 +--
> drivers/net/mlx4/mlx4_rxq.c | 17 +--
> drivers/net/mlx5/mlx5_rxq.c | 25 ++--
> drivers/net/mvneta/mvneta_ethdev.c | 7 -
> drivers/net/mvneta/mvneta_rxtx.c | 13 +-
> drivers/net/mvpp2/mrvl_ethdev.c | 34 ++---
> drivers/net/nfp/nfp_common.c | 9 +-
> drivers/net/octeontx/octeontx_ethdev.c | 12 +-
> drivers/net/octeontx2/otx2_ethdev.c | 2 +-
> drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-
> drivers/net/pfe/pfe_ethdev.c | 7 +-
> drivers/net/qede/qede_ethdev.c | 16 +--
> drivers/net/qede/qede_rxtx.c | 8 +-
> drivers/net/sfc/sfc_ethdev.c | 4 +-
> drivers/net/sfc/sfc_port.c | 6 +-
> drivers/net/tap/rte_eth_tap.c | 7 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +-
> drivers/net/txgbe/txgbe_ethdev.c | 7 +-
> drivers/net/txgbe/txgbe_ethdev.h | 4 +
> drivers/net/txgbe/txgbe_ethdev_vf.c | 2 -
> drivers/net/txgbe/txgbe_rxtx.c | 19 +--
> drivers/net/virtio/virtio_ethdev.c | 9 +-
> examples/bbdev_app/main.c | 1 -
> examples/bond/main.c | 1 -
> examples/distributor/main.c | 1 -
> .../pipeline_worker_generic.c | 1 -
> .../eventdev_pipeline/pipeline_worker_tx.c | 1 -
> examples/flow_classify/flow_classify.c | 12 +-
> examples/ioat/ioatfwd.c | 1 -
> examples/ip_fragmentation/main.c | 12 +-
> examples/ip_pipeline/link.c | 2 +-
> examples/ip_reassembly/main.c | 12 +-
> examples/ipsec-secgw/ipsec-secgw.c | 7 +-
> examples/ipv4_multicast/main.c | 9 +-
> examples/kni/main.c | 6 +-
> examples/l2fwd-cat/l2fwd-cat.c | 8 +-
> examples/l2fwd-crypto/main.c | 1 -
> examples/l2fwd-event/l2fwd_common.c | 1 -
> examples/l3fwd-acl/main.c | 129 +++++++++---------
> examples/l3fwd-graph/main.c | 83 +++++++----
> examples/l3fwd-power/main.c | 90 +++++++-----
> examples/l3fwd/main.c | 84 +++++++-----
> .../performance-thread/l3fwd-thread/main.c | 88 +++++++-----
> .../performance-thread/l3fwd-thread/test.sh | 24 ++--
> examples/pipeline/obj.c | 2 +-
> examples/ptpclient/ptpclient.c | 10 +-
> examples/qos_meter/main.c | 1 -
> examples/qos_sched/init.c | 1 -
> examples/rxtx_callbacks/main.c | 10 +-
> examples/skeleton/basicfwd.c | 12 +-
> examples/vhost/main.c | 4 +-
> examples/vm_power_manager/main.c | 11 +-
> lib/ethdev/rte_ethdev.c | 92 +++++++------
> lib/ethdev/rte_ethdev.h | 2 +-
> lib/ethdev/rte_ethdev_trace.h | 2 +-
> 121 files changed, 801 insertions(+), 1071 deletions(-)
>
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-
> eventdev/test_perf_common.c
> index cc100650c21e..660d5a0364b6 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-
> eventdev/test_pipeline_common.c
> index 6ee530d4cdc9..5fcea74b4d43 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct
> evt_options *opt)
> return -EINVAL;
> }
>
> - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
> - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
> + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN;
> + if (port_conf.rxmode.mtu > RTE_ETHER_MTU)
> port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> t->internal_port = 1;
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index a9efd027c376..a677451073ae 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -1892,45 +1892,38 @@ cmd_config_max_pkt_len_parsed(void
> *parsed_result,
> __rte_unused void *data)
> {
> struct cmd_config_max_pkt_len_result *res = parsed_result;
> - uint32_t max_rx_pkt_len_backup = 0;
> - portid_t pid;
> + portid_t port_id;
> int ret;
>
> + if (strcmp(res->name, "max-pkt-len") != 0) {
> + printf("Unknown parameter\n");
> + return;
> + }
> +
> if (!all_ports_stopped()) {
> fprintf(stderr, "Please stop all ports first\n");
> return;
> }
>
> - RTE_ETH_FOREACH_DEV(pid) {
> - struct rte_port *port = &ports[pid];
> + RTE_ETH_FOREACH_DEV(port_id) {
> + struct rte_port *port = &ports[port_id];
>
> - if (!strcmp(res->name, "max-pkt-len")) {
> - if (res->value < RTE_ETHER_MIN_LEN) {
> - fprintf(stderr,
> - "max-pkt-len can not be less
> than %d\n",
> - RTE_ETHER_MIN_LEN);
> - return;
> - }
> - if (res->value == port-
> >dev_conf.rxmode.max_rx_pkt_len)
> - return;
> -
> - ret = eth_dev_info_get_print_err(pid, &port-
> >dev_info);
> - if (ret != 0) {
> - fprintf(stderr,
> - "rte_eth_dev_info_get() failed for
> port %u\n",
> - pid);
> - return;
> - }
> -
> - max_rx_pkt_len_backup = port-
> >dev_conf.rxmode.max_rx_pkt_len;
> + if (res->value < RTE_ETHER_MIN_LEN) {
> + fprintf(stderr,
> + "max-pkt-len can not be less than %d\n",
> + RTE_ETHER_MIN_LEN);
> + return;
> + }
>
> - port->dev_conf.rxmode.max_rx_pkt_len = res->value;
> - if (update_jumbo_frame_offload(pid) != 0)
> - port->dev_conf.rxmode.max_rx_pkt_len =
> max_rx_pkt_len_backup;
> - } else {
> - fprintf(stderr, "Unknown parameter\n");
> + ret = eth_dev_info_get_print_err(port_id, &port->dev_info);
> + if (ret != 0) {
> + fprintf(stderr,
> + "rte_eth_dev_info_get() failed for port %u\n",
> + port_id);
> return;
> }
> +
> + update_jumbo_frame_offload(port_id, res->value);
> }
>
> init_port_config();
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 9c66329e96ee..db3eeffa0093 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1147,7 +1147,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> int diag;
> struct rte_port *rte_port = &ports[port_id];
> struct rte_eth_dev_info dev_info;
> - uint16_t eth_overhead;
> int ret;
>
> if (port_id_is_invalid(port_id, ENABLED_WARN))
> @@ -1164,21 +1163,18 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
> return;
> }
> diag = rte_eth_dev_set_mtu(port_id, mtu);
> - if (diag)
> + if (diag != 0) {
> fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
> - else if (dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - /*
> - * Ether overhead in driver is equal to the difference of
> - * max_rx_pktlen and max_mtu in rte_eth_dev_info when
> the
> - * device supports jumbo frame.
> - */
> - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - if (mtu > RTE_ETHER_MTU) {
> + return;
> + }
> +
> + rte_port->dev_conf.rxmode.mtu = mtu;
> +
> + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if (mtu > RTE_ETHER_MTU)
> rte_port->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - rte_port->dev_conf.rxmode.max_rx_pkt_len =
> - mtu + eth_overhead;
> - } else
> + else
> rte_port->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
> diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
> index 3f94a82e321f..27eb4bc667df 100644
> --- a/app/test-pmd/parameters.c
> +++ b/app/test-pmd/parameters.c
> @@ -870,7 +870,9 @@ launch_args_parse(int argc, char** argv)
> if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) {
> n = atoi(optarg);
> if (n >= RTE_ETHER_MIN_LEN)
> - rx_mode.max_rx_pkt_len = (uint32_t)
> n;
> + rx_mode.mtu = (uint32_t) n -
> + (RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN);
> else
> rte_exit(EXIT_FAILURE,
> "Invalid max-pkt-len=%d -
> should be > %d\n",
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 97ae52e17ecd..8c23cfe7c3da 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -446,13 +446,7 @@ lcoreid_t latencystats_lcore_id = -1;
> /*
> * Ethernet device configuration.
> */
> -struct rte_eth_rxmode rx_mode = {
> - /* Default maximum frame length.
> - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead"
> - * in init_config().
> - */
> - .max_rx_pkt_len = 0,
> -};
> +struct rte_eth_rxmode rx_mode;
>
> struct rte_eth_txmode tx_mode = {
> .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
> @@ -1481,11 +1475,24 @@ check_nb_hairpinq(queueid_t hairpinq)
> return 0;
> }
>
> +static int
> +get_eth_overhead(struct rte_eth_dev_info *dev_info)
> +{
> + uint32_t eth_overhead;
> +
> + if (dev_info->max_mtu != UINT16_MAX &&
> + dev_info->max_rx_pktlen > dev_info->max_mtu)
> + eth_overhead = dev_info->max_rx_pktlen - dev_info-
> >max_mtu;
> + else
> + eth_overhead = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return eth_overhead;
> +}
> +
> static void
> init_config_port_offloads(portid_t pid, uint32_t socket_id)
> {
> struct rte_port *port = &ports[pid];
> - uint16_t data_size;
> int ret;
> int i;
>
> @@ -1496,7 +1503,7 @@ init_config_port_offloads(portid_t pid, uint32_t
> socket_id)
> if (ret != 0)
> rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n");
>
> - ret = update_jumbo_frame_offload(pid);
> + ret = update_jumbo_frame_offload(pid, 0);
> if (ret != 0)
> fprintf(stderr,
> "Updating jumbo frame offload failed for port %u\n",
> @@ -1528,14 +1535,20 @@ init_config_port_offloads(portid_t pid, uint32_t
> socket_id)
> */
> if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX &&
> port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) {
> - data_size = rx_mode.max_rx_pkt_len /
> - port->dev_info.rx_desc_lim.nb_mtu_seg_max;
> -
> - if ((data_size + RTE_PKTMBUF_HEADROOM) >
> mbuf_data_size[0]) {
> - mbuf_data_size[0] = data_size +
> RTE_PKTMBUF_HEADROOM;
> - TESTPMD_LOG(WARNING,
> - "Configured mbuf size of the first
> segment %hu\n",
> - mbuf_data_size[0]);
> + uint32_t eth_overhead = get_eth_overhead(&port-
> >dev_info);
> + uint16_t mtu;
> +
> + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) {
> + uint16_t data_size = (mtu + eth_overhead) /
> + port-
> >dev_info.rx_desc_lim.nb_mtu_seg_max;
> + uint16_t buffer_size = data_size +
> RTE_PKTMBUF_HEADROOM;
> +
> + if (buffer_size > mbuf_data_size[0]) {
> + mbuf_data_size[0] = buffer_size;
> + TESTPMD_LOG(WARNING,
> + "Configured mbuf size of the first
> segment %hu\n",
> + mbuf_data_size[0]);
> + }
> }
> }
> }
> @@ -3451,44 +3464,45 @@ rxtx_port_config(struct rte_port *port)
>
> /*
> * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME
> offload,
> - * MTU is also aligned if JUMBO_FRAME offload is not set.
> + * MTU is also aligned.
> *
> * port->dev_info should be set before calling this function.
> *
> + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU +
> + * ETH_OVERHEAD". This is useful to update flags but not MTU value.
> + *
> * return 0 on success, negative on error
> */
> int
> -update_jumbo_frame_offload(portid_t portid)
> +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen)
> {
> struct rte_port *port = &ports[portid];
> uint32_t eth_overhead;
> uint64_t rx_offloads;
> - int ret;
> + uint16_t mtu, new_mtu;
> bool on;
>
> - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */
> - if (port->dev_info.max_mtu != UINT16_MAX &&
> - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu)
> - eth_overhead = port->dev_info.max_rx_pktlen -
> - port->dev_info.max_mtu;
> - else
> - eth_overhead = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> + eth_overhead = get_eth_overhead(&port->dev_info);
>
> - rx_offloads = port->dev_conf.rxmode.offloads;
> + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) {
> + printf("Failed to get MTU for port %u\n", portid);
> + return -1;
> + }
> +
> + if (max_rx_pktlen == 0)
> + max_rx_pktlen = mtu + eth_overhead;
>
> - /* Default config value is 0 to use PMD specific overhead */
> - if (port->dev_conf.rxmode.max_rx_pkt_len == 0)
> - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU
> + eth_overhead;
> + rx_offloads = port->dev_conf.rxmode.offloads;
> + new_mtu = max_rx_pktlen - eth_overhead;
>
> - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU +
> eth_overhead) {
> + if (new_mtu <= RTE_ETHER_MTU) {
> rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> on = false;
> } else {
> if ((port->dev_info.rx_offload_capa &
> DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> fprintf(stderr,
> "Frame size (%u) is not supported by
> port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len,
> - portid);
> + max_rx_pktlen, portid);
> return -1;
> }
> rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -3509,19 +3523,18 @@ update_jumbo_frame_offload(portid_t portid)
> }
> }
>
> - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
> - * if unset do it here
> - */
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> - ret = eth_dev_set_mtu_mp(portid,
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead);
> - if (ret)
> - fprintf(stderr,
> - "Failed to set MTU to %u for port %u\n",
> - port->dev_conf.rxmode.max_rx_pkt_len -
> eth_overhead,
> - portid);
> + if (mtu == new_mtu)
> + return 0;
> +
> + if (eth_dev_set_mtu_mp(portid, new_mtu) != 0) {
> + fprintf(stderr,
> + "Failed to set MTU to %u for port %u\n",
> + new_mtu, portid);
> + return -1;
> }
>
> + port->dev_conf.rxmode.mtu = new_mtu;
> +
> return 0;
> }
>
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index 5863b2f43f3e..17562215c733 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -1022,7 +1022,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id,
> __rte_unused uint16_t queue,
> __rte_unused void *user_param);
> void add_tx_dynf_callback(portid_t portid);
> void remove_tx_dynf_callback(portid_t portid);
> -int update_jumbo_frame_offload(portid_t portid);
> +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen);
>
> /*
> * Work-around of a compilation error with ICC on invocations of the
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 8a5c8310a8b4..5388d18125a6 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> .split_hdr_size = 0,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/app/test/test_link_bonding_mode4.c
> b/app/test/test_link_bonding_mode4.c
> index 2c835fa7adc7..3e9254fe896d 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_link_bonding_rssconf.c
> b/app/test/test_link_bonding_rssconf.c
> index 5dac60ca1edd..e7bb0497b663 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params
> test_params = {
> static struct rte_eth_conf default_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = {
> static struct rte_eth_conf rss_pmd_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
> index 3a248d512c4a..a3b4f52c65e6 100644
> --- a/app/test/test_pmd_perf.c
> +++ b/app/test/test_pmd_perf.c
> @@ -63,7 +63,6 @@ static struct rte_ether_addr
> ports_eth_addr[RTE_MAX_ETHPORTS];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
> index 7355ec305916..9dad612058c6 100644
> --- a/doc/guides/nics/dpaa.rst
> +++ b/doc/guides/nics/dpaa.rst
> @@ -335,7 +335,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA SoC family support a maximum of a 10240 jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
> index df23a5704dca..831bc564883a 100644
> --- a/doc/guides/nics/dpaa2.rst
> +++ b/doc/guides/nics/dpaa2.rst
> @@ -545,7 +545,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames
> up to 10240 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 4fce8cd1c976..483cb7da576f 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -166,7 +166,7 @@ Jumbo frame
> Supports Rx jumbo frames.
>
> * **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
> - ``dev_conf.rxmode.max_rx_pkt_len``.
> + ``dev_conf.rxmode.mtu``.
> * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
> * **[related] API**: ``rte_eth_dev_set_mtu()``.
>
> diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
> index 7b8ef0e7823d..ed6afd62703d 100644
> --- a/doc/guides/nics/fm10k.rst
> +++ b/doc/guides/nics/fm10k.rst
> @@ -141,7 +141,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The FM10000 family of NICS support a maximum of a 15K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames
> up to 15364 bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index bae73f42d882..1f5619ed53fc 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -606,9 +606,9 @@ Driver options
> and each stride receives one packet. MPRQ can improve throughput for
> small-packet traffic.
>
> - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
> + When MPRQ is enabled, MTU can be larger than the size of
> user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled.
> PMD will
> - configure large stride size enough to accommodate max_rx_pkt_len as long
> as
> + configure large stride size enough to accommodate MTU as long as
> device allows. Note that this can waste system memory compared to
> enabling Rx
> scatter and multi-segment packet.
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index b1a868b054d1..8236cc3e93e0 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -157,7 +157,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame.
> The value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames
> up to 32k bytes can still reach the host interface.
>
> diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst
> index 12d43ce93e28..98f23a2b2a3d 100644
> --- a/doc/guides/nics/thunderx.rst
> +++ b/doc/guides/nics/thunderx.rst
> @@ -392,7 +392,7 @@ Maximum packet length
> ~~~~~~~~~~~~~~~~~~~~~
>
> The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The
> value
> -is fixed and cannot be changed. So, even when the
> ``rxmode.max_rx_pkt_len``
> +is fixed and cannot be changed. So, even when the ``rxmode.mtu``
> member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
> up to 9200 bytes can still reach the host interface.
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index a2fe766d4b4f..1063a1fe4bea 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,31 +81,6 @@ Deprecation Notices
> In 19.11 PMDs will still update the field even when the offload is not
> enabled.
>
> -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will
> be
> - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11.
> - The new ``mtu`` field will be used to configure the initial device MTU via
> - ``rte_eth_dev_configure()`` API.
> - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now.
> - The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to
> store
> - the configured ``mtu`` value,
> - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will
> - be used to store the user configuration request.
> - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME``
> enabled,
> - ``mtu`` field will be always valid.
> - When ``mtu`` config is not provided by the application, default
> ``RTE_ETHER_MTU``
> - value will be used.
> - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set
> successfully,
> - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
> -
> - An application may need to configure device for a specific Rx packet size,
> like for
> - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received
> packet size
> - can't be bigger than Rx buffer size.
> - To cover these cases an application needs to know the device packet
> overhead to be
> - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
> - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept,
> - the device packet overhead can be calculated as:
> - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct
> rte_eth_dev_info).max_mtu``
> -
> * ethdev: ``rx_descriptor_done`` dev_ops and
> ``rte_eth_rx_descriptor_done``
> will be removed in 21.11.
> Existing ``rte_eth_rx_descriptor_status`` and
> ``rte_eth_tx_descriptor_status``
> diff --git a/doc/guides/sample_app_ug/flow_classify.rst
> b/doc/guides/sample_app_ug/flow_classify.rst
> index 812aaa87b05b..6c4c04e935e4 100644
> --- a/doc/guides/sample_app_ug/flow_classify.rst
> +++ b/doc/guides/sample_app_ug/flow_classify.rst
> @@ -162,12 +162,7 @@ Forwarding application is shown below:
> :end-before: >8 End of initializing a given port.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct.
> -
> -.. literalinclude:: ../../../examples/flow_classify/flow_classify.c
> - :language: c
> - :start-after: Ethernet ports configured with default settings using struct. 8<
> - :end-before: >8 End of configuration of Ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/doc/guides/sample_app_ug/l3_forward.rst
> b/doc/guides/sample_app_ug/l3_forward.rst
> index 2d5cd5f1c0ba..56af5cd5b383 100644
> --- a/doc/guides/sample_app_ug/l3_forward.rst
> +++ b/doc/guides/sample_app_ug/l3_forward.rst
> @@ -65,7 +65,7 @@ The application has a number of command line
> options::
> [--lookup LOOKUP_METHOD]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--hash-entry-num]
> [--ipv6]
> @@ -95,9 +95,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet
> destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo,
> maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> index 2cf6e4556f14..486247ac2e4f 100644
> --- a/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_access_ctrl.rst
> @@ -236,7 +236,7 @@ The application has a number of command line
> options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --
> rule_ipv6 FILENAME [--alg=<val>] [--enable-jumbo [--max-pkt-len PKTLEN]] [--
> no-numa] [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> + ./<build_dir>/examples/dpdk-l3fwd-acl [EAL options] -- -p PORTMASK [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)] --rule_ipv4 FILENAME --
> rule_ipv6 FILENAME [--alg=<val>] [--max-pkt-len PKTLEN] [--no-numa] [--eth-
> dest=X,MM:MM:MM:MM:MM:MM]
>
>
> where,
> @@ -255,8 +255,6 @@ where,
> * --alg=<val>: optional, ACL classify method to use, one of:
> ``scalar|sse|avx2|neon|altivec|avx512x16|avx512x32``
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst
> b/doc/guides/sample_app_ug/l3_forward_graph.rst
> index 03e9a85aa68c..0a3e0d44ecea 100644
> --- a/doc/guides/sample_app_ug/l3_forward_graph.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_graph.rst
> @@ -48,7 +48,7 @@ The application has a number of command line options
> similar to l3fwd::
> [-P]
> --config(port,queue,lcore)[,(port,queue,lcore)]
> [--eth-dest=X,MM:MM:MM:MM:MM:MM]
> - [--enable-jumbo [--max-pkt-len PKTLEN]]
> + [--max-pkt-len PKTLEN]
> [--no-numa]
> [--per-port-pool]
>
> @@ -63,9 +63,7 @@ Where,
>
> * ``--eth-dest=X,MM:MM:MM:MM:MM:MM:`` Optional, ethernet
> destination for port X.
>
> -* ``--enable-jumbo:`` Optional, enables jumbo frames.
> -
> -* ``--max-pkt-len:`` Optional, under the premise of enabling jumbo,
> maximum packet length in decimal (64-9600).
> +* ``--max-pkt-len:`` Optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa:`` Optional, disables numa awareness.
>
> diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst
> b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> index 0495314c87d5..8817eaadbfc3 100644
> --- a/doc/guides/sample_app_ug/l3_forward_power_man.rst
> +++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst
> @@ -88,7 +88,7 @@ The application has a number of command line options:
>
> .. code-block:: console
>
> - ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK
> [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--enable-jumbo [--max-
> pkt-len PKTLEN]] [--no-numa]
> + ./<build_dir>/examples/dpdk-l3fwd_power [EAL options] -- -p PORTMASK
> [-P] --config(port,queue,lcore)[,(port,queue,lcore)] [--max-pkt-len PKTLEN] [-
> -no-numa]
>
> where,
>
> @@ -99,8 +99,6 @@ where,
>
> * --config (port,queue,lcore)[,(port,queue,lcore)]: determines which
> queues from which ports are mapped to which cores.
>
> -* --enable-jumbo: optional, enables jumbo frames
> -
> * --max-pkt-len: optional, maximum packet length in decimal (64-9600)
>
> * --no-numa: optional, disables numa awareness
> diff --git a/doc/guides/sample_app_ug/performance_thread.rst
> b/doc/guides/sample_app_ug/performance_thread.rst
> index 9b09838f6448..7d1bf6eaae8c 100644
> --- a/doc/guides/sample_app_ug/performance_thread.rst
> +++ b/doc/guides/sample_app_ug/performance_thread.rst
> @@ -59,7 +59,7 @@ The application has a number of command line
> options::
> -p PORTMASK [-P]
> --rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]
> --tx(lcore,thread)[,(lcore,thread)]
> - [--enable-jumbo] [--max-pkt-len PKTLEN]] [--no-numa]
> + [--max-pkt-len PKTLEN] [--no-numa]
> [--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]
> [--parse-ptype]
>
> @@ -80,8 +80,6 @@ Where:
> the lcore the thread runs on, and the id of RX thread with which it is
> associated. The parameters are explained below.
>
> -* ``--enable-jumbo``: optional, enables jumbo frames.
> -
> * ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).
>
> * ``--no-numa``: optional, disables numa awareness.
> diff --git a/doc/guides/sample_app_ug/skeleton.rst
> b/doc/guides/sample_app_ug/skeleton.rst
> index f7bcd7ed2a1d..6d0de6440105 100644
> --- a/doc/guides/sample_app_ug/skeleton.rst
> +++ b/doc/guides/sample_app_ug/skeleton.rst
> @@ -106,12 +106,7 @@ Forwarding application is shown below:
> :end-before: >8 End of main functional part of port initialization.
>
> The Ethernet ports are configured with default settings using the
> -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct:
> -
> -.. literalinclude:: ../../../examples/skeleton/basicfwd.c
> - :language: c
> - :start-after: Configuration of ethernet ports. 8<
> - :end-before: >8 End of configuration of ethernet ports.
> +``rte_eth_dev_configure()`` function.
>
> For this example the ports are set up with 1 RX and 1 TX queue using the
> ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions.
> diff --git a/drivers/net/atlantic/atl_ethdev.c
> b/drivers/net/atlantic/atl_ethdev.c
> index 0ce35eb519e2..3f654c071566 100644
> --- a/drivers/net/atlantic/atl_ethdev.c
> +++ b/drivers/net/atlantic/atl_ethdev.c
> @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return 0;
> }
>
> diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
> index 623fa5e5ff5b..0feacc822433 100644
> --- a/drivers/net/avp/avp_ethdev.c
> +++ b/drivers/net/avp/avp_ethdev.c
> @@ -1059,17 +1059,18 @@ static int
> avp_dev_enable_scattered(struct rte_eth_dev *eth_dev,
> struct avp_dev *avp)
> {
> - unsigned int max_rx_pkt_len;
> + unsigned int max_rx_pktlen;
>
> - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
>
> - if ((max_rx_pkt_len > avp->guest_mbuf_size) ||
> - (max_rx_pkt_len > avp->host_mbuf_size)) {
> + if (max_rx_pktlen > avp->guest_mbuf_size ||
> + max_rx_pktlen > avp->host_mbuf_size) {
> /*
> * If the guest MTU is greater than either the host or guest
> * buffers then chained mbufs have to be enabled in the TX
> * direction. It is assumed that the application will not need
> - * to send packets larger than their max_rx_pkt_len (MRU).
> + * to send packets larger than their MTU.
> */
> return 1;
> }
> @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
>
> PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u)
> mbuf_size=(%u,%u)\n",
> avp->max_rx_pkt_len,
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN,
> avp->host_mbuf_size,
> avp->guest_mbuf_size);
>
> @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts, uint16_t nb_pkts)
> * function; send it truncated to avoid the
> performance
> * hit of having to manage returning the already
> * allocated buffer to the free list. This should not
> - * happen since the application should have set the
> - * max_rx_pkt_len based on its MTU and it should be
> + * happen since the application should not send
> + * packets larger than its MTU and it should be
> * policing its own packet sizes.
> */
> txq->errors++;
> diff --git a/drivers/net/axgbe/axgbe_ethdev.c
> b/drivers/net/axgbe/axgbe_ethdev.c
> index 9cb4818af11f..76aeec077f2b 100644
> --- a/drivers/net/axgbe/axgbe_ethdev.c
> +++ b/drivers/net/axgbe/axgbe_ethdev.c
> @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
> struct axgbe_port *pdata = dev->data->dev_private;
> int ret;
> struct rte_eth_dev_data *dev_data = dev->data;
> - uint16_t max_pkt_len = dev_data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + uint16_t max_pkt_len;
>
> dev->dev_ops = &axgbe_eth_dev_ops;
>
> @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
>
> rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
> rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
> +
> + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> if ((dev_data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) ||
> max_pkt_len > pdata->rx_buf_size)
> dev_data->scattered_rx = 1;
> @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->port_id);
> return -EBUSY;
> }
> - if (frame_size > AXGBE_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> val = 1;
> @@ -1500,7 +1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> val = 0;
> }
> AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c
> b/drivers/net/bnx2x/bnx2x_ethdev.c
> index 463886f17a58..009a94e9a8fa 100644
> --- a/drivers/net/bnx2x/bnx2x_ethdev.c
> +++ b/drivers/net/bnx2x/bnx2x_ethdev.c
> @@ -175,16 +175,12 @@ static int
> bnx2x_dev_configure(struct rte_eth_dev *dev)
> {
> struct bnx2x_softc *sc = dev->data->dev_private;
> - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>
> int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF);
>
> PMD_INIT_FUNC_TRACE(sc);
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - dev->data->mtu = sc->mtu;
> - }
> + sc->mtu = dev->data->dev_conf.rxmode.mtu;
>
> if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) {
> PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater
> than number of RX queues");
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index aa7e7fdc85fa..8c6f20b75aed 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1157,13 +1157,8 @@ static int bnxt_dev_configure_op(struct
> rte_eth_dev *eth_dev)
> rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE *
> - BNXT_NUM_VLANS;
> - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> - }
> + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu);
> +
> return 0;
>
> resource_error:
> @@ -1201,6 +1196,7 @@ void bnxt_print_link_info(struct rte_eth_dev
> *eth_dev)
> */
> static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> uint16_t buf_size;
> int i;
>
> @@ -1215,7 +1211,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev
> *eth_dev)
>
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq-
> >mb_pool) -
> RTE_PKTMBUF_HEADROOM);
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len >
> buf_size)
> + if (eth_dev->data->mtu + overhead > buf_size)
> return 1;
> }
> return 0;
> @@ -3026,6 +3022,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev,
> __rte_unused uint16_t queue_id,
>
> int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
> {
> + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU;
> struct bnxt *bp = eth_dev->data->dev_private;
> uint32_t new_pkt_size;
> uint32_t rc = 0;
> @@ -3039,8 +3036,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> if (!eth_dev->data->nb_rx_queues)
> return rc;
>
> - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN +
> - VLAN_TAG_SIZE * BNXT_NUM_VLANS;
> + new_pkt_size = new_mtu + overhead;
>
> /*
> * Disallow any MTU change that would require scattered receive
> support
> @@ -3067,7 +3063,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
>
> /* Is there a change in mtu setting? */
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len ==
> new_pkt_size)
> + if (eth_dev->data->mtu == new_mtu)
> return rc;
>
> for (i = 0; i < bp->nr_vnics; i++) {
> @@ -3089,9 +3085,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev,
> uint16_t new_mtu)
> }
> }
>
> - if (!rc)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_pkt_size;
> -
> if (bnxt_hwrm_config_host_mtu(bp))
> PMD_DRV_LOG(WARNING, "Failed to configure host
> MTU\n");
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c
> b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 54987d96b34d..412acff42f65 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -1724,8 +1724,8 @@ slave_configure(struct rte_eth_dev
> *bonded_eth_dev,
> slave_eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_VLAN_FILTER;
>
> - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - bonded_eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + slave_eth_dev->data->dev_conf.rxmode.mtu =
> + bonded_eth_dev->data->dev_conf.rxmode.mtu;
>
> if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
> index 8629193d5049..8d0677cd89d9 100644
> --- a/drivers/net/cnxk/cnxk_ethdev.c
> +++ b/drivers/net/cnxk/cnxk_ethdev.c
> @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp
> *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->qconf.mp);
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD >
> buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> }
> @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct cnxk_eth_rxq_sp *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1;
> /* Setup scatter mode if needed by jumbo */
> nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len -
> CNXK_NIX_L2_OVERHEAD +
> - CNXK_NIX_MAX_VTAG_ACT_SIZE;
> -
> - rc = cnxk_nix_mtu_set(eth_dev, mtu);
> + rc = cnxk_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> plt_err("Failed to set default MTU size, rc=%d", rc);
>
> diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c
> b/drivers/net/cnxk/cnxk_ethdev_ops.c
> index b6cc5286c6d0..695d0d6fd3e2 100644
> --- a/drivers/net/cnxk/cnxk_ethdev_ops.c
> +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
> @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> goto exit;
> }
>
> - frame_size += RTE_ETHER_CRC_LEN;
> -
> - if (frame_size > RTE_ETHER_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> exit:
> return rc;
> }
> diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c
> b/drivers/net/cxgbe/cxgbe_ethdev.c
> index 177eca397600..8cf61f12a8d6 100644
> --- a/drivers/net/cxgbe/cxgbe_ethdev.c
> +++ b/drivers/net/cxgbe/cxgbe_ethdev.c
> @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> return err;
>
> /* Must accommodate at least RTE_ETHER_MIN_MTU */
> - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> + if (mtu < RTE_ETHER_MIN_MTU || new_mtu >
> dev_info.max_rx_pktlen)
> return -EINVAL;
>
> /* set to jumbo mode if needed */
> - if (new_mtu > CXGBE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
>
> err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1,
> -1,
> -1, -1, true);
> - if (!err)
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> new_mtu;
> -
> return err;
> }
>
> @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> const struct rte_eth_rxconf *rx_conf __rte_unused,
> struct rte_mempool *mp)
> {
> - unsigned int pkt_len = eth_dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN;
> struct port_info *pi = eth_dev->data->dev_private;
> struct adapter *adapter = pi->adapter;
> struct rte_eth_dev_info dev_info;
> @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev
> *eth_dev,
> rxq->fl.size = temp_nb_desc;
>
> /* Set to jumbo mode if necessary */
> - if (pkt_len > CXGBE_ETH_MAX_LEN)
> + if (eth_dev->data->mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> diff --git a/drivers/net/cxgbe/cxgbe_main.c
> b/drivers/net/cxgbe/cxgbe_main.c
> index 6dd1bf1f836e..91d6bb9bbcb0 100644
> --- a/drivers/net/cxgbe/cxgbe_main.c
> +++ b/drivers/net/cxgbe/cxgbe_main.c
> @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi)
> unsigned int mtu;
> int ret;
>
> - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
> + mtu = pi->eth_dev->data->mtu;
>
> conf_offloads = pi->eth_dev->data->dev_conf.rxmode.offloads;
>
> diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
> index e5f7721dc4b3..830f5192474d 100644
> --- a/drivers/net/cxgbe/sge.c
> +++ b/drivers/net/cxgbe/sge.c
> @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> u32 wr_mid;
> u64 cntrl, *end;
> bool v6;
> - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len;
> + u32 max_pkt_len;
>
> /* Reject xmit if queue is stopped */
> if (unlikely(txq->flags & EQ_STOPPED))
> @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct
> rte_mbuf *mbuf,
> return 0;
> }
>
> + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> if ((!(m->ol_flags & PKT_TX_TCP_SEG)) &&
> (unlikely(m->pkt_len > max_pkt_len)))
> goto out_free;
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c
> b/drivers/net/dpaa/dpaa_ethdev.c
> index 36d8f9249df1..adbdb87baab9 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
> @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > DPAA_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> fman_if_set_maxfrm(dev->process_private, frame_size);
>
> return 0;
> @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> struct fman_if *fif = dev->process_private;
> struct __fman_if *__fif;
> struct rte_intr_handle *intr_handle;
> + uint32_t max_rx_pktlen;
> int speed, duplex;
> int ret;
>
> @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - DPAA_PMD_DEBUG("enabling jumbo");
> -
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - DPAA_MAX_RX_PKT_LEN)
> - max_len = dev->data-
> >dev_conf.rxmode.max_rx_pkt_len;
> - else {
> - DPAA_PMD_INFO("enabling jumbo override conf
> max len=%d "
> - "supported is %d",
> - dev->data-
> >dev_conf.rxmode.max_rx_pkt_len,
> - DPAA_MAX_RX_PKT_LEN);
> - max_len = DPAA_MAX_RX_PKT_LEN;
> - }
> -
> - fman_if_set_maxfrm(dev->process_private, max_len);
> - dev->data->mtu = max_len
> - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> VLAN_TAG_SIZE;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) {
> + DPAA_PMD_INFO("enabling jumbo override conf max
> len=%d "
> + "supported is %d",
> + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN);
> + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
> }
>
> + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
> +
> if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
> DPAA_PMD_DEBUG("enabling scatter mode");
> fman_if_set_sg(dev->process_private, 1);
> @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> u32 flags = 0;
> int ret;
> u32 buffsz = rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
> return -EINVAL;
> }
>
> +	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
> +			VLAN_TAG_SIZE;
> /* Max packet can fit in single buffer */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
> + if (max_rx_pktlen <= buffsz) {
> ;
> } else if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - buffsz * DPAA_SGT_MAX_ENTRIES) {
> - DPAA_PMD_ERR("max RxPkt size %d too big to fit "
> + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) {
> +			DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit "
>  				"MaxSGlist %d",
> -				dev->data->dev_conf.rxmode.max_rx_pkt_len,
> -				buffsz * DPAA_SGT_MAX_ENTRIES);
> +				max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES);
> rte_errno = EOVERFLOW;
> return -rte_errno;
> }
> @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>  		DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - buffsz - RTE_PKTMBUF_HEADROOM);
> + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
> }
>
> dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
> @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev
> *dev, uint16_t queue_idx,
>
> dpaa_intf->valid = 1;
>  	DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
> - fman_if_get_sg_enable(fif),
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + fman_if_get_sg_enable(fif), max_rx_pktlen);
> /* checking if push mode only, no error check for now */
> if (!rxq->is_static &&
> dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
> diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c
> b/drivers/net/dpaa2/dpaa2_ethdev.c
> index c12169578e22..758a14e0ad2d 100644
> --- a/drivers/net/dpaa2/dpaa2_ethdev.c
> +++ b/drivers/net/dpaa2/dpaa2_ethdev.c
> @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> int tx_l3_csum_offload = false;
> int tx_l4_csum_offload = false;
> int ret, tc_index;
> + uint32_t max_rx_pktlen;
>
> PMD_INIT_FUNC_TRACE();
>
> @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
> tx_offloads, dev_tx_offloads_nodis);
> }
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -	if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
> -		ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> -			priv->token, eth_conf->rxmode.max_rx_pkt_len
> -			- RTE_ETHER_CRC_LEN);
> -		if (ret) {
> -			DPAA2_PMD_ERR(
> -				"Unable to set mtu. check config");
> -			return ret;
> -		}
> -		dev->data->mtu =
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len -
> -			RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN -
> -			VLAN_TAG_SIZE;
> - } else {
> - return -1;
> + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN +
> + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE;
> + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) {
> + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
> + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN);
> + if (ret != 0) {
> + DPAA2_PMD_ERR("Unable to set mtu. check config");
> + return ret;
> }
> + } else {
> + return -1;
> }
>
> if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
> @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> DPAA2_MAX_RX_PKT_LEN)
> return -EINVAL;
>
> - if (frame_size > DPAA2_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /* Set the Max Rx frame length as 'mtu' +
> * Maximum Ethernet header length
> */
> diff --git a/drivers/net/e1000/em_ethdev.c
> b/drivers/net/e1000/em_ethdev.c
> index a0ca371b0275..6f418a36aa04 100644
> --- a/drivers/net/e1000/em_ethdev.c
> +++ b/drivers/net/e1000/em_ethdev.c
> @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> return 0;
> }
>
> diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
> index d80fad01e36d..4c114bf90fc7 100644
> --- a/drivers/net/e1000/igb_ethdev.c
> +++ b/drivers/net/e1000/igb_ethdev.c
> @@ -2681,9 +2681,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> -	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> -		E1000_WRITE_REG(hw, E1000_RLPML,
> -			dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD);
> }
>
> static void
> @@ -2699,10 +2697,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
>
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE);
> + E1000_WRITE_REG(hw, E1000_RLPML,
> + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE);
> }
>
> static int
> @@ -4400,7 +4396,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> rctl = E1000_READ_REG(hw, E1000_RCTL);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > E1000_ETH_MAX_LEN) {
> + if (mtu > RTE_ETHER_MTU) {
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= E1000_RCTL_LPE;
> @@ -4411,11 +4407,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
> E1000_WRITE_REG(hw, E1000_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - E1000_WRITE_REG(hw, E1000_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
>
> return 0;
> }
> diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
> index 278d5d2712af..e9a30d393bd7 100644
> --- a/drivers/net/e1000/igb_rxtx.c
> +++ b/drivers/net/e1000/igb_rxtx.c
> @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure support of jumbo frames, if any.
> */
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> -		uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> rctl |= E1000_RCTL_LPE;
>
> /*
> @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> uint32_t srrctl;
> uint16_t buf_size;
> uint16_t rctl_bsize;
> + uint32_t max_len;
> uint16_t i;
> int ret;
>
> hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> /* setup MTU */
> - e1000_rlpml_set_vf(hw,
> - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE));
> + max_len = dev->data->mtu + E1000_ETH_OVERHEAD;
> + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE));
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
> E1000_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE) >
> buf_size){
> + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG,
> "forcing scatter mode");
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index 4cebf60a68a7..3a9d5031b262 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -679,26 +679,14 @@ static int ena_queue_start_all(struct rte_eth_dev
> *dev,
> return rc;
> }
>
> -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
> -{
> - uint32_t max_frame_len = adapter->max_mtu;
> -
> - if (adapter->edev_data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME)
> -		max_frame_len =
> -			adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - return max_frame_len;
> -}
> -
> static int ena_check_valid_conf(struct ena_adapter *adapter)
> {
> - uint32_t max_frame_len = ena_get_mtu_conf(adapter);
> + uint32_t mtu = adapter->edev_data->mtu;
>
> - if (max_frame_len > adapter->max_mtu || max_frame_len <
> ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_INIT_LOG(ERR,
>  			"Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n",
> -			max_frame_len, adapter->max_mtu, ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return ENA_COM_UNSUPPORTED;
> }
>
> @@ -871,10 +859,10 @@ static int ena_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> ena_dev = &adapter->ena_dev;
> ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
>
> - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) {
> + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) {
> PMD_DRV_LOG(ERR,
>  			"Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n",
> - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU);
> + mtu, adapter->max_mtu, ENA_MIN_MTU);
> return -EINVAL;
> }
>
> @@ -1945,7 +1933,10 @@ static int ena_infos_get(struct rte_eth_dev *dev,
> dev_info->hash_key_size = ENA_HASH_KEY_SIZE;
>
> dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN;
> - dev_info->max_rx_pktlen = adapter->max_mtu;
> +	dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
> + dev_info->min_mtu = ENA_MIN_MTU;
> + dev_info->max_mtu = adapter->max_mtu;
> dev_info->max_mac_addrs = 1;
>
> dev_info->max_rx_queues = adapter->max_num_io_queues;
> diff --git a/drivers/net/enetc/enetc_ethdev.c
> b/drivers/net/enetc/enetc_ethdev.c
> index b496cd470045..cdb9783b5372 100644
> --- a/drivers/net/enetc/enetc_ethdev.c
> +++ b/drivers/net/enetc/enetc_ethdev.c
> @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - if (frame_size > ENETC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads &=
>
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> ENETC_MAC_MAXFRM_SIZE);
> enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 *
> ENETC_MAC_MAXFRM_SIZE);
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> /*setting the MTU*/
> enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> ENETC_SET_MAXFRM(frame_size) |
> ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE));
> @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev)
> struct rte_eth_conf *eth_conf = &dev->data->dev_conf;
> uint64_t rx_offloads = eth_conf->rxmode.offloads;
> uint32_t checksum = L3_CKSUM | L4_CKSUM;
> + uint32_t max_len;
>
> PMD_INIT_FUNC_TRACE();
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - uint32_t max_len;
> -
> - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> -
> - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM,
> - ENETC_SET_MAXFRM(max_len));
> - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0),
> - ENETC_MAC_MAXFRM_SIZE);
> - enetc_port_wr(enetc_hw, ENETC_PTXMBAR,
> - 2 * ENETC_MAC_MAXFRM_SIZE);
> -		dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN -
> -			RTE_ETHER_CRC_LEN;
> - }
> +	max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN +
> +		  RTE_ETHER_CRC_LEN;
> +	enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len));
> +	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
> +	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
>
> if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
> int config;
> diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
> index 8d5797523b8f..6a81ceb62ba7 100644
> --- a/drivers/net/enic/enic_ethdev.c
> +++ b/drivers/net/enic/enic_ethdev.c
> @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev
> *eth_dev,
> * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is
> * a hint to the driver to size receive buffers accordingly so that
> * larger-than-vnic-mtu packets get truncated.. For DPDK, we let
> - * the user decide the buffer size via rxmode.max_rx_pkt_len,
> basically
> + * the user decide the buffer size via rxmode.mtu, basically
> * ignoring vNIC mtu.
> */
>  	device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu);
> diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
> index 2affd380c6a4..dfc7f5d1f94f 100644
> --- a/drivers/net/enic/enic_main.c
> +++ b/drivers/net/enic/enic_main.c
> @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct
> vnic_rq *rq)
> struct rq_enet_desc *rqd = rq->ring.descs;
> unsigned i;
> dma_addr_t dma_addr;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint16_t rq_buf_len;
>
> if (!rq->in_use)
> @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic,
> struct vnic_rq *rq)
>
> /*
> * If *not* using scatter and the mbuf size is greater than the
> - * requested max packet size (max_rx_pkt_len), then reduce the
> - * posted buffer size to max_rx_pkt_len. HW still receives packets
> - * larger than max_rx_pkt_len, but they will be truncated, which we
> + * requested max packet size (mtu + eth overhead), then reduce the
> + * posted buffer size to max packet size. HW still receives packets
> + * larger than max packet size, but they will be truncated, which we
> * drop in the rx handler. Not ideal, but better than returning
> * large packets when the user is not expecting them.
> */
> -	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
> rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) -
> RTE_PKTMBUF_HEADROOM;
> - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable)
> - rq_buf_len = max_rx_pkt_len;
> + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable)
> + rq_buf_len = max_rx_pktlen;
> for (i = 0; i < rq->ring.desc_count; i++, rqd++) {
> mb = rte_mbuf_raw_alloc(rq->mp);
> if (mb == NULL) {
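
In plain terms, the comment above says: with scatter off, post buffers no larger
than the MTU-derived frame, so oversized packets are truncated by HW and dropped
in the Rx handler. A rough worked example, assuming a 2048-byte mbuf data room
and that enic's MTU-to-frame conversion adds the Ethernet header and CRC
(scatter_enabled below stands in for the driver's per-queue flag):

    uint16_t rq_buf_len = 2048 - RTE_PKTMBUF_HEADROOM;              /* 1920 usable bytes */
    uint32_t max_frame  = 1500 + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; /* 1518 */
    if (max_frame < rq_buf_len && !scatter_enabled)
            rq_buf_len = (uint16_t)max_frame;   /* larger frames get truncated */
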
> @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> unsigned int mbuf_size, mbufs_per_pkt;
> unsigned int nb_sop_desc, nb_data_desc;
> uint16_t min_sop, max_sop, min_data, max_data;
> - uint32_t max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
>
> /*
> * Representor uses a reserved PF queue. Translate representor
> @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
>
> mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM);
> - /* max_rx_pkt_len includes the ethernet header and CRC. */
> -	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> +	/* max_rx_pktlen includes the ethernet header and CRC. */
> +	max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu);
>
> if (enic->rte_dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_SCATTER) {
> dev_info(enic, "Rq %u Scatter rx mode enabled\n",
> queue_idx);
> /* ceil((max pkt len)/mbuf_size) */
> - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) /
> mbuf_size;
> + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) /
> mbuf_size;
> } else {
> dev_info(enic, "Scatter rx mode disabled\n");
> mbufs_per_pkt = 1;
> - if (max_rx_pkt_len > mbuf_size) {
> + if (max_rx_pktlen > mbuf_size) {
>  			dev_warning(enic, "The maximum Rx packet size (%u) is"
> " larger than the mbuf size (%u), and"
> " scatter is disabled. Larger packets will"
> " be truncated.\n",
> - max_rx_pkt_len, mbuf_size);
> + max_rx_pktlen, mbuf_size);
> }
> }
>
> @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> rq_sop->data_queue_enable = 1;
> rq_data->in_use = 1;
> /*
> -	 * HW does not directly support rxmode.max_rx_pkt_len. HW always
> +	 * HW does not directly support MTU. HW always
> * receives packet sizes up to the "max" MTU.
> * If not using scatter, we can achieve the effect of dropping
> * larger packets by reducing the size of posted buffers.
> * See enic_alloc_rx_queue_mbufs().
> */
> -		if (max_rx_pkt_len <
> -		    enic_mtu_to_max_rx_pktlen(enic->max_mtu)) {
> -			dev_warning(enic, "rxmode.max_rx_pkt_len is ignored"
> -				" when scatter rx mode is in use.\n");
> +		if (enic->rte_dev->data->mtu < enic->max_mtu) {
> +			dev_warning(enic,
> +				"mtu is ignored when scatter rx mode is in use.\n");
> }
> } else {
> dev_info(enic, "Rq %u Scatter rx mode not being used\n",
> @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t
> queue_idx,
> if (mbufs_per_pkt > 1) {
>  		dev_info(enic, "For max packet size %u and mbuf size %u valid"
> " rx descriptor range is %u to %u\n",
> - max_rx_pkt_len, mbuf_size, min_sop + min_data,
> + max_rx_pktlen, mbuf_size, min_sop + min_data,
> max_sop + max_data);
> }
> dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n",
> @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t
> new_mtu)
>  			"MTU (%u) is greater than value configured in NIC (%u)\n",
> new_mtu, config_mtu);
>
> - /* Update the MTU and maximum packet length */
> - eth_dev->data->mtu = new_mtu;
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - enic_mtu_to_max_rx_pktlen(new_mtu);
> -
> /*
> * If the device has not started (enic_enable), nothing to do.
> * Later, enic_enable() will set up RQs reflecting the new maximum
> diff --git a/drivers/net/fm10k/fm10k_ethdev.c
> b/drivers/net/fm10k/fm10k_ethdev.c
> index 3236290e4021..5e4b361ca6c0 100644
> --- a/drivers/net/fm10k/fm10k_ethdev.c
> +++ b/drivers/net/fm10k/fm10k_ethdev.c
> @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
> FM10K_SRRCTL_LOOPBACK_SUPPRESS);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> -		if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
> +		if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
>  			2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
> rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
> uint32_t reg;
> diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c
> b/drivers/net/hinic/hinic_pmd_ethdev.c
> index c01e2ec1d450..2d8271cb6095 100644
> --- a/drivers/net/hinic/hinic_pmd_ethdev.c
> +++ b/drivers/net/hinic/hinic_pmd_ethdev.c
> @@ -315,19 +315,19 @@ static int hinic_dev_configure(struct rte_eth_dev
> *dev)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_RSS_HASH;
>
> /* mtu size is 256~9600 */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> HINIC_MIN_FRAME_SIZE ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - HINIC_MAX_JUMBO_FRAME_SIZE) {
> + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + HINIC_MIN_FRAME_SIZE ||
> + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + HINIC_MAX_JUMBO_FRAME_SIZE) {
> PMD_DRV_LOG(ERR,
> -		"Max rx pkt len out of range, get max_rx_pkt_len:%d, "
> + "Packet length out of range, get packet length:%d, "
> "expect between %d and %d",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> +		HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu),
> HINIC_MIN_FRAME_SIZE,
> HINIC_MAX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
>
> -	nic_dev->mtu_size =
> -		HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu;
>
> /* rss template */
> err = hinic_config_mq_mode(dev, TRUE);
> @@ -1530,7 +1530,6 @@ static void hinic_deinit_mac_addr(struct
> rte_eth_dev *eth_dev)
> static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
> {
> struct hinic_nic_dev *nic_dev =
> HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> - uint32_t frame_size;
> int ret = 0;
>
>  	PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
> @@ -1548,16 +1547,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev
> *dev, uint16_t mtu)
> return ret;
> }
>
> - /* update max frame size */
> - frame_size = HINIC_MTU_TO_PKTLEN(mtu);
> - if (frame_size > HINIC_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> nic_dev->mtu_size = mtu;
>
> return ret;
> diff --git a/drivers/net/hns3/hns3_ethdev.c
> b/drivers/net/hns3/hns3_ethdev.c
> index 7d37004972bf..4ead227f9122 100644
> --- a/drivers/net/hns3/hns3_ethdev.c
> +++ b/drivers/net/hns3/hns3_ethdev.c
> @@ -2371,41 +2371,6 @@ hns3_init_ring_with_vector(struct hns3_hw *hw)
> return 0;
> }
>
> -static int
> -hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
> -{
> - struct hns3_adapter *hns = dev->data->dev_private;
> - struct hns3_hw *hw = &hns->hw;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> - int ret;
> -
> - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
> - return 0;
> -
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> -		hns3_err(hw, "maximum Rx packet length must be greater than %u "
> - "and no more than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - return -EINVAL;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3_dev_mtu_set(dev, mtu);
> - if (ret)
> - return ret;
> - dev->data->mtu = mtu;
> -
> - return 0;
> -}
> -
> static int
> hns3_setup_dcb(struct rte_eth_dev *dev)
> {
> @@ -2520,8 +2485,8 @@ hns3_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - ret = hns3_refresh_mtu(dev, conf);
> - if (ret)
> + ret = hns3_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> goto cfg_err;
>
> ret = hns3_mbuf_dyn_rx_timestamp_register(dev, conf);
> @@ -2616,7 +2581,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> rte_spinlock_lock(&hw->lock);
> - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true :
> false;
> + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
> frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);
>
> /*
> @@ -2637,7 +2602,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_ethdev_vf.c
> b/drivers/net/hns3/hns3_ethdev_vf.c
> index 8d9b7979c806..0b5db486f8d6 100644
> --- a/drivers/net/hns3/hns3_ethdev_vf.c
> +++ b/drivers/net/hns3/hns3_ethdev_vf.c
> @@ -784,8 +784,6 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> uint16_t nb_rx_q = dev->data->nb_rx_queues;
> uint16_t nb_tx_q = dev->data->nb_tx_queues;
> struct rte_eth_rss_conf rss_conf;
> - uint32_t max_rx_pkt_len;
> - uint16_t mtu;
> bool gro_en;
> int ret;
>
> @@ -825,28 +823,9 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
> goto cfg_err;
> }
>
> - /*
> - * If jumbo frames are enabled, MTU needs to be refreshed
> - * according to the maximum RX packet length.
> - */
> - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
> - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
> - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
> -			hns3_err(hw, "maximum Rx packet length must be greater "
> -				 "than %u and less than %u when jumbo frame enabled.",
> - (uint16_t)HNS3_DEFAULT_FRAME_LEN,
> - (uint16_t)HNS3_MAX_FRAME_LEN);
> - ret = -EINVAL;
> - goto cfg_err;
> - }
> -
> - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len);
> - ret = hns3vf_dev_mtu_set(dev, mtu);
> - if (ret)
> - goto cfg_err;
> - dev->data->mtu = mtu;
> - }
> + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu);
> + if (ret != 0)
> + goto cfg_err;
>
> ret = hns3vf_dev_configure_vlan(dev);
> if (ret)
> @@ -935,7 +914,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> else
> dev->data->dev_conf.rxmode.offloads &=
>
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> rte_spinlock_unlock(&hw->lock);
>
> return 0;
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index 481872e3957f..a260212f73f1 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -1735,18 +1735,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw
> *hw, uint16_t buf_size,
> uint16_t nb_desc)
> {
> struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
> - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode;
> eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
> + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD;
> uint16_t min_vec_bds;
>
> /*
> * HNS3 hardware network engine set scattered as default. If the
> driver
> * is not work in scattered mode and the pkts greater than buf_size
> -	 * but smaller than max_rx_pkt_len will be distributed to multiple BDs.
> + * but smaller than frame size will be distributed to multiple BDs.
> * Driver cannot handle this situation.
> */
> -	if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) {
> -		hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater "
> + if (!hw->data->scattered_rx && frame_size > buf_size) {
> + hns3_err(hw, "frame size is not allowed to be set greater "
> "than rx_buf_len if scattered is off.");
> return -EINVAL;
> }
> @@ -1958,7 +1958,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
> }
>
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
> - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
> + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len)
> dev->data->scattered_rx = true;
> }
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index bd97d93dd746..ab571a921f9e 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -11775,14 +11775,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > I40E_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return ret;
> }
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index d5847ac6b546..1d27cf2b0a01 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -2909,8 +2909,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
> }
>
> rxq->max_pkt_len =
> -		RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
> -			rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
> +		RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len,
> +			data->mtu + I40E_ETH_OVERHEAD);
> if (data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
> rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
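
A side note on the RTE_MIN() above: the per-queue max packet length is bounded
both by what the configured MTU implies and by how many buffers the HW can
chain. A hedged numeric sketch, assuming a chain length of 5, 2048-byte buffers,
and an I40E_ETH_OVERHEAD of 26 bytes (header + CRC + two VLAN tags):

    uint32_t chain_cap   = 5 * 2048;                        /* 10240 bytes of chained buffers */
    uint32_t mtu_frame   = 9000 + 26;                       /* mtu + I40E_ETH_OVERHEAD = 9026 */
    uint32_t max_pkt_len = RTE_MIN(chain_cap, mtu_frame);   /* 9026: the MTU is the limit here */
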
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 5a5a7f59e152..0eabce275d92 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -576,13 +576,14 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct
> iavf_rx_queue *rxq)
>  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> struct rte_eth_dev_data *dev_data = dev->data;
> uint16_t buf_size, max_pkt_len;
> + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD;
>
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
>
> /* Calculate the maximum packet length allowed */
> max_pkt_len = RTE_MIN((uint32_t)
> rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> @@ -839,7 +840,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
>
> adapter->stopped = 0;
>
> - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD;
> vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
> dev->data->nb_tx_queues);
> num_queue_pairs = vf->num_queue_pairs;
> @@ -1472,15 +1473,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IAVF_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return ret;
> }
>
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> b/drivers/net/ice/ice_dcf_ethdev.c
> index 4e4cdbcd7d71..c3c7ad88f250 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -66,9 +66,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct
> ice_rx_queue *rxq)
> buf_size = rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM;
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> - max_pkt_len = RTE_MIN((uint32_t)
> - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> +	max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> + dev->data->mtu + ICE_ETH_OVERHEAD);
>
> /* Check if the jumbo frame and maximum packet length are set
> * correctly.
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 9ab7704ff003..8ee1335ac6cf 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -3603,8 +3603,8 @@ ice_dev_start(struct rte_eth_dev *dev)
> pf->adapter_stopped = false;
>
> /* Set the max frame size to default value*/
> - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ?
> - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len :
> + max_frame_size = pf->dev_data->mtu ?
> + pf->dev_data->mtu + ICE_ETH_OVERHEAD :
> ICE_FRAME_SIZE_MAX;
>
> /* Set the max frame size to HW*/
> @@ -3992,14 +3992,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EBUSY;
> }
>
> - if (frame_size > ICE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> return 0;
> }
> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> index 83fb788e6930..f9ef6ce57277 100644
> --- a/drivers/net/ice/ice_rxtx.c
> +++ b/drivers/net/ice/ice_rxtx.c
> @@ -271,15 +271,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> uint32_t rxdid = ICE_RXDID_COMMS_OVS;
> uint32_t regval;
> struct ice_adapter *ad = rxq->vsi->adapter;
> + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
>
> /* Set buffer size as the head split is disabled. */
> buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> RTE_PKTMBUF_HEADROOM);
> rxq->rx_hdr_len = 0;
> rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 <<
> ICE_RLAN_CTX_DBUF_S));
> -	rxq->max_pkt_len = RTE_MIN((uint32_t)
> -			   ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> -			   dev_data->dev_conf.rxmode.max_rx_pkt_len);
> +	rxq->max_pkt_len =
> +		RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
> +			frame_size);
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
> @@ -385,11 +386,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue
> *rxq)
> return -EINVAL;
> }
>
> - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
> - RTE_PKTMBUF_HEADROOM);
> -
> /* Check if scattered RX needs to be used. */
> - if (rxq->max_pkt_len > buf_size)
> + if (frame_size > buf_size)
> dev_data->scattered_rx = 1;
>
> rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx);
> diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
> index 224a0954836b..b26723064b07 100644
> --- a/drivers/net/igc/igc_ethdev.c
> +++ b/drivers/net/igc/igc_ethdev.c
> @@ -20,13 +20,6 @@
>
> #define IGC_INTEL_VENDOR_ID 0x8086
>
> -/*
> - * The overhead from MTU to max frame size.
> - * Considering VLAN so tag needs to be counted.
> - */
> -#define IGC_ETH_OVERHEAD	(RTE_ETHER_HDR_LEN + \
> -				RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE)
> -
> #define IGC_FC_PAUSE_TIME 0x0680
> #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */
> #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */
> @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> /* switch to jumbo mode if needed */
> if (mtu > RTE_ETHER_MTU) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl |= IGC_RCTL_LPE;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> rctl &= ~IGC_RCTL_LPE;
> }
> IGC_WRITE_REG(hw, IGC_RCTL, rctl);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> return 0;
> }
> @@ -2486,6 +2473,7 @@ static int
> igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev
> *dev)
> if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <
> - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
> + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) {
>  		PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU);
> + frame_size, VLAN_TAG_SIZE +
> RTE_ETHER_MIN_MTU);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext &
> ~IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> @@ -2519,6 +2498,7 @@ static int
> igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
> {
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD;
> uint32_t ctrl_ext;
>
> ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT);
> @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev
> *dev)
> if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN)
> return 0;
>
> - if ((dev->data->dev_conf.rxmode.offloads &
> - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
> - goto write_ext_vlan;
> -
> /* Update maximum packet length */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) {
> + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) {
>  		PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE);
> + frame_size, MAX_RX_JUMBO_FRAME_SIZE);
> return -EINVAL;
> }
> - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE;
> - IGC_WRITE_REG(hw, IGC_RLPML,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
>
> -write_ext_vlan:
> IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext |
> IGC_CTRL_EXT_EXT_VLAN);
> return 0;
> }
> diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
> index 7b6c209df3b6..b3473b5b1646 100644
> --- a/drivers/net/igc/igc_ethdev.h
> +++ b/drivers/net/igc/igc_ethdev.h
> @@ -35,6 +35,13 @@ extern "C" {
> #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE
> #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE *
> IGC_HKEY_MAX_INDEX)
>
> +/*
> + * The overhead from MTU to max frame size.
> + * Considering VLAN so tag needs to be counted.
> + */
> +#define IGC_ETH_OVERHEAD	(RTE_ETHER_HDR_LEN + \
> +				RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2)
> +
> /*
> * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN
> should be
> * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary.
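
For reference, with the dual-VLAN overhead this header now counts, a standard
1500-byte MTU maps to:

    1500 (MTU) + 14 (RTE_ETHER_HDR_LEN) + 4 (RTE_ETHER_CRC_LEN) + 2 * 4 (VLAN tags)
    = 1526 bytes

which is the frame_size the eth_igc_mtu_set() hunk above writes to IGC_RLPML.
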
> diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
> index b5489eedd220..28d3076439c3 100644
> --- a/drivers/net/igc/igc_txrx.c
> +++ b/drivers/net/igc/igc_txrx.c
> @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> struct igc_rx_queue *rxq;
> struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
> uint64_t offloads = dev->data->dev_conf.rxmode.offloads;
> -	uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen;
> uint32_t rctl;
> uint32_t rxcsum;
> uint16_t buf_size;
> @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
>
> /* Configure support of jumbo frames, if any. */
> - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> + if ((offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) != 0)
> rctl |= IGC_RCTL_LPE;
> -
> - /*
> -	 * Set maximum packet length by default, and might be updated
> - * together with enabling/disabling dual VLAN.
> - */
> - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len);
> - } else {
> + else
> rctl &= ~IGC_RCTL_LPE;
> - }
> +
> + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD;
> + /*
> + * Set maximum packet length by default, and might be updated
> + * together with enabling/disabling dual VLAN.
> + */
> + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen);
>
> /* Configure and enable each RX queue. */
> rctl_bsize = 0;
> @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev)
> IGC_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN
> */
> - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size)
> + if (max_rx_pktlen > buf_size)
> dev->data->scattered_rx = 1;
> } else {
> /*
> diff --git a/drivers/net/ionic/ionic_ethdev.c
> b/drivers/net/ionic/ionic_ethdev.c
> index e6207939665e..97447a10e46a 100644
> --- a/drivers/net/ionic/ionic_ethdev.c
> +++ b/drivers/net/ionic/ionic_ethdev.c
> @@ -343,25 +343,15 @@ static int
> ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
> {
> struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev);
> - uint32_t max_frame_size;
> int err;
>
> IONIC_PRINT_CALL();
>
> /*
> * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU
> - * is done by the the API.
> + * is done by the API.
> */
>
> - /*
> - * Max frame size is MTU + Ethernet header + VLAN + QinQ
> - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC)
> - */
> - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4;
> -
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len <
> max_frame_size)
> - return -EINVAL;
> -
> err = ionic_lif_change_mtu(lif, mtu);
> if (err)
> return err;
> diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
> index b83ea1bcaa6a..3f5fc66abf71 100644
> --- a/drivers/net/ionic/ionic_rxtx.c
> +++ b/drivers/net/ionic/ionic_rxtx.c
> @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
> struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
> struct rte_mbuf *rxm, *rxm_seg;
> uint32_t max_frame_size =
> -		rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint64_t pkt_flags = 0;
> uint32_t pkt_type;
> struct ionic_rx_stats *stats = &rxq->stats;
> @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len)
> int __rte_cold
> ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t
> rx_queue_id)
> {
> -	uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t frame_size = eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> uint8_t *rx_queue_state = eth_dev->data->rx_queue_state;
> struct ionic_rx_qcq *rxq;
> int err;
> @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts,
> {
> struct ionic_rx_qcq *rxq = rx_queue;
> uint32_t frame_size =
> -		rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN;
> struct ionic_rx_service service_cb_arg;
>
> service_cb_arg.rx_pkts = rx_pkts;
> diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c
> b/drivers/net/ipn3ke/ipn3ke_representor.c
> index 589d9fa5877d..3634c0c8c5f0 100644
> --- a/drivers/net/ipn3ke/ipn3ke_representor.c
> +++ b/drivers/net/ipn3ke/ipn3ke_representor.c
> @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev
> *ethdev, uint16_t mtu)
> return -EBUSY;
> }
>
> - if (frame_size > IPN3KE_ETH_MAX_LEN)
> - dev_data->dev_conf.rxmode.offloads |=
> - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
> + if (mtu > RTE_ETHER_MTU)
> + dev_data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> - dev_data->dev_conf.rxmode.offloads &=
> - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
> -
> - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + dev_data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (rpst->i40e_pf_eth) {
>  		ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c
> b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 47693c0c47cd..31e67d86e77b 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -5174,7 +5174,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct ixgbe_hw *hw;
> struct rte_eth_dev_info dev_info;
> uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD;
> - struct rte_eth_dev_data *dev_data = dev->data;
> int ret;
>
> ret = ixgbe_dev_info_get(dev, &dev_info);
> @@ -5188,9 +5187,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> /* If device is started, refuse mtu that requires the support of
> * scattered packets when this feature has not been enabled before.
> */
> - if (dev_data->dev_started && !dev_data->scattered_rx &&
> - (frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + if (dev->data->dev_started && !dev->data->scattered_rx &&
> + frame_size + 2 * IXGBE_VLAN_TAG_SIZE >
> + dev->data->min_rx_buf_size -
> RTE_PKTMBUF_HEADROOM) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -5199,23 +5198,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
>
> /* switch to jumbo mode if needed */
> - if (frame_size > IXGBE_ETH_MAX_LEN) {
> - dev->data->dev_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> + if (mtu > RTE_ETHER_MTU) {
> + dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> } else {
> - dev->data->dev_conf.rxmode.offloads &=
> - ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> + dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> }
> IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
>
> return 0;
> @@ -6272,12 +6266,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev
> *dev,
> * set as 0x4.
> */
> if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_JUMBO_FRAME);
> + (dev->data->mtu + IXGBE_ETH_OVERHEAD >=
> IXGBE_MAX_JUMBO_FRAME_SIZE))
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> IXGBE_MMW_SIZE_JUMBO_FRAME);
> else
> - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> - IXGBE_MMW_SIZE_DEFAULT);
> + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
> IXGBE_MMW_SIZE_DEFAULT);
>
> /* Set RTTBCNRC of queue X */
> IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx);
> @@ -6549,8 +6541,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> - if (mtu < RTE_ETHER_MIN_MTU ||
> - max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> + if (mtu < RTE_ETHER_MIN_MTU || max_frame >
> RTE_ETHER_MAX_JUMBO_FRAME_LEN)
> return -EINVAL;
>
> /* If device is started, refuse mtu that requires the support of
> @@ -6558,7 +6549,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> */
> if (dev_data->dev_started && !dev_data->scattered_rx &&
> (max_frame + 2 * IXGBE_VLAN_TAG_SIZE >
> - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
> + dev->data->min_rx_buf_size -
> RTE_PKTMBUF_HEADROOM)) {
> PMD_INIT_LOG(ERR, "Stop port first.");
> return -EINVAL;
> }
> @@ -6575,8 +6566,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (ixgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
> index fbf2b17d160f..9bcbc445f2d0 100644
> --- a/drivers/net/ixgbe/ixgbe_pf.c
> +++ b/drivers/net/ixgbe/ixgbe_pf.c
> @@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf,
> uint32_t *msgbuf)
> * if PF has jumbo frames enabled which means
> legacy
> * VFs are disabled.
> */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + if (dev->data->mtu > RTE_ETHER_MTU)
> break;
> /* fall through */
> default:
> @@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf,
> uint32_t *msgbuf)
> * legacy VFs.
> */
> if (max_frame > IXGBE_ETH_MAX_LEN ||
> - dev->data->dev_conf.rxmode.max_rx_pkt_len >
> - IXGBE_ETH_MAX_LEN)
> + dev->data->mtu > RTE_ETHER_MTU)
> return -1;
> break;
> }
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index bfdfd5e755de..03991711fd6e 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -5063,6 +5063,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> uint16_t buf_size;
> uint16_t i;
> struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> int rc;
>
> PMD_INIT_FUNC_TRACE();
> @@ -5098,7 +5099,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> hlreg0 |= IXGBE_HLREG0_JUMBOEN;
> maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
> maxfrs &= 0x0000FFFF;
> - maxfrs |= (rx_conf->max_rx_pkt_len << 16);
> + maxfrs |= (frame_size << 16);
> IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs);
> } else
> hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
> @@ -5172,8 +5173,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
> IXGBE_SRRCTL_BSIZEPKT_SHIFT);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE >
> buf_size)
> + if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -5653,6 +5653,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> struct ixgbe_hw *hw;
> struct ixgbe_rx_queue *rxq;
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD;
> uint64_t bus_addr;
> uint32_t srrctl, psrtype = 0;
> uint16_t buf_size;
> @@ -5689,10 +5690,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * ixgbevf_rlpml_set_vf even if jumbo frames are not used. This way,
> * VF packets received can work in all cases.
> */
> - if (ixgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + if (ixgbevf_rlpml_set_vf(hw, frame_size) != 0) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + frame_size);
> return -EINVAL;
> }
>
> @@ -5751,8 +5751,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> - 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> + (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
>  			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
> dev->data->scattered_rx = 1;
> diff --git a/drivers/net/liquidio/lio_ethdev.c
> b/drivers/net/liquidio/lio_ethdev.c
> index b72060a4499b..976916f870a5 100644
> --- a/drivers/net/liquidio/lio_ethdev.c
> +++ b/drivers/net/liquidio/lio_ethdev.c
> @@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> {
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t pf_mtu = lio_dev->linfo.link.s.mtu;
> - uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> struct lio_dev_ctrl_cmd ctrl_cmd;
> struct lio_ctrl_pkt ctrl_pkt;
>
> @@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> return -1;
> }
>
> - if (frame_len > LIO_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> eth_dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> eth_dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
> - eth_dev->data->mtu = mtu;
> -
> return 0;
> }
>
> @@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev)
> static int
> lio_dev_start(struct rte_eth_dev *eth_dev)
> {
> - uint16_t mtu;
> -	uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
> struct lio_device *lio_dev = LIO_DEV(eth_dev);
> uint16_t timeout = LIO_MAX_CMD_TIMEOUT;
> int ret = 0;
> @@ -1442,15 +1436,9 @@ lio_dev_start(struct rte_eth_dev *eth_dev)
> goto dev_mtu_set_error;
> }
>
> - mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN);
> - if (mtu < RTE_ETHER_MIN_MTU)
> - mtu = RTE_ETHER_MIN_MTU;
> -
> - if (eth_dev->data->mtu != mtu) {
> - ret = lio_dev_mtu_set(eth_dev, mtu);
> - if (ret)
> - goto dev_mtu_set_error;
> - }
> + ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu);
> + if (ret != 0)
> + goto dev_mtu_set_error;
>
> return 0;
>
> diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
> index 978cbb8201ea..4a5cfd22aa71 100644
> --- a/drivers/net/mlx4/mlx4_rxq.c
> +++ b/drivers/net/mlx4/mlx4_rxq.c
> @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> int ret;
> uint32_t crc_present;
> uint64_t offloads;
> + uint32_t max_rx_pktlen;
>
> offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
>
> @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> };
> /* Enable scattered packets support for this queue if necessary. */
> MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM);
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
> - (mb_len - RTE_PKTMBUF_HEADROOM)) {
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
> ;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> - uint32_t size =
> - RTE_PKTMBUF_HEADROOM +
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen;
> uint32_t sges_n;
>
> /*
> @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> /* Make sure sges_n did not overflow. */
> size = mb_len * (1 << rxq->sges_n);
> size -= RTE_PKTMBUF_HEADROOM;
> - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
> + if (size < max_rx_pktlen) {
> rte_errno = EOVERFLOW;
> ERROR("%p: too many SGEs (%u) needed to handle"
> " requested maximum packet size %u",
> (void *)dev,
> - 1 << sges_n,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + 1 << sges_n, max_rx_pktlen);
> goto error;
> }
> } else {
> WARN("%p: the requested maximum Rx packet size (%u) is"
> " larger than a single mbuf (%u) and scattered"
> " mode has not been requested",
> - (void *)dev,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + (void *)dev, max_rx_pktlen,
> mb_len - RTE_PKTMBUF_HEADROOM);
> }
> DEBUG("%p: maximum number of segments per packet: %u",
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index abd8ce798986..6f4f351222d3 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1330,10 +1330,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> uint64_t offloads = conf->offloads |
> dev->data->dev_conf.rxmode.offloads;
> unsigned int lro_on_queue = !!(offloads &
> DEV_RX_OFFLOAD_TCP_LRO);
> - unsigned int max_rx_pkt_len = lro_on_queue ?
> + unsigned int max_rx_pktlen = lro_on_queue ?
> dev->data->dev_conf.rxmode.max_lro_pkt_size :
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
> +		dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN +
> +		RTE_ETHER_CRC_LEN;
> +	unsigned int non_scatter_min_mbuf_size = max_rx_pktlen +
>  				RTE_PKTMBUF_HEADROOM;
> unsigned int max_lro_size = 0;
> unsigned int first_mb_free_size = mb_len -
> RTE_PKTMBUF_HEADROOM;
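
Side note: with this change the queue sizing target is max_lro_pkt_size when LRO
is enabled and mtu + Ethernet header + CRC otherwise. A rough worked example,
assuming a 2048-byte mbuf data room and the default 128-byte headroom:

    uint32_t max_rx_pktlen      = 9000 + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; /* 9018 */
    uint32_t first_mb_free_size = 2048 - RTE_PKTMBUF_HEADROOM;                  /* 1920 */
    /* 9018 > 1920, so this queue needs DEV_RX_OFFLOAD_SCATTER (several SGEs per packet). */
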
> @@ -1372,7 +1373,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * needed to handle max size packets, replace zero length
> * with the buffer length from the pool.
> */
> - tail_len = max_rx_pkt_len;
> + tail_len = max_rx_pktlen;
> do {
> struct mlx5_eth_rxseg *hw_seg =
> &tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
> @@ -1410,7 +1411,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
>  			"port %u too many SGEs (%u) needed to handle"
>  			" requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - tmpl->rxq.rxseg_n, max_rx_pkt_len,
> + tmpl->rxq.rxseg_n, max_rx_pktlen,
> MLX5_MAX_RXQ_NSEG);
> rte_errno = ENOTSUP;
> goto error;
> @@ -1435,7 +1436,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
>  			" configured and no enough mbuf space(%u) to contain "
>  			"the maximum RX packet length(%u) with head-room(%u)",
> - dev->data->port_id, idx, mb_len, max_rx_pkt_len,
> + dev->data->port_id, idx, mb_len, max_rx_pktlen,
> RTE_PKTMBUF_HEADROOM);
> rte_errno = ENOSPC;
> goto error;
> @@ -1454,7 +1455,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> * following conditions are met:
> * - MPRQ is enabled.
> * - The number of descs is more than the number of strides.
> - * - max_rx_pkt_len plus overhead is less than the max size
> + * - max_rx_pktlen plus overhead is less than the max size
> * of a stride or mprq_stride_size is specified by a user.
> * Need to make sure that there are enough strides to encap
> * the maximum packet size in case mprq_stride_size is set.
> @@ -1478,7 +1479,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> !!(offloads & DEV_RX_OFFLOAD_SCATTER);
> tmpl->rxq.mprq_max_memcpy_len =
> RTE_MIN(first_mb_free_size,
> config->mprq.max_memcpy_len);
> - max_lro_size = RTE_MIN(max_rx_pkt_len,
> + max_lro_size = RTE_MIN(max_rx_pktlen,
> (1u << tmpl->rxq.strd_num_n) *
> (1u << tmpl->rxq.strd_sz_n));
> DRV_LOG(DEBUG,
> @@ -1487,9 +1488,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t
> idx, uint16_t desc,
> dev->data->port_id, idx,
> tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
> } else if (tmpl->rxq.rxseg_n == 1) {
> - MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
> + MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
> tmpl->rxq.sges_n = 0;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> } else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
> unsigned int sges_n;
>
> @@ -1511,13 +1512,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
>  			"port %u too many SGEs (%u) needed to handle"
>  			" requested maximum packet size %u, the maximum"
> " supported are %u", dev->data->port_id,
> - 1 << sges_n, max_rx_pkt_len,
> + 1 << sges_n, max_rx_pktlen,
> 1u << MLX5_MAX_LOG_RQ_SEGS);
> rte_errno = ENOTSUP;
> goto error;
> }
> tmpl->rxq.sges_n = sges_n;
> - max_lro_size = max_rx_pkt_len;
> + max_lro_size = max_rx_pktlen;
> }
> if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
> DRV_LOG(WARNING,
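
Note on the pattern above: the series replaces every read of
rxmode.max_rx_pkt_len in the Rx setup paths with a value recomputed from the
stored MTU. A minimal sketch of the conversion (the helper name below is mine,
not part of the patch):

	static inline uint32_t
	rx_frame_len_from_mtu(struct rte_eth_dev *dev)
	{
		/* max Rx frame = MTU + Ethernet header + CRC */
		return dev->data->mtu + (uint32_t)RTE_ETHER_HDR_LEN +
			RTE_ETHER_CRC_LEN;
	}

Drivers with extra L2 overhead (VLAN tags, Marvell MH, etc.) add their own
constant instead of the plain header + CRC, as the hunks below show.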
> diff --git a/drivers/net/mvneta/mvneta_ethdev.c
> b/drivers/net/mvneta/mvneta_ethdev.c
> index a3ee15020466..520c6fdb1d31 100644
> --- a/drivers/net/mvneta/mvneta_ethdev.c
> +++ b/drivers/net/mvneta/mvneta_ethdev.c
> @@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME)
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_NETA_ETH_HDRS_LEN;
> -
> if (dev->data->dev_conf.txmode.offloads &
> DEV_TX_OFFLOAD_MULTI_SEGS)
> priv->multiseg = 1;
>
> @@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> /* It is OK. New MTU will be set later on mvneta_dev_start */
> return 0;
> diff --git a/drivers/net/mvneta/mvneta_rxtx.c
> b/drivers/net/mvneta/mvneta_rxtx.c
> index dfa7ecc09039..2cd4fb31348b 100644
> --- a/drivers/net/mvneta/mvneta_rxtx.c
> +++ b/drivers/net/mvneta/mvneta_rxtx.c
> @@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> struct mvneta_priv *priv = dev->data->dev_private;
> struct mvneta_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> MVNETA_PKT_EFFEC_OFFS;
>
> - if (frame_size < max_rx_pkt_len) {
> + if (frame_size < max_rx_pktlen) {
> MVNETA_LOG(ERR,
> "Mbuf size must be increased to %u bytes to hold up
> "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
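
The interesting part in the Marvell queue setup is the clamp: when a mempool
element cannot hold a full frame, the code now shrinks dev->data->mtu instead
of shrinking max_rx_pkt_len. Roughly (a sketch using the mvneta names from the
hunk above):

	uint32_t buf_size = rte_pktmbuf_data_room_size(mp);
	uint32_t frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
			      MVNETA_PKT_EFFEC_OFFS;
	uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;

	if (frame_size < max_rx_pktlen)
		/* clamp MTU to what a single mbuf can hold */
		dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;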
> diff --git a/drivers/net/mvpp2/mrvl_ethdev.c
> b/drivers/net/mvpp2/mrvl_ethdev.c
> index 078aefbb8da4..5ce71661c84e 100644
> --- a/drivers/net/mvpp2/mrvl_ethdev.c
> +++ b/drivers/net/mvpp2/mrvl_ethdev.c
> @@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> - if (dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - MRVL_PP2_ETH_HDRS_LEN;
> - if (dev->data->mtu > priv->max_mtu) {
> - MRVL_LOG(ERR, "inherit MTU %u from
> max_rx_pkt_len %u is larger than max_mtu %u\n",
> - dev->data->mtu,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> - priv->max_mtu);
> - return -EINVAL;
> - }
> + if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
> + MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
> + dev->data->dev_conf.rxmode.mtu,
> + priv->max_mtu);
> + return -EINVAL;
> }
>
> if (dev->data->dev_conf.txmode.offloads &
> DEV_TX_OFFLOAD_MULTI_SEGS)
> @@ -595,9 +590,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> return -EINVAL;
> }
>
> - dev->data->mtu = mtu;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
> -
> if (!priv->ppio)
> return 0;
>
> @@ -1994,7 +1986,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> struct mrvl_priv *priv = dev->data->dev_private;
> struct mrvl_rxq *rxq;
> uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
> - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
> int ret, tc, inq;
> uint64_t offloads;
>
> @@ -2009,17 +2001,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t idx, uint16_t desc,
> return -EFAULT;
> }
>
> - frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> - MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
> - if (frame_size < max_rx_pkt_len) {
> + frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
> MRVL_PKT_EFFEC_OFFS;
> + if (frame_size < max_rx_pktlen) {
> MRVL_LOG(WARNING,
> "Mbuf size must be increased to %u bytes to hold up
> "
> "to %u bytes of data.",
> - buf_size + max_rx_pkt_len - frame_size,
> - max_rx_pkt_len);
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> - MRVL_LOG(INFO, "Setting max rx pkt len to %u",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + max_rx_pktlen + buf_size - frame_size,
> + max_rx_pktlen);
> + dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
> + MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
> }
>
> if (dev->data->rx_queues[idx]) {
> diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
> index 1b4bc33593fb..a2031a7a82cc 100644
> --- a/drivers/net/nfp/nfp_common.c
> +++ b/drivers/net/nfp/nfp_common.c
> @@ -370,7 +370,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
> }
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->mtu = rxmode->max_rx_pkt_len;
> + hw->mtu = dev->data->mtu;
>
> if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
> ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
> @@ -963,16 +963,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> }
>
> /* switch to jumbo mode if needed */
> - if ((uint32_t)mtu > RTE_ETHER_MTU)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
> -
> /* writing to configuration space */
> - nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
> + nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
>
> hw->mtu = mtu;
>
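
Since the PMDs no longer write the frame size (or the MTU) back into
dev_conf/dev->data themselves, that bookkeeping is left to the ethdev layer
once the mtu_set callback succeeds. From the application side the result can
simply be read back, e.g. (a sketch, not part of the patch):

	uint16_t mtu;

	if (rte_eth_dev_set_mtu(port_id, 9000) == 0 &&
	    rte_eth_dev_get_mtu(port_id, &mtu) == 0)
		printf("port %u MTU is now %u\n", port_id, mtu);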
> diff --git a/drivers/net/octeontx/octeontx_ethdev.c
> b/drivers/net/octeontx/octeontx_ethdev.c
> index 9f4c0503b4d4..69c3bda12df8 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.c
> +++ b/drivers/net/octeontx/octeontx_ethdev.c
> @@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev
> *eth_dev, uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > OCCTX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> octeontx_log_info("Received pkt beyond maxlen %d will be
> dropped",
> frame_size);
>
> @@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq
> *rxq)
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> /* Setup scatter mode if needed by jumbo */
> - if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (data->mtu > buffsz) {
> nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
> nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
> @@ -593,8 +591,8 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq
> *rxq)
> evdev_priv->rx_offload_flags = nic->rx_offload_flags;
> evdev_priv->tx_offload_flags = nic->tx_offload_flags;
>
> - /* Setup MTU based on max_rx_pkt_len */
> - nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len -
> OCCTX_L2_OVERHEAD;
> + /* Setup MTU */
> + nic->mtu = data->mtu;
>
> return 0;
> }
> @@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
> octeontx_recheck_rx_offloads(rxq);
> }
>
> - /* Setting up the mtu based on max_rx_pkt_len */
> + /* Setting up the mtu */
> ret = octeontx_dev_mtu_set(dev, nic->mtu);
> if (ret) {
> octeontx_log_err("Failed to set default MTU size %d", ret);
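
One observation on the scatter checks: otx2 below compares
mtu + NIX_L2_OVERHEAD against the buffer size, while here the plain MTU is
compared against buffsz. The generic shape of the check is (sketch):

	uint16_t buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;

	/* a single mbuf cannot hold the whole frame -> multi-segment Rx */
	if (data->mtu /* + L2 overhead, where the driver accounts for it */ > buffsz)
		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;

If the asymmetry is intentional it may be worth a short note in the commit log.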
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c
> b/drivers/net/octeontx2/otx2_ethdev.c
> index 75d4cabf2e7c..787e8d890215 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct
> otx2_eth_rxq *rxq)
> mbp_priv = rte_mempool_get_priv(rxq->pool);
> buffsz = mbp_priv->mbuf_data_room_size -
> RTE_PKTMBUF_HEADROOM;
>
> - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
> + if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
> dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
> dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
>
> diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c
> b/drivers/net/octeontx2/otx2_ethdev_ops.c
> index 552e6bd43d2b..cf7804157198 100644
> --- a/drivers/net/octeontx2/otx2_ethdev_ops.c
> +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
> @@ -59,14 +59,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev,
> uint16_t mtu)
> if (rc)
> return rc;
>
> - if (frame_size > NIX_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* Update max_rx_pkt_len */
> - data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> -
> return rc;
> }
>
> @@ -75,7 +72,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> {
> struct rte_eth_dev_data *data = eth_dev->data;
> struct otx2_eth_rxq *rxq;
> - uint16_t mtu;
> int rc;
>
> rxq = data->rx_queues[0];
> @@ -83,10 +79,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
> /* Setup scatter mode if needed by jumbo */
> otx2_nix_enable_mseg_on_jumbo(rxq);
>
> - /* Setup MTU based on max_rx_pkt_len */
> - mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
> -
> - rc = otx2_nix_mtu_set(eth_dev, mtu);
> + rc = otx2_nix_mtu_set(eth_dev, data->mtu);
> if (rc)
> otx2_err("Failed to set default MTU size %d", rc);
>
> diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
> index feec4d10a26e..2619bd2f2a19 100644
> --- a/drivers/net/pfe/pfe_ethdev.c
> +++ b/drivers/net/pfe/pfe_ethdev.c
> @@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
> static int
> pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - int ret;
> struct pfe_eth_priv_s *priv = dev->data->dev_private;
> uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
>
> /*TODO Support VLAN*/
> - ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> - if (!ret)
> - dev->data->mtu = mtu;
> -
> - return ret;
> + return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
> }
>
> /* pfe_eth_enet_addr_byte_mac
> diff --git a/drivers/net/qede/qede_ethdev.c
> b/drivers/net/qede/qede_ethdev.c
> index a4304e0eff44..4b971fd1fe3c 100644
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev
> *eth_dev)
> return -ENOMEM;
> }
>
> - /* If jumbo enabled adjust MTU */
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - eth_dev->data->mtu =
> - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
> -
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
> eth_dev->data->scattered_rx = 1;
>
> @@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_dev_info dev_info = {0};
> struct qede_fastpath *fp;
> - uint32_t max_rx_pkt_len;
> uint32_t frame_size;
> uint16_t bufsz;
> bool restart = false;
> @@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> DP_ERR(edev, "Error during getting ethernet device info\n");
> return rc;
> }
> - max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
> - frame_size = max_rx_pkt_len;
> +
> + frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
> if (mtu < RTE_ETHER_MIN_MTU || frame_size >
> dev_info.max_rx_pktlen) {
> DP_ERR(edev, "MTU %u out of range, %u is maximum
> allowable\n",
> mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
> @@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> fp->rxq->rx_buf_size = rc;
> }
> }
> - if (frame_size > QEDE_ETH_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> dev->data->dev_started = 1;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
> -
> return 0;
> }
>
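
For qede the bounds check keeps the same meaning, only expressed through the
MTU. As a standalone sketch (QEDE_MAX_ETHER_HDR_LEN being the driver's own L2
overhead constant):

	struct rte_eth_dev_info dev_info;
	uint32_t frame_size;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
	if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen)
		return -EINVAL;	/* out of range for this adapter */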
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 35cde561ba59..c2263787b4ec 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t qid,
> struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> struct qede_rx_queue *rxq;
> - uint16_t max_rx_pkt_len;
> + uint16_t max_rx_pktlen;
> uint16_t bufsz;
> int rc;
>
> @@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev,
> uint16_t qid,
> dev->data->rx_queues[qid] = NULL;
> }
>
> - max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
> + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
>
> /* Fix up RX buffer size */
> bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) -
> RTE_PKTMBUF_HEADROOM;
> /* cache align the mbuf size to simplify rx_buf_size calculation */
> bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
> if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
> - (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
> + (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
> if (!dev->data->scattered_rx) {
> DP_INFO(edev, "Forcing scatter-gather mode\n");
> dev->data->scattered_rx = 1;
> }
> }
>
> - rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
> + rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
> if (rc < 0)
> return rc;
>
> diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
> index 2db0d000c3ad..1f55c90b419d 100644
> --- a/drivers/net/sfc/sfc_ethdev.c
> +++ b/drivers/net/sfc/sfc_ethdev.c
> @@ -1066,15 +1066,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
>
> /*
> * The driver does not use it, but other PMDs update jumbo frame
> - * flag and max_rx_pkt_len when MTU is set.
> + * flag when MTU is set.
> */
> if (mtu > RTE_ETHER_MTU) {
> struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> }
>
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
> -
> sfc_adapter_unlock(sa);
>
> sfc_log_init(sa, "done");
> diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
> index adb2b2cb8175..22f74735db08 100644
> --- a/drivers/net/sfc/sfc_port.c
> +++ b/drivers/net/sfc/sfc_port.c
> @@ -383,14 +383,10 @@ sfc_port_configure(struct sfc_adapter *sa)
> {
> const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
> struct sfc_port *port = &sa->port;
> - const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
>
> sfc_log_init(sa, "entry");
>
> - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - port->pdu = rxmode->max_rx_pkt_len;
> - else
> - port->pdu = EFX_MAC_PDU(dev_data->mtu);
> + port->pdu = EFX_MAC_PDU(dev_data->mtu);
>
> return 0;
> }
> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
> index c515de3bf71d..0a8d29277aeb 100644
> --- a/drivers/net/tap/rte_eth_tap.c
> +++ b/drivers/net/tap/rte_eth_tap.c
> @@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t
> mtu)
> {
> struct pmd_internals *pmd = dev->data->dev_private;
> struct ifreq ifr = { .ifr_mtu = mtu };
> - int err = 0;
>
> - err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> - if (!err)
> - dev->data->mtu = mtu;
> -
> - return err;
> + return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
> }
>
> static int
> diff --git a/drivers/net/thunderx/nicvf_ethdev.c
> b/drivers/net/thunderx/nicvf_ethdev.c
> index 561a98fc81a3..c8ae95a61306 100644
> --- a/drivers/net/thunderx/nicvf_ethdev.c
> +++ b/drivers/net/thunderx/nicvf_ethdev.c
> @@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t
> mtu)
> (frame_size + 2 * VLAN_TAG_SIZE > buffsz *
> NIC_HW_MAX_SEGS))
> return -EINVAL;
>
> - if (frame_size > NIC_HW_L2_MAX_LEN)
> + if (mtu > RTE_ETHER_MTU)
> rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
> @@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t
> mtu)
> if (nicvf_mbox_update_hw_max_frs(nic, mtu))
> return -EINVAL;
>
> - /* Update max_rx_pkt_len */
> - rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
> nic->mtu = mtu;
>
> for (i = 0; i < nic->sqs_count; i++)
> @@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
> }
>
> /* Setup scatter mode if needed by jumbo */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * VLAN_TAG_SIZE > buffsz)
> + if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 *
> VLAN_TAG_SIZE > buffsz)
> dev->data->scattered_rx = 1;
> if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
> dev->data->scattered_rx = 1;
>
> - /* Setup MTU based on max_rx_pkt_len or default */
> - mtu = dev->data->dev_conf.rxmode.offloads &
> DEV_RX_OFFLOAD_JUMBO_FRAME ?
> - dev->data->dev_conf.rxmode.max_rx_pkt_len
> - - RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
> + /* Setup MTU */
> + mtu = dev->data->mtu;
>
> if (nicvf_dev_set_mtu(dev, mtu)) {
> PMD_INIT_LOG(ERR, "Failed to set default mtu size");
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c
> b/drivers/net/txgbe/txgbe_ethdev.c
> index 006399468841..269de9f848dd 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -3486,8 +3486,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev,
> uint16_t mtu)
> return -EINVAL;
> }
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
> + /* switch to jumbo mode if needed */
> + if (mtu > RTE_ETHER_MTU)
> + dev->data->dev_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + else
> + dev->data->dev_conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> if (hw->mode)
> wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> diff --git a/drivers/net/txgbe/txgbe_ethdev.h
> b/drivers/net/txgbe/txgbe_ethdev.h
> index 3021933965c8..44cfcd76bca4 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.h
> +++ b/drivers/net/txgbe/txgbe_ethdev.h
> @@ -55,6 +55,10 @@
> #define TXGBE_5TUPLE_MAX_PRI 7
> #define TXGBE_5TUPLE_MIN_PRI 1
>
> +
> +/* The overhead from MTU to max frame size. */
> +#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
> +
> #define TXGBE_RSS_OFFLOAD_ALL ( \
> ETH_RSS_IPV4 | \
> ETH_RSS_NONFRAG_IPV4_TCP | \
> diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c
> b/drivers/net/txgbe/txgbe_ethdev_vf.c
> index 896da8a88770..43dc0ed39b75 100644
> --- a/drivers/net/txgbe/txgbe_ethdev_vf.c
> +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
> @@ -1128,8 +1128,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev,
> uint16_t mtu)
> if (txgbevf_rlpml_set_vf(hw, max_frame))
> return -EINVAL;
>
> - /* update max frame size */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
> return 0;
> }
>
> diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
> index 1a261287d1bd..c6cd3803c434 100644
> --- a/drivers/net/txgbe/txgbe_rxtx.c
> +++ b/drivers/net/txgbe/txgbe_rxtx.c
> @@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> /*
> * Configure jumbo frame support, if any.
> */
> - if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
> - } else {
> - wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> - TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
> - }
> + wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
> + TXGBE_FRMSZ_MAX(dev->data->mtu +
> TXGBE_ETH_OVERHEAD));
>
> /*
> * If loopback mode is configured, set LPBK bit.
> @@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
> wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
>
> /* It adds dual VLAN length for supporting dual VLAN */
> - if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> - 2 * TXGBE_VLAN_TAG_SIZE >
> buf_size)
> + if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> + 2 * TXGBE_VLAN_TAG_SIZE > buf_size)
> dev->data->scattered_rx = 1;
> if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> @@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
> * VF packets received can work in all cases.
> */
> if (txgbevf_rlpml_set_vf(hw,
> - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) {
> + (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) {
> PMD_INIT_LOG(ERR, "Set max packet length to %d failed.",
> - dev->data->dev_conf.rxmode.max_rx_pkt_len);
> + dev->data->mtu + TXGBE_ETH_OVERHEAD);
> return -EINVAL;
> }
>
> @@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
>
> if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
> /* It adds dual VLAN length for supporting dual VLAN */
> - (rxmode->max_rx_pkt_len +
> + (dev->data->mtu + TXGBE_ETH_OVERHEAD +
> 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
> if (!dev->data->scattered_rx)
> PMD_INIT_LOG(DEBUG, "forcing scatter
> mode");
> diff --git a/drivers/net/virtio/virtio_ethdev.c
> b/drivers/net/virtio/virtio_ethdev.c
> index b60eeb24abe7..5d341a3e23bb 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -930,7 +930,6 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> }
>
> hw->max_rx_pkt_len = frame_size;
> - dev->data->dev_conf.rxmode.max_rx_pkt_len = hw->max_rx_pkt_len;
>
> return 0;
> }
> @@ -2116,14 +2115,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
> return ret;
> }
>
> - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
> - (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len))
> + if (rxmode->mtu > hw->max_mtu)
> req_features &= ~(1ULL << VIRTIO_NET_F_MTU);
>
> - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
> - hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
> - else
> - hw->max_rx_pkt_len = ether_hdr_len + dev->data->mtu;
> + hw->max_rx_pkt_len = ether_hdr_len + rxmode->mtu;
>
> if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
> DEV_RX_OFFLOAD_TCP_CKSUM))
> diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
> index adbd40808396..68e3c13730ad 100644
> --- a/examples/bbdev_app/main.c
> +++ b/examples/bbdev_app/main.c
> @@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
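
For the example applications, dropping .max_rx_pkt_len = RTE_ETHER_MAX_LEN
from the static configuration means the port just runs with the default
1500-byte MTU; my understanding (to be confirmed against patch 1/6) is that a
zero rxmode.mtu is taken as the standard Ethernet MTU by the ethdev layer, so
the structs reduce to:

	static const struct rte_eth_conf port_conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_NONE,
			.split_hdr_size = 0,
			/* rxmode.mtu left at 0 -> default MTU */
		},
		.txmode = {
			.mq_mode = ETH_MQ_TX_NONE,
		},
	};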
> diff --git a/examples/bond/main.c b/examples/bond/main.c
> index a63ca70a7f06..25ca459be57b 100644
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -116,7 +116,6 @@ static struct rte_mempool *mbuf_pool;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> diff --git a/examples/distributor/main.c b/examples/distributor/main.c
> index d0f40a1fb4bc..8c4a8feec0c2 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -81,7 +81,6 @@ struct app_stats prev_app_stats;
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c
> b/examples/eventdev_pipeline/pipeline_worker_generic.c
> index 5ed0dc73ec60..e26be8edf28f 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> @@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c
> b/examples/eventdev_pipeline/pipeline_worker_tx.c
> index ab8c6d6a0dad..476b147bdfcc 100644
> --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> @@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> static const struct rte_eth_conf port_conf_default = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/flow_classify/flow_classify.c
> b/examples/flow_classify/flow_classify.c
> index 65c1d85cf2fb..8a43f6ac0f92 100644
> --- a/examples/flow_classify/flow_classify.c
> +++ b/examples/flow_classify/flow_classify.c
> @@ -59,14 +59,6 @@ static struct{
> } parm_config;
> const char cb_port_delim[] = ":";
>
> -/* Ethernet ports configured with default settings using struct. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of Ethernet ports. */
> -
> /* Creation of flow classifier object. 8< */
> struct flow_classifier {
> struct rte_flow_classifier *cls;
> @@ -200,7 +192,7 @@ static struct rte_flow_attr attr;
> static inline int
> port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> struct rte_ether_addr addr;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> @@ -211,6 +203,8 @@ port_init(uint8_t port, struct rte_mempool
> *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
> index b3977a8be561..fdc66368dce9 100644
> --- a/examples/ioat/ioatfwd.c
> +++ b/examples/ioat/ioatfwd.c
> @@ -820,7 +820,6 @@ port_init(uint16_t portid, struct rte_mempool
> *mbuf_pool, uint16_t nb_queues)
> static const struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN
> },
> .rx_adv_conf = {
> .rss_conf = {
> diff --git a/examples/ip_fragmentation/main.c
> b/examples/ip_fragmentation/main.c
> index f24536972084..12062a785dc6 100644
> --- a/examples/ip_fragmentation/main.c
> +++ b/examples/ip_fragmentation/main.c
> @@ -146,7 +146,8 @@ struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_SCATTER |
> @@ -918,9 +919,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> @@ -963,8 +964,7 @@ main(int argc, char **argv)
> }
>
> /* set the mtu to the maximum received packet size */
> - ret = rte_eth_dev_set_mtu(portid,
> - local_port_conf.rxmode.max_rx_pkt_len -
> MTU_OVERHEAD);
> + ret = rte_eth_dev_set_mtu(portid,
> local_port_conf.rxmode.mtu);
> if (ret < 0) {
> printf("\n");
> rte_exit(EXIT_FAILURE, "Set MTU failed: "
> diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
> index 16bcffe356bc..9ba02e687adb 100644
> --- a/examples/ip_pipeline/link.c
> +++ b/examples/ip_pipeline/link.c
> @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN),
> /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ip_reassembly/main.c
> b/examples/ip_reassembly/main.c
> index 8645ac790be4..e5c7d46d2caa 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -162,7 +162,8 @@ static struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
> DEV_RX_OFFLOAD_JUMBO_FRAME),
> @@ -882,7 +883,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t
> lcore, uint32_t queue)
>
> /* mbufs stored in the fragment table. 8< */
> nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) *
> MAX_FRAG_NUM;
> - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) /
> BUF_SIZE;
> + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
> + + BUF_SIZE - 1) / BUF_SIZE;
> nb_mbuf *= 2; /* ipv4 and ipv6 */
> nb_mbuf += nb_rxd + nb_txd;
>
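
The mbuf-count arithmetic keeps its meaning, only the frame length is rebuilt
from the MTU. Assuming the example's usual JUMBO_FRAME_MAX_SIZE of 0x2600
(9728) and a BUF_SIZE of 2048 bytes, the per-packet factor is unchanged:

	/* frame = mtu + hdr + crc = (9728 - 14 - 4) + 14 + 4 = 9728 bytes
	 * mbufs per packet = (9728 + 2048 - 1) / 2048 = 5
	 */
	nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN
		+ BUF_SIZE - 1) / BUF_SIZE;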
> @@ -1054,9 +1056,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-
> secgw/ipsec-secgw.c
> index 7ad94cb8228b..d032a47d1c3b 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -2163,7 +2162,6 @@ cryptodevs_init(uint16_t req_queue_num)
> static void
> port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
> {
> - uint32_t frame_size;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_txconf *txconf;
> uint16_t nb_tx_queue, nb_rx_queue;
> @@ -2211,10 +2209,9 @@ port_init(uint16_t portid, uint64_t
> req_rx_offloads, uint64_t req_tx_offloads)
> printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
> nb_rx_queue, nb_tx_queue);
>
> - frame_size = MTU_TO_FRAMELEN(mtu_size);
> - if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
> + if (mtu_size > RTE_ETHER_MTU)
> local_port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - local_port_conf.rxmode.max_rx_pkt_len = frame_size;
> + local_port_conf.rxmode.mtu = mtu_size;
>
> if (multi_seg_required()) {
> local_port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_SCATTER;
> diff --git a/examples/ipv4_multicast/main.c
> b/examples/ipv4_multicast/main.c
> index cc527d7f6b38..b3993685ec92 100644
> --- a/examples/ipv4_multicast/main.c
> +++ b/examples/ipv4_multicast/main.c
> @@ -110,7 +110,8 @@ static struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
> + .mtu = JUMBO_FRAME_MAX_SIZE - RTE_ETHER_HDR_LEN -
> + RTE_ETHER_CRC_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
> },
> @@ -715,9 +716,9 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
> - dev_info.max_rx_pktlen,
> - local_port_conf.rxmode.max_rx_pkt_len);
> + local_port_conf.rxmode.mtu = RTE_MIN(
> + dev_info.max_mtu,
> + local_port_conf.rxmode.mtu);
>
> /* get the lcore_id for this port */
> while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
> diff --git a/examples/kni/main.c b/examples/kni/main.c
> index beabb3c848aa..c10814c6a94f 100644
> --- a/examples/kni/main.c
> +++ b/examples/kni/main.c
> @@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int
> new_mtu)
>
> memcpy(&conf, &port_conf, sizeof(conf));
> /* Set new MTU */
> - if (new_mtu > RTE_ETHER_MAX_LEN)
> + if (new_mtu > RTE_ETHER_MTU)
> conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> else
> conf.rxmode.offloads &=
> ~DEV_RX_OFFLOAD_JUMBO_FRAME;
>
> - /* mtu + length of header + length of FCS = max pkt length */
> - conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
> - KNI_ENET_FCS_SIZE;
> + conf.rxmode.mtu = new_mtu;
> ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
> if (ret < 0) {
> RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
> diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
> index 9b3e324efb23..d9cf00c9dfc7 100644
> --- a/examples/l2fwd-cat/l2fwd-cat.c
> +++ b/examples/l2fwd-cat/l2fwd-cat.c
> @@ -19,10 +19,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
> -};
> -
> /* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */
>
> /*
> @@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool
> *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> /* Configure the Ethernet device. */
> retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
> if (retval != 0)
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 66d1491bf76d..f9438176cbb1 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -217,7 +217,6 @@ struct lcore_queue_conf
> lcore_queue_conf[RTE_MAX_LCORE];
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-
> event/l2fwd_common.c
> index 19f32809aa9d..9040be5ed9b6 100644
> --- a/examples/l2fwd-event/l2fwd_common.c
> +++ b/examples/l2fwd-event/l2fwd_common.c
> @@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
> uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
> struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
> index a1f457b564b6..7abb612ee6a4 100644
> --- a/examples/l3fwd-acl/main.c
> +++ b/examples/l3fwd-acl/main.c
> @@ -125,7 +125,6 @@ static uint16_t nb_lcore_params =
> sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -141,6 +140,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> /* ethernet addresses of ports */
> @@ -201,8 +202,8 @@ enum {
> OPT_CONFIG_NUM = 256,
> #define OPT_NONUMA "no-numa"
> OPT_NONUMA_NUM,
> -#define OPT_ENBJMO "enable-jumbo"
> - OPT_ENBJMO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_RULE_IPV4 "rule_ipv4"
> OPT_RULE_IPV4_NUM,
> #define OPT_RULE_IPV6 "rule_ipv6"
> @@ -1619,26 +1620,21 @@ print_usage(const char *prgname)
>
> usage_acl_alg(alg, sizeof(alg));
> printf("%s [EAL options] -- -p PORTMASK -P"
> - "--"OPT_RULE_IPV4"=FILE"
> - "--"OPT_RULE_IPV6"=FILE"
> + " --"OPT_RULE_IPV4"=FILE"
> + " --"OPT_RULE_IPV6"=FILE"
> " [--"OPT_CONFIG" (port,queue,lcore)[,(port,queue,lcore]]"
> - " [--"OPT_ENBJMO" [--max-pkt-len PKTLEN]]\n"
> + " [--"OPT_MAX_PKT_LEN" PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to
> configure\n"
> - " -P : enable promiscuous mode\n"
> - " --"OPT_CONFIG": (port,queue,lcore): "
> - "rx queues configuration\n"
> + " -P: enable promiscuous mode\n"
> + " --"OPT_CONFIG" (port,queue,lcore): rx queues
> configuration\n"
> " --"OPT_NONUMA": optional, disable numa awareness\n"
> - " --"OPT_ENBJMO": enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> - " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries "
> - "file. "
> + " --"OPT_MAX_PKT_LEN" PKTLEN: maximum packet length in
> decimal (64-9600)\n"
> + " --"OPT_RULE_IPV4"=FILE: specify the ipv4 rules entries file.
> "
> "Each rule occupy one line. "
> "2 kinds of rules are supported. "
> "One is ACL entry at while line leads with character '%c', "
> - "another is route entry at while line leads with "
> - "character '%c'.\n"
> - " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules "
> - "entries file.\n"
> + "another is route entry at while line leads with character
> '%c'.\n"
> + " --"OPT_RULE_IPV6"=FILE: specify the ipv6 rules entries
> file.\n"
> " --"OPT_ALG": ACL classify method to use, one of: %s\n",
> prgname, ACL_LEAD_CHAR, ROUTE_LEAD_CHAR, alg);
> }
> @@ -1758,14 +1754,14 @@ parse_args(int argc, char **argv)
> int option_index;
> char *prgname = argv[0];
> static struct option lgopts[] = {
> - {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> - {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> - {OPT_ENBJMO, 0, NULL, OPT_ENBJMO_NUM },
> - {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> - {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> - {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> - {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> - {NULL, 0, 0, 0 }
> + {OPT_CONFIG, 1, NULL, OPT_CONFIG_NUM },
> + {OPT_NONUMA, 0, NULL, OPT_NONUMA_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> + {OPT_RULE_IPV4, 1, NULL, OPT_RULE_IPV4_NUM },
> + {OPT_RULE_IPV6, 1, NULL, OPT_RULE_IPV6_NUM },
> + {OPT_ALG, 1, NULL, OPT_ALG_NUM },
> + {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> + {NULL, 0, 0, 0 }
> };
>
> argvopt = argv;
> @@ -1804,43 +1800,11 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case OPT_ENBJMO_NUM:
> - {
> - struct option lenopts = {
> - "max-pkt-len",
> - required_argument,
> - 0,
> - 0
> - };
> -
> - printf("jumbo frame is enabled\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, then use the
> - * default value RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret > MAX_JUMBO_PKT_LEN)) {
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length "
> - "to %u\n",
> - (unsigned int)
> - port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> case OPT_RULE_IPV4_NUM:
> parm_config.rule_ipv4_name = optarg;
> break;
> @@ -2007,6 +1971,43 @@ set_default_dest_mac(void)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len >
> MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
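
A short usage note for the two helpers above (the same pair is duplicated in
l3fwd-graph, l3fwd-power, l3fwd and the performance-thread example further
down): the user-supplied --max-pkt-len is converted to an MTU using the
per-device overhead reported by rte_eth_dev_info_get(). For instance, a NIC
reporting max_rx_pktlen = 9618 and max_mtu = 9600 gives an 18-byte overhead,
so --max-pkt-len 9000 becomes rxmode.mtu = 8982. In the port init loop this
looks like (sketch, mirroring the call added later in this file):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf local_port_conf = port_conf;

	if (rte_eth_dev_info_get(portid, &dev_info) != 0)
		rte_exit(EXIT_FAILURE, "Cannot get device info\n");

	if (config_port_max_pkt_len(&local_port_conf, &dev_info) != 0)
		rte_exit(EXIT_FAILURE,
			 "Invalid max packet length: %u (port %u)\n",
			 max_pkt_len, portid);

Given the block is copy-pasted into five examples, a shared helper (in ethdev
or a common example header) might be worth considering as a follow-up.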
> int
> main(int argc, char **argv)
> {
> @@ -2080,6 +2081,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> index a0de8ca9b42d..b431b9ff5f3c 100644
> --- a/examples/l3fwd-graph/main.c
> +++ b/examples/l3fwd-graph/main.c
> @@ -112,7 +112,6 @@ static uint16_t nb_lcore_params =
> RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .rx_adv_conf = {
> @@ -126,6 +125,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool
> *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
>
> static struct rte_node_ethdev_config ethdev_conf[RTE_MAX_ETHPORTS];
> @@ -259,7 +260,7 @@ print_usage(const char *prgname)
> " [-P]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--per-port-pool]\n\n"
>
> @@ -268,9 +269,7 @@ print_usage(const char *prgname)
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet
> destination for "
> "port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --per-port-pool: Use separate buffer pool per port\n\n",
> prgname);
> @@ -404,7 +403,7 @@ static const char short_options[] = "p:" /* portmask
> */
> #define CMD_LINE_OPT_CONFIG "config"
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> enum {
> /* Long options mapped to a short option */
> @@ -416,7 +415,7 @@ enum {
> CMD_LINE_OPT_CONFIG_NUM,
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> };
>
> @@ -424,7 +423,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0,
> CMD_LINE_OPT_NO_NUMA_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0,
> CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0,
> CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> {NULL, 0, 0, 0},
> };
> @@ -490,28 +489,8 @@ parse_args(int argc, char **argv)
> numa_on = 0;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr, "Invalid maximum "
> - "packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM: {
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> }
>
> @@ -722,6 +701,43 @@ graph_main_loop(void *conf)
> }
> /* >8 End of main processing loop. */
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len >
> MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -807,6 +823,13 @@ main(int argc, char **argv)
> nb_rx_queue, n_tx_queue);
>
> rte_eth_dev_info_get(portid, &dev_info);
> +
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index aa7b8db44ae8..e58561327c48 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -251,7 +251,6 @@ uint16_t nb_lcore_params =
> RTE_DIM(lcore_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -266,6 +265,8 @@ static struct rte_eth_conf port_conf = {
> }
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool * pktmbuf_pool[NB_SOCKETS];
>
>
> @@ -1601,16 +1602,15 @@ print_usage(const char *prgname)
> " [--config (port,queue,lcore)[,(port,queue,lcore]]"
> " [--high-perf-cores CORELIST"
> " [--perf-config
> (port,queue,hi_perf,lcore_index)[,(port,queue,hi_perf,lcore_index]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to
> configure\n"
> - " -P : enable promiscuous mode\n"
> + " -P: enable promiscuous mode\n"
> " --config (port,queue,lcore): rx queues configuration\n"
> " --high-perf-cores CORELIST: list of high performance
> cores\n"
> " --perf-config: similar as config, cores specified as indices"
> " for bins containing high or regular performance cores\n"
> " --no-numa: optional, disable numa awareness\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --parse-ptype: parse packet type by software\n"
> " --legacy: use legacy interrupt-based scaling\n"
> " --empty-poll: enable empty poll detection"
> @@ -1795,6 +1795,7 @@ parse_ep_config(const char *q_arg)
> #define CMD_LINE_OPT_INTERRUPT_ONLY "interrupt-only"
> #define CMD_LINE_OPT_TELEMETRY "telemetry"
> #define CMD_LINE_OPT_PMD_MGMT "pmd-mgmt"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
>
> /* Parse the argument given in the command line of the application */
> static int
> @@ -1810,7 +1811,7 @@ parse_args(int argc, char **argv)
> {"perf-config", 1, 0, 0},
> {"high-perf-cores", 1, 0, 0},
> {"no-numa", 0, 0, 0},
> - {"enable-jumbo", 0, 0, 0},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0, 0},
> {CMD_LINE_OPT_EMPTY_POLL, 1, 0, 0},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, 0},
> {CMD_LINE_OPT_LEGACY, 0, 0, 0},
> @@ -1954,36 +1955,10 @@ parse_args(int argc, char **argv)
> }
>
> if (!strncmp(lgopts[option_index].name,
> - "enable-jumbo", 12)) {
> - struct option lenopts =
> - {"max-pkt-len", required_argument, \
> - 0, 0};
> -
> - printf("jumbo frame is enabled \n");
> - port_conf.rxmode.offloads |=
> -
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> -
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /**
> - * if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (0 == getopt_long(argc, argvopt, "",
> - &lenopts, &option_index)) {
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) ||
> - (ret >
> MAX_JUMBO_PKT_LEN)){
> - printf("invalid packet "
> - "length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len =
> ret;
> - }
> - printf("set jumbo frame "
> - "max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + CMD_LINE_OPT_MAX_PKT_LEN,
> +
> sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
> + printf("Custom frame size is configured\n");
> + max_pkt_len = parse_max_pkt_len(optarg);
> }
>
> if (!strncmp(lgopts[option_index].name,
> @@ -2505,6 +2480,43 @@ mode_to_str(enum appmode mode)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len >
> MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> /* Power library initialized in the main routine. 8< */
> int
> main(int argc, char **argv)
> @@ -2622,6 +2634,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 00ac267af1dd..cb9bc7ad6002 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -121,7 +121,6 @@ static uint16_t nb_lcore_params =
> sizeof(lcore_params_array_default) /
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -136,6 +135,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool
> *pktmbuf_pool[RTE_MAX_ETHPORTS][NB_SOCKETS];
> static uint8_t lkp_per_socket[NB_SOCKETS];
>
> @@ -326,7 +327,7 @@ print_usage(const char *prgname)
> " [--lookup]"
> " --config (port,queue,lcore)[,(port,queue,lcore)]"
> " [--eth-dest=X,MM:MM:MM:MM:MM:MM]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]"
> + " [--max-pkt-len PKTLEN]"
> " [--no-numa]"
> " [--hash-entry-num]"
> " [--ipv6]"
> @@ -344,9 +345,7 @@ print_usage(const char *prgname)
> " Accepted: em (Exact Match), lpm (Longest Prefix
> Match), fib (Forwarding Information Base)\n"
> " --config (port,queue,lcore): Rx queue configuration\n"
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet
> destination for port X\n"
> - " --enable-jumbo: Enable jumbo frames\n"
> - " --max-pkt-len: Under the premise of enabling jumbo,\n"
> - " maximum packet length in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal
> (64-9600)\n"
> " --no-numa: Disable numa awareness\n"
> " --hash-entry-num: Specify the hash entry number in
> hexadecimal to be setup\n"
> " --ipv6: Set if running ipv6 packets\n"
> @@ -566,7 +565,7 @@ static const char short_options[] =
> #define CMD_LINE_OPT_ETH_DEST "eth-dest"
> #define CMD_LINE_OPT_NO_NUMA "no-numa"
> #define CMD_LINE_OPT_IPV6 "ipv6"
> -#define CMD_LINE_OPT_ENABLE_JUMBO "enable-jumbo"
> +#define CMD_LINE_OPT_MAX_PKT_LEN "max-pkt-len"
> #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
> #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
> #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
> @@ -584,7 +583,7 @@ enum {
> CMD_LINE_OPT_ETH_DEST_NUM,
> CMD_LINE_OPT_NO_NUMA_NUM,
> CMD_LINE_OPT_IPV6_NUM,
> - CMD_LINE_OPT_ENABLE_JUMBO_NUM,
> + CMD_LINE_OPT_MAX_PKT_LEN_NUM,
> CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
> CMD_LINE_OPT_PARSE_PTYPE_NUM,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL,
> @@ -599,7 +598,7 @@ static const struct option lgopts[] = {
> {CMD_LINE_OPT_ETH_DEST, 1, 0, CMD_LINE_OPT_ETH_DEST_NUM},
> {CMD_LINE_OPT_NO_NUMA, 0, 0,
> CMD_LINE_OPT_NO_NUMA_NUM},
> {CMD_LINE_OPT_IPV6, 0, 0, CMD_LINE_OPT_IPV6_NUM},
> - {CMD_LINE_OPT_ENABLE_JUMBO, 0, 0,
> CMD_LINE_OPT_ENABLE_JUMBO_NUM},
> + {CMD_LINE_OPT_MAX_PKT_LEN, 1, 0,
> CMD_LINE_OPT_MAX_PKT_LEN_NUM},
> {CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0,
> CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
> {CMD_LINE_OPT_PARSE_PTYPE, 0, 0,
> CMD_LINE_OPT_PARSE_PTYPE_NUM},
> {CMD_LINE_OPT_PER_PORT_POOL, 0, 0,
> CMD_LINE_OPT_PARSE_PER_PORT_POOL},
> @@ -698,31 +697,9 @@ parse_args(int argc, char **argv)
> ipv6 = 1;
> break;
>
> - case CMD_LINE_OPT_ENABLE_JUMBO_NUM: {
> - const struct option lenopts = {
> - "max-pkt-len", required_argument, 0, 0
> - };
> -
> - port_conf.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /*
> - * if no max-pkt-len set, use the default
> - * value RTE_ETHER_MAX_LEN.
> - */
> - if (getopt_long(argc, argvopt, "",
> - &lenopts, &option_index) == 0) {
> - ret = parse_max_pkt_len(optarg);
> - if (ret < 64 || ret > MAX_JUMBO_PKT_LEN) {
> - fprintf(stderr,
> - "invalid maximum packet
> length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> + case CMD_LINE_OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
>
> case CMD_LINE_OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -981,6 +958,43 @@ prepare_ptype_parser(uint16_t portid, uint16_t
> queueid)
> return 0;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN +
> RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len >
> MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> static void
> l3fwd_poll_resource_setup(void)
> {
> @@ -1035,6 +1049,12 @@ l3fwd_poll_resource_setup(void)
> "Error during getting device (port %u)
> info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/main.c
> b/examples/performance-thread/l3fwd-thread/main.c
> index 2f593abf263d..b6cddc8c7b51 100644
> --- a/examples/performance-thread/l3fwd-thread/main.c
> +++ b/examples/performance-thread/l3fwd-thread/main.c
> @@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> @@ -323,6 +322,8 @@ static struct rte_eth_conf port_conf = {
> },
> };
>
> +static uint16_t max_pkt_len;
> +
> static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];
>
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> @@ -2643,7 +2644,7 @@ print_usage(const char *prgname)
> printf("%s [EAL options] -- -p PORTMASK -P"
> " [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]"
> " [--tx (lcore,thread)[,(lcore,thread]]"
> - " [--enable-jumbo [--max-pkt-len PKTLEN]]\n"
> + " [--max-pkt-len PKTLEN]"
> " [--parse-ptype]\n\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> " -P : enable promiscuous mode\n"
> @@ -2653,8 +2654,7 @@ print_usage(const char *prgname)
> " --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\n"
> " --no-numa: optional, disable numa awareness\n"
> " --ipv6: optional, specify it if running ipv6 packets\n"
> - " --enable-jumbo: enable jumbo frame"
> - " which max packet len is PKTLEN in decimal (64-9600)\n"
> + " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n"
> " --hash-entry-num: specify the hash entry number in hexadecimal to be setup\n"
> " --no-lthreads: turn off lthread model\n"
> " --parse-ptype: set to use software to analyze packet type\n\n",
> @@ -2877,8 +2877,8 @@ enum {
> OPT_NO_NUMA_NUM,
> #define OPT_IPV6 "ipv6"
> OPT_IPV6_NUM,
> -#define OPT_ENABLE_JUMBO "enable-jumbo"
> - OPT_ENABLE_JUMBO_NUM,
> +#define OPT_MAX_PKT_LEN "max-pkt-len"
> + OPT_MAX_PKT_LEN_NUM,
> #define OPT_HASH_ENTRY_NUM "hash-entry-num"
> OPT_HASH_ENTRY_NUM_NUM,
> #define OPT_NO_LTHREADS "no-lthreads"
> @@ -2902,7 +2902,7 @@ parse_args(int argc, char **argv)
> {OPT_ETH_DEST, 1, NULL, OPT_ETH_DEST_NUM },
> {OPT_NO_NUMA, 0, NULL, OPT_NO_NUMA_NUM },
> {OPT_IPV6, 0, NULL, OPT_IPV6_NUM },
> - {OPT_ENABLE_JUMBO, 0, NULL, OPT_ENABLE_JUMBO_NUM },
> + {OPT_MAX_PKT_LEN, 1, NULL, OPT_MAX_PKT_LEN_NUM },
> {OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },
> {OPT_NO_LTHREADS, 0, NULL, OPT_NO_LTHREADS_NUM },
> {OPT_PARSE_PTYPE, 0, NULL, OPT_PARSE_PTYPE_NUM },
> @@ -2981,35 +2981,10 @@ parse_args(int argc, char **argv)
> parse_ptype_on = 1;
> break;
>
> - case OPT_ENABLE_JUMBO_NUM:
> - {
> - struct option lenopts = {"max-pkt-len",
> - required_argument, 0, 0};
> -
> - printf("jumbo frame is enabled - disabling simple TX path\n");
> - port_conf.rxmode.offloads |=
> - DEV_RX_OFFLOAD_JUMBO_FRAME;
> - port_conf.txmode.offloads |=
> - DEV_TX_OFFLOAD_MULTI_SEGS;
> -
> - /* if no max-pkt-len set, use the default value
> - * RTE_ETHER_MAX_LEN
> - */
> - if (getopt_long(argc, argvopt, "", &lenopts,
> - &option_index) == 0) {
> -
> - ret = parse_max_pkt_len(optarg);
> - if ((ret < 64) || (ret > MAX_JUMBO_PKT_LEN)) {
> - printf("invalid packet length\n");
> - print_usage(prgname);
> - return -1;
> - }
> - port_conf.rxmode.max_rx_pkt_len = ret;
> - }
> - printf("set jumbo frame max packet length to %u\n",
> - (unsigned int)port_conf.rxmode.max_rx_pkt_len);
> + case OPT_MAX_PKT_LEN_NUM:
> + max_pkt_len = parse_max_pkt_len(optarg);
> break;
> - }
> +
> #if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
> case OPT_HASH_ENTRY_NUM_NUM:
> ret = parse_hash_entry_number(optarg);
> @@ -3489,6 +3464,43 @@ check_all_ports_link_status(uint32_t port_mask)
> }
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> +static int
> +config_port_max_pkt_len(struct rte_eth_conf *conf,
> + struct rte_eth_dev_info *dev_info)
> +{
> + uint16_t overhead_len;
> +
> + if (max_pkt_len == 0)
> + return 0;
> +
> + if (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)
> + return -1;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,
> + dev_info->max_mtu);
> + conf->rxmode.mtu = max_pkt_len - overhead_len;
> +
> + if (conf->rxmode.mtu > RTE_ETHER_MTU) {
> + conf->txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
> + conf->rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> + }
> +
> + return 0;
> +}
> +
> int
> main(int argc, char **argv)
> {
> @@ -3577,6 +3589,12 @@ main(int argc, char **argv)
> "Error during getting device (port %u) info: %s\n",
> portid, strerror(-ret));
>
> + ret = config_port_max_pkt_len(&local_port_conf, &dev_info);
> + if (ret != 0)
> + rte_exit(EXIT_FAILURE,
> + "Invalid max packet length: %u (port %u)\n",
> + max_pkt_len, portid);
> +
> if (dev_info.tx_offload_capa &
> DEV_TX_OFFLOAD_MBUF_FAST_FREE)
> local_port_conf.txmode.offloads |=
> DEV_TX_OFFLOAD_MBUF_FAST_FREE;
> diff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh
> index f0b6e271a5f3..3dd33407ea41 100755
> --- a/examples/performance-thread/l3fwd-thread/test.sh
> +++ b/examples/performance-thread/l3fwd-thread/test.sh
> @@ -11,7 +11,7 @@ case "$1" in
> echo "1.1 1 L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -23,7 +23,7 @@ case "$1" in
> echo "1.2 1 L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -34,7 +34,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -45,7 +45,7 @@ case "$1" in
> echo "1.3 1 L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -61,7 +61,7 @@ case "$1" in
> echo "2.1 N L-core per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="2,(0-1)@0" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(1,0)" \
> --stat-lcore 2 \
> @@ -73,7 +73,7 @@ case "$1" in
> echo "2.2 N L-core per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 --lcores="(0-3)@0,4" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,1,1)" \
> --tx="(2,0)(3,1)" \
> --stat-lcore 4 \
> @@ -84,7 +84,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=8)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-7)@0,8" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)" \
> --tx="(4,0)(5,1)(6,2)(7,3)" \
> --stat-lcore 8 \
> @@ -95,7 +95,7 @@ case "$1" in
> echo "2.3 N L-core per pcore (N=16)"
>
> ./build/l3fwd-thread -c 3ffff -n 2 --lcores="(0-15)@0,16" -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)" \
> --tx="(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)" \
> --stat-lcore 16 \
> @@ -111,7 +111,7 @@ case "$1" in
> echo "3.1 N L-threads per pcore (N=2)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,0)" \
> --tx="(0,0)" \
> --stat-lcore 1
> @@ -121,7 +121,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=4)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(1,0,0,1)" \
> --tx="(0,0)(0,1)" \
> --stat-lcore 1
> @@ -131,7 +131,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=8)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)" \
> --tx="(0,0)(0,1)(0,2)(0,3)" \
> --stat-lcore 1
> @@ -141,7 +141,7 @@ case "$1" in
> echo "3.2 N L-threads per pcore (N=16)"
>
> ./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \
> - --enable-jumbo --max-pkt-len 1500 \
> + --max-pkt-len 1500 \
> --rx="(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)" \
> --tx="(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)" \
> --stat-lcore 1
> diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
> index 467cda5a6dac..4f20dfc4be06 100644
> --- a/examples/pipeline/obj.c
> +++ b/examples/pipeline/obj.c
> @@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
> .link_speeds = 0,
> .rxmode = {
> .mq_mode = ETH_MQ_RX_NONE,
> - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
> + .mtu = 9000 - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN), /* Jumbo frame MTU */
> .split_hdr_size = 0, /* Header split buffer size */
> },
> .rx_adv_conf = {
> diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
> index 4f32ade7fbf7..3b6c6c297f43 100644
> --- a/examples/ptpclient/ptpclient.c
> +++ b/examples/ptpclient/ptpclient.c
> @@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
> uint8_t ptp_enabled_port_nb;
> static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static const struct rte_ether_addr ether_multicast = {
> .addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
> };
> @@ -178,7 +172,7 @@ static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> struct rte_eth_dev_info dev_info;
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1;
> const uint16_t tx_rings = 1;
> int retval;
> @@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
> index 7ffccc8369dc..c32d2e12e633 100644
> --- a/examples/qos_meter/main.c
> +++ b/examples/qos_meter/main.c
> @@ -52,7 +52,6 @@ static struct rte_mempool *pool = NULL;
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .offloads = DEV_RX_OFFLOAD_CHECKSUM,
> },
> diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
> index 1abe003fc6ae..1367569c65db 100644
> --- a/examples/qos_sched/init.c
> +++ b/examples/qos_sched/init.c
> @@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> .split_hdr_size = 0,
> },
> .txmode = {
> diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
> index ab6fa7d56c5d..6845c396b8d9 100644
> --- a/examples/rxtx_callbacks/main.c
> +++ b/examples/rxtx_callbacks/main.c
> @@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
> static const char usage[] =
> "%s EAL_ARGS -- [-t]\n";
>
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static struct {
> uint64_t total_cycles;
> uint64_t total_queue_cycles;
> @@ -124,7 +118,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -137,6 +131,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
> index ae9bbee8d820..fd7207aee758 100644
> --- a/examples/skeleton/basicfwd.c
> +++ b/examples/skeleton/basicfwd.c
> @@ -17,14 +17,6 @@
> #define MBUF_CACHE_SIZE 250
> #define BURST_SIZE 32
>
> -/* Configuration of ethernet ports. 8< */
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -/* >8 End of configuration of ethernet ports. */
> -
> /* basicfwd.c: Basic DPDK skeleton forwarding example. */
>
> /*
> @@ -36,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> uint16_t nb_rxd = RX_RING_SIZE;
> uint16_t nb_txd = TX_RING_SIZE;
> @@ -48,6 +40,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index d0bf1f31e36a..da381b41c0c5 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -44,6 +44,7 @@
> #define BURST_RX_RETRIES 4 /* Number of retries on RX. */
>
> #define JUMBO_FRAME_MAX_SIZE 0x2600
> +#define MAX_MTU (JUMBO_FRAME_MAX_SIZE - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN))
>
> /* State of virtio device. */
> #define DEVICE_MAC_LEARNING 0
> @@ -633,8 +634,7 @@ us_vhost_parse_args(int argc, char **argv)
> if (ret) {
> vmdq_conf_default.rxmode.offloads |=
> DEV_RX_OFFLOAD_JUMBO_FRAME;
> - vmdq_conf_default.rxmode.max_rx_pkt_len
> - = JUMBO_FRAME_MAX_SIZE;
> + vmdq_conf_default.rxmode.mtu = MAX_MTU;
> }
> break;
>
> diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
> index e59fb7d3478b..e19d79a40802 100644
> --- a/examples/vm_power_manager/main.c
> +++ b/examples/vm_power_manager/main.c
> @@ -51,17 +51,10 @@
> static uint32_t enabled_port_mask;
> static volatile bool force_quit;
>
> -/****************/
> -static const struct rte_eth_conf port_conf_default = {
> - .rxmode = {
> - .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
> - },
> -};
> -
> static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> {
> - struct rte_eth_conf port_conf = port_conf_default;
> + struct rte_eth_conf port_conf;
> const uint16_t rx_rings = 1, tx_rings = 1;
> int retval;
> uint16_t q;
> @@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + memset(&port_conf, 0, sizeof(struct rte_eth_conf));
> +
> retval = rte_eth_dev_info_get(port, &dev_info);
> if (retval != 0) {
> printf("Error during getting device (port %u) info: %s\n",
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index daf5ca924221..4d0584af52e3 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1324,6 +1324,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
> return ret;
> }
>
> +static uint16_t
> +eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
> +{
> + uint16_t overhead_len;
> +
> + if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
> + overhead_len = max_rx_pktlen - max_mtu;
> + else
> + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> +
> + return overhead_len;
> +}
> +
> int
> rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> const struct rte_eth_conf *dev_conf)
> @@ -1331,6 +1344,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> struct rte_eth_dev *dev;
> struct rte_eth_dev_info dev_info;
> struct rte_eth_conf orig_conf;
> + uint32_t max_rx_pktlen;
> uint16_t overhead_len;
> int diag;
> int ret;
> @@ -1381,11 +1395,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> goto rollback;
>
> /* Get the real Ethernet overhead length */
> - if (dev_info.max_mtu != UINT16_MAX &&
> - dev_info.max_rx_pktlen > dev_info.max_mtu)
> - overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
> - else
> - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
>
> /* If number of queues specified by application for both Rx and Tx is
> * zero, use driver preferred values. This cannot be done individually
> @@ -1454,49 +1465,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> /*
> - * If jumbo frames are enabled, check that the maximum RX packet
> - * length is supported by the configured device.
> + * Check that the maximum RX packet length is supported by the
> + * configured device.
> */
> - if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> - if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - dev_info.max_rx_pktlen);
> - ret = -EINVAL;
> - goto rollback;
> - } else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
> - RTE_ETHDEV_LOG(ERR,
> - "Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
> - port_id, dev_conf->rxmode.max_rx_pkt_len,
> - (unsigned int)RTE_ETHER_MIN_LEN);
> - ret = -EINVAL;
> - goto rollback;
> - }
> + if (dev_conf->rxmode.mtu == 0)
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> + max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
> + if (max_rx_pktlen > dev_info.max_rx_pktlen) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
> + port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
> + ret = -EINVAL;
> + goto rollback;
> + } else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
> + RTE_ETHDEV_LOG(ERR,
> + "Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
> + port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
> + ret = -EINVAL;
> + goto rollback;
> + }
>
> - /* Scale the MTU size to adapt max_rx_pkt_len */
> - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
> - overhead_len;
> - } else {
> - uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
> - if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
> - pktlen > RTE_ETHER_MTU + overhead_len)
> + if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
> + if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
> + dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
> /* Use default value */
> - dev->data->dev_conf.rxmode.max_rx_pkt_len =
> - RTE_ETHER_MTU + overhead_len;
> + dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
> }
>
> + dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
> +
> /*
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> if (dev_conf->rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> goto rollback;
> @@ -2156,13 +2163,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> * If LRO is enabled, check that the maximum aggregated packet
> * size is supported by the configured device.
> */
> + /* Get the real Ethernet overhead length */
> if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> + uint16_t overhead_len;
> + uint32_t max_rx_pktlen;
> + int ret;
> +
> + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
> + dev_info.max_mtu);
> + max_rx_pktlen = dev->data->mtu + overhead_len;
> if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
> - dev->data->dev_conf.rxmode.max_lro_pkt_size =
> - dev->data->dev_conf.rxmode.max_rx_pkt_len;
> - int ret = eth_dev_check_lro_pkt_size(port_id,
> + dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
> + ret = eth_dev_check_lro_pkt_size(port_id,
> dev->data->dev_conf.rxmode.max_lro_pkt_size,
> - dev->data->dev_conf.rxmode.max_rx_pkt_len,
> + max_rx_pktlen,
> dev_info.max_lro_pkt_size);
> if (ret != 0)
> return ret;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index afdc53b674cc..9fba2bd73c84 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -416,7 +416,7 @@ enum rte_eth_tx_mq_mode {
> struct rte_eth_rxmode {
> /** The multi-queue packet distribution mode to be used, e.g. RSS. */
> enum rte_eth_rx_mq_mode mq_mode;
> - uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
> + uint32_t mtu; /**< Requested MTU. */
> /** Maximum allowed size of LRO aggregated packet. */
> uint32_t max_lro_pkt_size;
> uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
> diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
> index 0036bda7465c..1491c815c312 100644
> --- a/lib/ethdev/rte_ethdev_trace.h
> +++ b/lib/ethdev/rte_ethdev_trace.h
> @@ -28,7 +28,7 @@ RTE_TRACE_POINT(
> rte_trace_point_emit_u16(nb_tx_q);
> rte_trace_point_emit_u32(dev_conf->link_speeds);
> rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
> - rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
> + rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
> rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
> rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
> rte_trace_point_emit_u64(dev_conf->txmode.offloads);
> --
> 2.31.1
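
For readers updating their own applications: a minimal sketch of port configuration after this rename, where rxmode.mtu is set directly instead of rxmode.max_rx_pkt_len. This assumes the post-patch struct rte_eth_rxmode with the mtu field; configure_port() is just an illustrative name, and queue setup/device start are omitted.

#include <string.h>
#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mtu = mtu;       /* 0 lets ethdev default to RTE_ETHER_MTU */
	if (mtu > RTE_ETHER_MTU)     /* jumbo traffic: allow multi-segment Tx */
		conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

	/* 1 Rx queue, 1 Tx queue */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
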
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Thread overview: 112+ messages
2021-07-09 17:29 [dpdk-dev] [PATCH 1/4] " Ferruh Yigit
2021-07-09 17:29 ` [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-13 13:48 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-18 7:49 ` Xu, Rosen
2021-07-19 14:38 ` Ajit Khaparde
2021-07-09 17:29 ` [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-13 13:56 ` Andrew Rybchenko
2021-07-18 7:52 ` Xu, Rosen
2021-07-09 17:29 ` [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-13 14:07 ` Andrew Rybchenko
2021-07-21 12:26 ` Ferruh Yigit
2021-07-21 12:39 ` Ferruh Yigit
2021-07-18 7:53 ` Xu, Rosen
2021-07-13 12:47 ` [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length Andrew Rybchenko
2021-07-21 16:46 ` Ferruh Yigit
2021-07-22 1:31 ` Ajit Khaparde
2021-07-22 10:27 ` Ferruh Yigit
2021-07-22 10:38 ` Andrew Rybchenko
2021-07-18 7:45 ` Xu, Rosen
2021-07-19 3:35 ` Huisong Li
2021-07-21 15:29 ` Ferruh Yigit
2021-07-22 7:21 ` Huisong Li
2021-07-22 10:12 ` Ferruh Yigit
2021-07-22 10:15 ` Andrew Rybchenko
2021-07-22 14:43 ` Stephen Hemminger
2021-09-17 1:08 ` Min Hu (Connor)
2021-09-17 8:04 ` Ferruh Yigit
2021-09-17 8:16 ` Min Hu (Connor)
2021-09-17 8:17 ` Min Hu (Connor)
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 1/6] " Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-07-23 3:29 ` Huisong Li
2021-07-22 17:21 ` [dpdk-dev] [PATCH v2 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-04 5:08 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-04 5:09 ` Somnath Kotur
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
[not found] ` <CAOBf=muYkU2dwgi3iC8Q7pdSNTJsMUwWYdXj14KeN_=_mUGa0w@mail.gmail.com>
2021-10-04 7:55 ` Somnath Kotur
2021-10-05 16:48 ` Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-01 14:36 ` [dpdk-dev] [PATCH v3 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-01 15:07 ` [dpdk-dev] [PATCH v3 1/6] ethdev: fix max Rx packet length Stephen Hemminger
2021-10-05 16:46 ` Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 " Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 8:39 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 8:38 ` Xu, Rosen
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-05 17:16 ` [dpdk-dev] [PATCH v4 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-05 22:07 ` [dpdk-dev] [PATCH v4 1/6] ethdev: fix max Rx packet length Ajit Khaparde
2021-10-06 6:08 ` Somnath Kotur
2021-10-08 8:36 ` Xu, Rosen [this message]
2021-10-10 6:30 ` Matan Azrad
2021-10-11 21:59 ` Ferruh Yigit
2021-10-12 7:03 ` Matan Azrad
2021-10-12 11:03 ` Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 " Ferruh Yigit
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-08 17:20 ` Ananyev, Konstantin
2021-10-09 10:58 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-08 17:19 ` Ananyev, Konstantin
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-08 17:11 ` Ananyev, Konstantin
2021-10-09 11:09 ` lihuisong (C)
2021-10-10 5:46 ` Matan Azrad
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-08 16:51 ` Ananyev, Konstantin
2021-10-11 19:50 ` Ferruh Yigit
2021-10-09 11:43 ` lihuisong (C)
2021-10-11 20:15 ` Ferruh Yigit
2021-10-12 4:02 ` lihuisong (C)
2021-10-07 16:56 ` [dpdk-dev] [PATCH v5 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-08 16:53 ` Ananyev, Konstantin
2021-10-08 15:57 ` [dpdk-dev] [PATCH v5 1/6] ethdev: fix max Rx packet length Ananyev, Konstantin
2021-10-11 19:47 ` Ferruh Yigit
2021-10-09 10:56 ` lihuisong (C)
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 " Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-12 17:20 ` Hyong Youb Kim (hyonkim)
2021-10-13 7:16 ` Michał Krawczyk
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-12 5:58 ` Andrew Rybchenko
2021-10-11 23:53 ` [dpdk-dev] [PATCH v6 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-12 6:02 ` [dpdk-dev] [PATCH v6 1/6] ethdev: fix max Rx packet length Andrew Rybchenko
2021-10-12 9:42 ` Ananyev, Konstantin
2021-10-13 7:08 ` Xu, Rosen
2021-10-15 1:31 ` Hyong Youb Kim (hyonkim)
2021-10-16 0:24 ` Ferruh Yigit
2021-10-18 8:54 ` Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 " Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 2/6] ethdev: move jumbo frame offload check to library Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 3/6] ethdev: move check to library for MTU set Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 4/6] ethdev: remove jumbo offload flag Ferruh Yigit
2021-10-21 0:43 ` Thomas Monjalon
2021-10-22 11:25 ` Ferruh Yigit
2021-10-22 11:29 ` Andrew Rybchenko
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 5/6] ethdev: unify MTU checks Ferruh Yigit
2021-10-18 13:48 ` [dpdk-dev] [PATCH v7 6/6] examples/ip_reassembly: remove unused parameter Ferruh Yigit
2021-10-18 17:31 ` [dpdk-dev] [PATCH v7 1/6] ethdev: fix max Rx packet length Ferruh Yigit
2021-11-05 14:19 ` Xueming(Steven) Li
2021-11-05 14:39 ` Ferruh Yigit