From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 56E45A09F0;
	Thu, 17 Dec 2020 10:28:27 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 2CAF2CA10;
	Thu, 17 Dec 2020 10:28:26 +0100 (CET)
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
 by dpdk.org (Postfix) with ESMTP id 6864ACA10
 for <dev@dpdk.org>; Thu, 17 Dec 2020 10:28:24 +0100 (CET)
IronPort-SDR: 3XSswR7C2NslBIQptjU+08Ni2SVLFSoBo01fBrukc1Ei40mgmaOM9hQQWqu40d+MryFKx2pZ5G
 ZtXO3qFk/kxg==
X-IronPort-AV: E=McAfee;i="6000,8403,9837"; a="193610789"
X-IronPort-AV: E=Sophos;i="5.78,426,1599548400"; d="scan'208";a="193610789"
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Dec 2020 01:28:22 -0800
IronPort-SDR: /JG8AjLKj9EAqVsmqWUo4YVYp69TWsxQgmM9QfkTVkFPjhoDeF4qq2a4cswKYFEZxJvgGvNQXS
 nd40KgCzKtMg==
X-IronPort-AV: E=Sophos;i="5.78,426,1599548400"; d="scan'208";a="369745126"
Received: from intel-npg-odc-srv01.cd.intel.com ([10.240.178.136])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 17 Dec 2020 01:27:52 -0800
From: Steve Yang <stevex.yang@intel.com>
To: dev@dpdk.org
Cc: wenzhuo.lu@intel.com, beilei.xing@intel.com, bernard.iremonger@intel.com,
 asomalap@amd.com, rahul.lakkireddy@chelsio.com, hemant.agrawal@nxp.com,
 sachin.saxena@oss.nxp.com, jia.guo@intel.com, haiyue.wang@intel.com,
 g.singh@nxp.com, xuanziyang2@huawei.com, cloud.wangxiaoyun@huawei.com,
 zhouguoyang@huawei.com, xavier.huwei@huawei.com, humin29@huawei.com,
 yisen.zhuang@huawei.com, oulijun@huawei.com, jingjing.wu@intel.com,
 qiming.yang@intel.com, qi.z.zhang@intel.com, rosen.xu@intel.com,
 sthotton@marvell.com, srinivasan@marvell.com, heinrich.kuhn@netronome.com,
 hkalra@marvell.com, jerinj@marvell.com, ndabilpuram@marvell.com,
 kirankumark@marvell.com, rmody@marvell.com, shshaikh@marvell.com,
 andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com, thomas@monjalon.net,
 ferruh.yigit@intel.com, ivan.boule@6wind.com, konstantin.ananyev@intel.com,
 samuel.gauthier@6wind.com, david.marchand@6wind.com, shahafs@mellanox.com,
 stephen@networkplumber.org, maxime.coquelin@redhat.com,
 olivier.matz@6wind.com, lihuisong@huawei.com, shreyansh.jain@nxp.com,
 wei.dai@intel.com, fengchunsong@huawei.com, chenhao164@huawei.com,
 tangchengchang@hisilicon.com, helin.zhang@intel.com, yanglong.wu@intel.com,
 xiaolong.ye@intel.com, ting.xu@intel.com, xiaoyun.li@intel.com,
 dan.wei@intel.com, andy.pei@intel.com, vattunuru@marvell.com,
 skori@marvell.com, sony.chacko@qlogic.com, bruce.richardson@intel.com,
 ivan.malov@oktetlabs.ru, rad@semihalf.com, slawomir.rosek@semihalf.com,
 kamil.rytarowski@caviumnetworks.com, wei.zhao1@intel.com,
 junyux.jiang@intel.com, kumaras@chelsio.com, girish.nandibasappa@amd.com,
 rolf.neugebauer@netronome.com, alejandro.lucero@netronome.com,
 Steve Yang <stevex.yang@intel.com>
Date: Thu, 17 Dec 2020 09:22:51 +0000
Message-Id: <20201217092312.27033-2-stevex.yang@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201217092312.27033-1-stevex.yang@intel.com>
References: <20201209031628.29572-1-stevex.yang@intel.com>
 <20201217092312.27033-1-stevex.yang@intel.com>
Subject: [dpdk-dev] [PATCH v2 01/22] ethdev: fix MTU size exceeding max Rx
	packet length
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

If the max Rx packet length is smaller than MTU + Ethernet overhead, all
MTU-sized packets will be dropped.

Update the MTU size according to the max Rx packet length and the Ethernet
overhead.

Fixes: 59d0ecdbf0e1 ("ethdev: MTU accessors")

Signed-off-by: Steve Yang <stevex.yang@intel.com>
---
 lib/librte_ethdev/rte_ethdev.c | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 17ddacc78d..ff6a1e675f 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1292,6 +1292,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_conf orig_conf;
+	uint16_t overhead_len;
 	int diag;
 	int ret;
 
@@ -1323,6 +1324,15 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (ret != 0)
 		goto rollback;
 
+	/* Get the real Ethernet overhead length */
+	if (dev_info.max_mtu &&
+	    dev_info.max_mtu != UINT16_MAX &&
+	    dev_info.max_rx_pktlen &&
+	    dev_info.max_rx_pktlen > dev_info.max_mtu)
+		overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
+	else
+		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
 	/* If number of queues specified by application for both Rx and Tx is
 	 * zero, use driver preferred values. This cannot be done individually
 	 * as it is valid for either Tx or Rx (but not both) to be zero.
@@ -1410,13 +1420,18 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 			goto rollback;
 		}
 	} else {
-		if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN ||
-			dev_conf->rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN)
+		uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
+		if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
+			pktlen > RTE_ETHER_MTU + overhead_len)
 			/* Use default value */
 			dev->data->dev_conf.rxmode.max_rx_pkt_len =
-							RTE_ETHER_MAX_LEN;
+						RTE_ETHER_MTU + overhead_len;
 	}
 
+	/* Scale the MTU size to match max_rx_pkt_len */
+	dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
+				overhead_len;
+
 	/*
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
-- 
2.17.1