From mboxrd@z Thu Jan  1 00:00:00 1970
From: Steve Yang <stevex.yang@intel.com>
To: dev@dpdk.org
Cc: hemant.agrawal@nxp.com, sachin.saxena@oss.nxp.com, jia.guo@intel.com,
	haiyue.wang@intel.com, xavier.huwei@huawei.com, humin29@huawei.com,
	yisen.zhuang@huawei.com, oulijun@huawei.com, beilei.xing@intel.com,
	jingjing.wu@intel.com, qiming.yang@intel.com, qi.z.zhang@intel.com,
	rosen.xu@intel.com, hkalra@marvell.com, jerinj@marvell.com,
	ndabilpuram@marvell.com, kirankumark@marvell.com, rmody@marvell.com,
	shshaikh@marvell.com, andrew.rybchenko@oktetlabs.ru, mczekaj@marvell.com,
	wei.dai@intel.com, fengchunsong@huawei.com, lihuisong@huawei.com,
	ferruh.yigit@intel.com, chenhao164@huawei.com, helin.zhang@intel.com,
	konstantin.ananyev@intel.com, yanglong.wu@intel.com, xiaolong.ye@intel.com,
	ting.xu@intel.com, xiaoyun.li@intel.com, wenzhuo.lu@intel.com,
	andy.pei@intel.com, dan.wei@intel.com, skori@marvell.com,
	vattunuru@marvell.com, sony.chacko@qlogic.com, bruce.richardson@intel.com,
	ivan.malov@oktetlabs.ru, zyta.szpak@semihalf.com,
	slawomir.rosek@semihalf.com, rad@semihalf.com,
	Steve Yang <stevex.yang@intel.com>
Date: Wed, 9 Dec 2020 03:16:22 +0000
Message-Id: <20201209031628.29572-7-stevex.yang@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201209031628.29572-1-stevex.yang@intel.com>
References: <20201209031628.29572-1-stevex.yang@intel.com>
Subject: [dpdk-dev] [PATCH v1 06/12] net/ice: fix the jumbo frame flag condition

The jumbo frame check used 'RTE_ETHER_MAX_LEN' as its boundary condition,
but the Ethernet overhead is larger than 18 bytes (14-byte header plus
4-byte CRC) when dual VLAN tags are supported. As a result, the jumbo frame
Rx offload flag was set incorrectly when the MTU is 'RTE_ETHER_MTU'.
Change the boundary condition to be based on 'RTE_ETHER_MTU' plus the
driver-specific overhead ('ICE_ETH_MAX_LEN') instead.
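To illustrate why the old condition misfires, here is a minimal standalone
sketch of the boundary arithmetic (not part of the patch itself; the literal
values mirror the rte_ether.h and ice driver definitions so the snippet
builds without the DPDK headers):

#include <stdint.h>
#include <stdio.h>

/* Values below mirror the rte_ether.h and ice driver definitions. */
#define RTE_ETHER_HDR_LEN	14
#define RTE_ETHER_CRC_LEN	4
#define RTE_ETHER_MTU		1500
#define RTE_ETHER_MAX_LEN	1518	/* 1500 + 14 + 4, untagged frame only */
#define ICE_VLAN_TAG_SIZE	4

/* Driver overhead reserves room for two VLAN tags (QinQ): 26 bytes. */
#define ICE_ETH_OVERHEAD \
	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE * 2)
#define ICE_ETH_MAX_LEN	(RTE_ETHER_MTU + ICE_ETH_OVERHEAD)	/* 1526 */

int main(void)
{
	uint16_t mtu = RTE_ETHER_MTU;			/* plain 1500-byte MTU */
	uint32_t frame_size = mtu + ICE_ETH_OVERHEAD;	/* 1526 bytes */

	/* Old condition: 1526 > 1518, so a non-jumbo MTU raised the flag. */
	printf("old check sets jumbo flag: %d\n",
	       frame_size > RTE_ETHER_MAX_LEN);
	/* New condition: compare the MTU itself; 1500 > 1500 is false. */
	printf("new check sets jumbo flag: %d\n", mtu > RTE_ETHER_MTU);
	return 0;
}

With the 26-byte dual-VLAN overhead, a plain 1500-byte MTU already yields a
1526-byte frame, which is why comparing the MTU directly against
'RTE_ETHER_MTU' is the reliable test.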
Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
Fixes: 1b009275e2c8 ("net/ice: add Rx queue init in DCF")
Fixes: ae2bdd0219cb ("net/ice: support MTU setting")
Fixes: 50370662b727 ("net/ice: support device and queue ops")

Signed-off-by: Steve Yang <stevex.yang@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c |  8 ++++----
 drivers/net/ice/ice_ethdev.c     |  2 +-
 drivers/net/ice/ice_ethdev.h     |  1 +
 drivers/net/ice/ice_rxtx.c       | 10 +++++-----
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b0b2ecb0d6..e5c877805f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -60,23 +60,23 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	 * correctly.
 	 */
 	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
-		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		if (max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
 				    "larger than %u and smaller than %u, "
 				    "as jumbo frame is enabled",
-				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_ETH_MAX_LEN,
 				    (uint32_t)ICE_FRAME_SIZE_MAX);
 			return -EINVAL;
 		}
 	} else {
 		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
-		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+		    max_pkt_len > ICE_ETH_MAX_LEN) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
 				    "larger than %u and smaller than %u, "
 				    "as jumbo frame is disabled",
 				    (uint32_t)RTE_ETHER_MIN_LEN,
-				    (uint32_t)RTE_ETHER_MAX_LEN);
+				    (uint32_t)ICE_ETH_MAX_LEN);
 			return -EINVAL;
 		}
 	}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9a5d6a559f..fc99f4bad0 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3904,7 +3904,7 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EBUSY;
 	}
 
-	if (frame_size > RTE_ETHER_MAX_LEN)
+	if (mtu > RTE_ETHER_MTU)
 		dev_data->dev_conf.rxmode.offloads |=
 			DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 899f446cde..2b03c59671 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -135,6 +135,7 @@
  */
 #define ICE_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_VLAN_TAG_SIZE * 2)
+#define ICE_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_ETH_OVERHEAD)
 
 #define ICE_RXTX_BYTES_HIGH(bytes) ((bytes) & ~ICE_40_BIT_MASK)
 #define ICE_RXTX_BYTES_LOW(bytes) ((bytes) & ICE_40_BIT_MASK)
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5fbd68eafc..0d9ad75782 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -246,23 +246,23 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		      dev->data->dev_conf.rxmode.max_rx_pkt_len);
 
 	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
-		if (rxq->max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
 				    "be larger than %u and smaller than %u,"
 				    "as jumbo frame is enabled",
-				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_ETH_MAX_LEN,
 				    (uint32_t)ICE_FRAME_SIZE_MAX);
 			return -EINVAL;
 		}
 	} else {
 		if (rxq->max_pkt_len < RTE_ETHER_MIN_LEN ||
-		    rxq->max_pkt_len > RTE_ETHER_MAX_LEN) {
+		    rxq->max_pkt_len > ICE_ETH_MAX_LEN) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
 				    "larger than %u and smaller than %u, "
 				    "as jumbo frame is disabled",
 				    (uint32_t)RTE_ETHER_MIN_LEN,
-				    (uint32_t)RTE_ETHER_MAX_LEN);
+				    (uint32_t)ICE_ETH_MAX_LEN);
 			return -EINVAL;
 		}
 	}
@@ -701,7 +701,7 @@ ice_fdir_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S;
 	rx_ctx.dtype = 0; /* No Header Split mode */
 	rx_ctx.dsize = 1; /* 32B descriptors */
-	rx_ctx.rxmax = RTE_ETHER_MAX_LEN;
+	rx_ctx.rxmax = ICE_ETH_MAX_LEN;
 	/* TPH: Transaction Layer Packet (TLP) processing hints */
 	rx_ctx.tphrdesc_ena = 1;
 	rx_ctx.tphwdesc_ena = 1;
-- 
2.17.1