* [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured @ 2020-09-16 5:52 SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang ` (5 more replies) 0 siblings, 6 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side.

Configure the correct default max packet size in the dev_configure ops.

SteveX Yang (5):
  net/e1000: fix max mtu size packets with vlan tag cannot be received by default
  net/igc: fix max mtu size packets with vlan tag cannot be received by default
  net/ice: fix max mtu size packets with vlan tag cannot be received by default
  net/iavf: fix max mtu size packets with vlan tag cannot be received by default
  net/i40e: fix max mtu size packets with vlan tag cannot be received by default

 drivers/net/e1000/em_ethdev.c     | 11 +++++++++++
 drivers/net/i40e/i40e_ethdev.c    | 10 ++++++++++
 drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
 drivers/net/iavf/iavf_ethdev.c    | 13 +++++++++++--
 drivers/net/ice/ice_ethdev.c      | 14 ++++++++++++--
 drivers/net/igc/igc_ethdev.c      | 10 ++++++++++
 6 files changed, 65 insertions(+), 4 deletions(-)

--
2.17.1

^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v1 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang @ 2020-09-16 5:52 ` SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 2/5] net/igc: " SteveX Yang ` (4 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. e1000 can support a single VLAN tag, which needs 4 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/e1000/em_ethdev.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 902b1cdca..68ff892be 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -437,10 +437,21 @@ eth_em_configure(struct rte_eth_dev *dev)
 {
 	struct e1000_interrupt *intr =
 		E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

 	PMD_INIT_FUNC_TRACE();
 	intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;

+	/**
+	 * Considering vlan tag packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + E1000_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	PMD_INIT_FUNC_TRACE();

 	return 0;
--
2.17.1
* [dpdk-dev] [PATCH v1 2/5] net/igc: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang @ 2020-09-16 5:52 ` SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 3/5] net/ice: " SteveX Yang ` (3 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. igc can support a single VLAN tag, which needs 4 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/igc/igc_ethdev.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 6ab3ee909..6113793a2 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -341,10 +341,20 @@ static int
 eth_igc_configure(struct rte_eth_dev *dev)
 {
 	struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
 	int ret;

 	PMD_INIT_FUNC_TRACE();

+	/* Considering vlan tag packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + IGC_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	ret = igc_check_mq_mode(dev);
 	if (ret != 0)
 		return ret;
--
2.17.1
* [dpdk-dev] [PATCH v1 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 2/5] net/igc: " SteveX Yang @ 2020-09-16 5:52 ` SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 4/5] net/iavf: " SteveX Yang ` (2 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. ice can support dual VLAN tags, which need 8 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 4170a5446..f5bf05bb8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3129,6 +3129,7 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
 	int ret;

 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
@@ -3137,8 +3138,17 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;

-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + ICE_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}

 	ret = ice_init_rss(pf);
 	if (ret) {
--
2.17.1
* [dpdk-dev] [PATCH v1 4/5] net/iavf: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang ` (2 preceding siblings ...) 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 3/5] net/ice: " SteveX Yang @ 2020-09-16 5:52 ` SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 5/5] net/i40e: " SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. iavf can support dual VLAN tags, which need 8 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: 02d212ca3125 ("net/iavf: rename remaining avf strings")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/iavf/iavf_ethdev.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 8fe81409c..ca4c52a52 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -217,7 +217,7 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	struct iavf_adapter *ad =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);
-	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

 	ad->rx_bulk_alloc_allowed = true;
 	/* Initialize to TRUE. If any of Rx queues doesn't meet the
@@ -229,9 +229,18 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + IAVF_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	/* Vlan stripping setting */
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
 			iavf_enable_vlan_strip(ad);
 		else
 			iavf_disable_vlan_strip(ad);
--
2.17.1
* [dpdk-dev] [PATCH v1 5/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang ` (3 preceding siblings ...) 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 4/5] net/iavf: " SteveX Yang @ 2020-09-16 5:52 ` SteveX Yang 2020-09-16 14:41 ` Ananyev, Konstantin 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 1 reply; 94+ messages in thread
From: SteveX Yang @ 2020-09-16 5:52 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. But i40e/i40evf should support dual VLAN tags, which need 8 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    | 10 ++++++++++
 drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
 2 files changed, 21 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 841447228..787ff61c0 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1917,6 +1917,7 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode;
 	int i, ret;
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

 	ret = i40e_dev_sync_phy_type(hw);
 	if (ret)
@@ -1930,6 +1931,15 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;

+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index b755350cd..7410563db 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1669,6 +1669,7 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				dev->data->nb_tx_queues);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

 	/* Initialize to TRUE. If any of Rx queues doesn't meet the bulk
 	 * allocation or vector Rx preconditions we will reset it.
@@ -1681,6 +1682,16 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
 	dev->data->dev_conf.intr_conf.lsc =
 		!!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC);

+
+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
 		struct i40e_hw *hw;
 		int ret;
--
2.17.1
* Re: [dpdk-dev] [PATCH v1 5/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 5/5] net/i40e: " SteveX Yang @ 2020-09-16 14:41 ` Ananyev, Konstantin [not found] ` <DM6PR11MB4362E5FF332551D12AA20017F93E0@DM6PR11MB4362.namprd11.prod.outlook.com> 0 siblings, 1 reply; 94+ messages in thread
From: Ananyev, Konstantin @ 2020-09-16 14:41 UTC (permalink / raw)
To: Yang, SteveX, dev
Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei, Yang, SteveX

> testpmd initializes the default max packet length to 1518, which doesn't
> include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized
> packet is sent with a VLAN tag, the frame length exceeds 1518 and the
> packet is dropped directly on the NIC hardware side. But i40e/i40evf
> should support dual VLAN tags, which need 8 more bytes in the max packet
> size, so configure the correct max packet size in the dev_configure ops.
>
> Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU")
>
> Signed-off-by: SteveX Yang <stevex.yang@intel.com>
> ---
>  drivers/net/i40e/i40e_ethdev.c    | 10 ++++++++++
>  drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
>  2 files changed, 21 insertions(+)
>
> @@ -1930,6 +1931,15 @@ i40e_dev_configure(struct rte_eth_dev *dev)
>  	ad->tx_simple_allowed = true;
>  	ad->tx_vec_allowed = true;
>
> +	/* Considering QinQ packet, max frame size should be MTU and
> +	 * corresponding ether overhead.
> +	 */
> +	if (dev->data->mtu == RTE_ETHER_MTU &&
> +			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {

Wonder why those particular max_rx_pkt_len and mtu values are important? Shouldn't we always do the same calculations here as we do in i40e_dev_mtu_set()?

> +		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
> +		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	}
> +
>  	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
>  		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>
> @@ -1681,6 +1682,16 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
>  	dev->data->dev_conf.intr_conf.lsc =
>  		!!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC);
>
> +
> +	/* Considering QinQ packet, max frame size should be MTU and
> +	 * corresponding ether overhead.
> +	 */
> +	if (dev->data->mtu == RTE_ETHER_MTU &&
> +			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
> +		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
> +		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> +	}
> +
>  	if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
>  		struct i40e_hw *hw;
>  		int ret;
> --
> 2.17.1
* Re: [dpdk-dev] [PATCH v1 5/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default [not found] ` <DM6PR11MB4362E5FF332551D12AA20017F93E0@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-09-17 12:18 ` Ananyev, Konstantin 0 siblings, 0 replies; 94+ messages in thread
From: Ananyev, Konstantin @ 2020-09-17 12:18 UTC (permalink / raw)
To: Yang, SteveX, dev
Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei

> > Subject: RE: [dpdk-dev] [PATCH v1 5/5] net/i40e: fix max mtu size packets
> > with vlan tag cannot be received by default
> >
> > > testpmd initializes the default max packet length to 1518, which doesn't
> > > include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized
> > > packet is sent with a VLAN tag, the frame length exceeds 1518 and the
> > > packet is dropped directly on the NIC hardware side. But i40e/i40evf
> > > should support dual VLAN tags, which need 8 more bytes in the max packet
> > > size, so configure the correct max packet size in the dev_configure ops.
> > >
> > > Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU")
> > >
> > > Signed-off-by: SteveX Yang <stevex.yang@intel.com>
> > > ---
> > >  drivers/net/i40e/i40e_ethdev.c    | 10 ++++++++++
> > >  drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
> > >  2 files changed, 21 insertions(+)
> > >
> > > @@ -1930,6 +1931,15 @@ i40e_dev_configure(struct rte_eth_dev *dev)
> > >  	ad->tx_simple_allowed = true;
> > >  	ad->tx_vec_allowed = true;
> > >
> > > +	/* Considering QinQ packet, max frame size should be MTU and
> > > +	 * corresponding ether overhead.
> > > +	 */
> > > +	if (dev->data->mtu == RTE_ETHER_MTU &&
> > > +			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
> >
> > Wonder why those particular max_rx_pkt_len and mtu values are important?
> > Shouldn't we always do the same calculations here as we do in
> > i40e_dev_mtu_set()?
>
> The combination of RTE_ETHER_MTU (1500) & RTE_ETHER_MAX_LEN (1518) is
> the generic default from testpmd and other apps. RTE_ETHER_MAX_LEN
> doesn't include the VLAN tag(s) size, hence the frame size only needs
> adjusting to hold a real-MTU-sized packet for this particular condition.

Ok, but the user can overwrite the default values in dev_configure. What would happen if the user set rxmode.max_rx_pkt_len to RTE_ETHER_MAX_LEN + 1 or RTE_ETHER_MAX_LEN - 1?

> > > +		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
> > > +		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > > +	}
> > > +
> > >  	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
> > >  		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> > >
> > > @@ -1681,6 +1682,16 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
> > >  	dev->data->dev_conf.intr_conf.lsc =
> > >  		!!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC);
> > >
> > > +
> > > +	/* Considering QinQ packet, max frame size should be MTU and
> > > +	 * corresponding ether overhead.
> > > +	 */
> > > +	if (dev->data->mtu == RTE_ETHER_MTU &&
> > > +			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
> > > +		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
> > > +		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
> > > +	}
> > > +
> > >  	if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
> > >  		struct i40e_hw *hw;
> > >  		int ret;
> > > --
> > > 2.17.1
* [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang ` (4 preceding siblings ...) 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 5/5] net/i40e: " SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang ` (5 more replies) 5 siblings, 6 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side.

Configure the correct default max packet size in the dev_configure ops.

v2:
* change the max_rx_pkt_len via the mtu_set ops;

SteveX Yang (5):
  net/e1000: fix max mtu size packets with vlan tag cannot be received by default
  net/igc: fix max mtu size packets with vlan tag cannot be received by default
  net/ice: fix max mtu size packets with vlan tag cannot be received by default
  net/i40e: fix max mtu size packets with vlan tag cannot be received by default
  net/iavf: fix max mtu size packets with vlan tag cannot be received by default

 drivers/net/e1000/em_ethdev.c     |  6 ++++++
 drivers/net/i40e/i40e_ethdev.c    |  5 +++++
 drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
 drivers/net/iavf/iavf_ethdev.c    |  5 +++++
 drivers/net/ice/ice_ethdev.c      |  5 +++++
 drivers/net/igc/igc_ethdev.c      |  7 ++++++-
 6 files changed, 38 insertions(+), 1 deletion(-)

--
2.17.1
* [dpdk-dev] [PATCH v2 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 2/5] net/igc: " SteveX Yang ` (4 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. e1000 can support a single VLAN tag, which needs 4 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/e1000/em_ethdev.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 1dc360713..485a30625 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -441,6 +441,12 @@ eth_em_configure(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 	intr->flags |= E1000_FLAG_NEED_LINK_UPDATE;

+	/**
+	 * Considering vlan tag packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	eth_em_mtu_set(dev, dev->data->mtu);
+
 	PMD_INIT_FUNC_TRACE();

 	return 0;
--
2.17.1
* [dpdk-dev] [PATCH v2 2/5] net/igc: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 3/5] net/ice: " SteveX Yang ` (3 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. igc can support a single VLAN tag, which needs 4 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/igc/igc_ethdev.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 810568bc5..36ef325a4 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -341,7 +341,12 @@ eth_igc_configure(struct rte_eth_dev *dev)

 	PMD_INIT_FUNC_TRACE();

-	ret = igc_check_mq_mode(dev);
+	/* Considering vlan tag packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	eth_igc_mtu_set(dev, dev->data->mtu);
+
+	ret = igc_check_mq_mode(dev);
 	if (ret != 0)
 		return ret;
--
2.17.1
* [dpdk-dev] [PATCH v2 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 2/5] net/igc: " SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 4/5] net/i40e: " SteveX Yang ` (2 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread
From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. ice can support dual VLAN tags, which need 8 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/ice/ice_ethdev.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index cfd357b05..0ca6962b1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3157,6 +3157,11 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	ice_mtu_set(dev, dev->data->mtu);
+
 	ret = ice_init_rss(pf);
 	if (ret) {
 		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
--
2.17.1
* [dpdk-dev] [PATCH v2 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang ` (2 preceding siblings ...) 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 3/5] net/ice: " SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-22 10:47 ` Ananyev, Konstantin 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 5/5] net/iavf: " SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 1 reply; 94+ messages in thread
From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw)
To: dev
Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang

testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-sized packet is sent with a VLAN tag, the frame length exceeds 1518 and the packet is dropped directly on the NIC hardware side. But i40e/i40evf should support dual VLAN tags, which need 8 more bytes in the max packet size, so configure the correct max packet size in the dev_configure ops.

Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU")

Signed-off-by: SteveX Yang <stevex.yang@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c    |  5 +++++
 drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 563f21d9d..023c86d66 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1930,6 +1930,11 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;

+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	i40e_dev_mtu_set(dev, dev->data->mtu);
+
 	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 8531cf6b1..b268b3d00 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1669,6 +1669,7 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
 				dev->data->nb_tx_queues);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

 	/* Initialize to TRUE. If any of Rx queues doesn't meet the bulk
 	 * allocation or vector Rx preconditions we will reset it.
@@ -1681,6 +1682,16 @@ i40evf_dev_configure(struct rte_eth_dev *dev)
 	dev->data->dev_conf.intr_conf.lsc =
 		!!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC);

+
+	/* Considering QinQ packet, max frame size should be MTU and
+	 * corresponding ether overhead.
+	 */
+	if (dev->data->mtu == RTE_ETHER_MTU &&
+			rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) {
+		rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD;
+		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	if (num_queue_pairs > vf->vsi_res->num_queue_pairs) {
 		struct i40e_hw *hw;
 		int ret;
--
2.17.1
* Re: [dpdk-dev] [PATCH v2 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 4/5] net/i40e: " SteveX Yang @ 2020-09-22 10:47 ` Ananyev, Konstantin 0 siblings, 0 replies; 94+ messages in thread From: Ananyev, Konstantin @ 2020-09-22 10:47 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei > -----Original Message----- > From: Yang, SteveX <stevex.yang@intel.com> > Sent: Tuesday, September 22, 2020 2:24 AM > To: dev@dpdk.org > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z > <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Yang, SteveX <stevex.yang@intel.com> > Subject: [PATCH v2 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default > > testpmd will initialize default max packet length to 1518 which doesn't > include vlan tag size in ether overheader. Once, send the max mtu length > packet with vlan tag, the max packet length will exceed 1518 that will > cause packets dropped directly from NIC hw side. But for i40e/i40evf, > they should support dual vlan tags that need more 8 bytes for max packet > size, so, configure the correct max packet size in dev_config ops. 
> > Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > drivers/net/i40e/i40e_ethdev.c | 5 +++++ > drivers/net/i40e/i40e_ethdev_vf.c | 11 +++++++++++ > 2 files changed, 16 insertions(+) > > diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c > index 563f21d9d..023c86d66 100644 > --- a/drivers/net/i40e/i40e_ethdev.c > +++ b/drivers/net/i40e/i40e_ethdev.c > @@ -1930,6 +1930,11 @@ i40e_dev_configure(struct rte_eth_dev *dev) > ad->tx_simple_allowed = true; > ad->tx_vec_allowed = true; > > + /* Considering QinQ packet, max frame size should be MTU and > + * corresponding ether overhead. > + */ > + i40e_dev_mtu_set(dev, dev->data->mtu); > + > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; > > diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c > index 8531cf6b1..b268b3d00 100644 > --- a/drivers/net/i40e/i40e_ethdev_vf.c > +++ b/drivers/net/i40e/i40e_ethdev_vf.c > @@ -1669,6 +1669,7 @@ i40evf_dev_configure(struct rte_eth_dev *dev) > I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, > dev->data->nb_tx_queues); > + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; > > /* Initialize to TRUE. If any of Rx queues doesn't meet the bulk > * allocation or vector Rx preconditions we will reset it. > @@ -1681,6 +1682,16 @@ i40evf_dev_configure(struct rte_eth_dev *dev) > dev->data->dev_conf.intr_conf.lsc = > !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC); > > + > + /* Considering QinQ packet, max frame size should be MTU and > + * corresponding ether overhead. 
> + */ > + if (dev->data->mtu == RTE_ETHER_MTU && > + rxmode->max_rx_pkt_len == RTE_ETHER_MAX_LEN) { > + rxmode->max_rx_pkt_len = RTE_ETHER_MTU + I40E_ETH_OVERHEAD; > + rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; > + } Wonder why vf code-path is different here? Can't we also do mtu_set() here? > + > if (num_queue_pairs > vf->vsi_res->num_queue_pairs) { > struct i40e_hw *hw; > int ret; > -- > 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v2 5/5] net/iavf: fix max mtu size packets with vlan tag cannot be received by default 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang ` (3 preceding siblings ...) 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 4/5] net/i40e: " SteveX Yang @ 2020-09-22 1:23 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-22 1:23 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. iavf can support dual vlan tags that need more 8 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 02d212ca3125 ("net/iavf: rename remaining avf strings") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/iavf/iavf_ethdev.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 6bb915d81..47caaeda3 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -226,6 +226,11 @@ iavf_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /* Considering QinQ packet, max frame size should be MTU and + * corresponding ether overhead. + */ + iavf_dev_mtu_set(dev, dev->data->mtu); + /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang ` (4 preceding siblings ...) 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 5/5] net/iavf: " SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang ` (5 more replies) 5 siblings, 6 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-length packet with a VLAN tag is sent, the packet length exceeds 1518 and the packet is dropped directly on the NIC hardware side. Configure the correct default max packet size in the dev_configure ops. v3: * change the i40evf related code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (5): net/e1000: fix max mtu size packets with vlan tag cannot be received by default net/igc: fix max mtu size packets with vlan tag cannot be received by default net/ice: fix max mtu size packets with vlan tag cannot be received by default net/i40e: fix max mtu size packets with vlan tag cannot be received by default net/iavf: fix max mtu size packets with vlan tag cannot be received by default drivers/net/e1000/em_ethdev.c | 6 ++++++ drivers/net/i40e/i40e_ethdev.c | 5 +++++ drivers/net/i40e/i40e_ethdev_vf.c | 5 +++++ drivers/net/iavf/iavf_ethdev.c | 5 +++++ drivers/net/ice/ice_ethdev.c | 5 +++++ drivers/net/igc/igc_ethdev.c | 7 ++++++- 6 files changed, 32 insertions(+), 1 deletion(-) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 2/5] net/igc: " SteveX Yang ` (4 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. e1000 can support single vlan tags that need more 4 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/e1000/em_ethdev.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 1dc360713..485a30625 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -441,6 +441,12 @@ eth_em_configure(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); intr->flags |= E1000_FLAG_NEED_LINK_UPDATE; + /** + * Considering vlan tag packet, max frame size should be MTU and + * corresponding ether overhead. + */ + eth_em_mtu_set(dev, dev->data->mtu); + PMD_INIT_FUNC_TRACE(); return 0; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 2/5] net/igc: fix max mtu size packets with vlan tag cannot be received by default 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 3/5] net/ice: " SteveX Yang ` (3 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. igc can support single vlan tag that need more 4 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/igc/igc_ethdev.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 810568bc5..36ef325a4 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -341,7 +341,12 @@ eth_igc_configure(struct rte_eth_dev *dev) PMD_INIT_FUNC_TRACE(); - ret = igc_check_mq_mode(dev); + /* Considering vlan tag packet, max frame size should be MTU and + * corresponding ether overhead. + */ + eth_igc_mtu_set(dev, dev->data->mtu); + + ret = igc_check_mq_mode(dev); if (ret != 0) return ret; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 2/5] net/igc: " SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 4/5] net/i40e: " SteveX Yang ` (2 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. ice can support dual vlan tags that need more 8 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/ice/ice_ethdev.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index cfd357b05..0ca6962b1 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3157,6 +3157,11 @@ ice_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /* Considering QinQ packet, max frame size should be MTU and + * corresponding ether overhead. + */ + ice_mtu_set(dev, dev->data->mtu); + ret = ice_init_rss(pf); if (ret) { PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang ` (2 preceding siblings ...) 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 3/5] net/ice: " SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 5/5] net/iavf: " SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. But for i40e/i40evf, they should support dual vlan tags that need more 8 bytes for max packet size, so, configure the correct max packet size in dev_config ops. Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/i40e/i40e_ethdev.c | 5 +++++ drivers/net/i40e/i40e_ethdev_vf.c | 5 +++++ 2 files changed, 10 insertions(+) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 6439baf2f..6b8acd07f 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1930,6 +1930,11 @@ i40e_dev_configure(struct rte_eth_dev *dev) ad->tx_simple_allowed = true; ad->tx_vec_allowed = true; + /* Considering QinQ packet, max frame size should be MTU and + * corresponding ether overhead. 
+ */ + i40e_dev_mtu_set(dev, dev->data->mtu); + if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index 8531cf6b1..12e85ba26 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1681,6 +1681,11 @@ i40evf_dev_configure(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.lsc = !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC); + /* Considering QinQ packet, max frame size should be MTU and + * corresponding ether overhead. + */ + i40evf_dev_mtu_set(dev, dev->data->mtu); + if (num_queue_pairs > vf->vsi_res->num_queue_pairs) { struct i40e_hw *hw; int ret; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v3 5/5] net/iavf: fix max mtu size packets with vlan tag cannot be received by default 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang ` (3 preceding siblings ...) 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 4/5] net/i40e: " SteveX Yang @ 2020-09-23 4:09 ` SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-23 4:09 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. iavf can support dual vlan tags that need more 8 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 02d212ca3125 ("net/iavf: rename remaining avf strings") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/iavf/iavf_ethdev.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 440da7d76..20581994c 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -226,6 +226,11 @@ iavf_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /* Considering QinQ packet, max frame size should be MTU and + * corresponding ether overhead. + */ + iavf_dev_mtu_set(dev, dev->data->mtu); + /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang ` (4 preceding siblings ...) 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 5/5] net/iavf: " SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang ` (5 more replies) 5 siblings, 6 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd initializes the default max packet length to 1518, which doesn't include the VLAN tag size in the Ethernet overhead. Once a max-MTU-length packet with a VLAN tag is sent, the packet length exceeds 1518 and the packet is dropped directly on the NIC hardware side. Configure the correct default max packet size in the dev_configure ops. v4: * add the adjust condition for max_rx_pkt_len; v3: * change the i40evf related code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (5): net/e1000: fix max mtu size packets with vlan tag cannot be received by default net/igc: fix max mtu size packets with vlan tag cannot be received by default net/ice: fix max mtu size packets with vlan tag cannot be received by default net/i40e: fix max mtu size packets with vlan tag cannot be received by default net/iavf: fix max mtu size packets with vlan tag cannot be received by default drivers/net/e1000/em_ethdev.c | 12 ++++++++++++ drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- drivers/net/iavf/iavf_ethdev.c | 12 ++++++++++++ drivers/net/ice/ice_ethdev.c | 11 +++++++++++ drivers/net/igc/igc_ethdev.c | 13 ++++++++++++- 6 files changed, 70 insertions(+), 2 deletions(-) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 2/5] net/igc: " SteveX Yang ` (4 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. e1000 can support single vlan tags that need more 4 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/e1000/em_ethdev.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 1dc360713..96ff99951 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -437,10 +437,22 @@ eth_em_configure(struct rte_eth_dev *dev) { struct e1000_interrupt *intr = E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private); + uint16_t frame_size = dev->data->mtu + E1000_ETH_OVERHEAD; + int rc = 0; PMD_INIT_FUNC_TRACE(); intr->flags |= E1000_FLAG_NEED_LINK_UPDATE; + /** + * Considering vlan tag packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. 
+ */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + rc = eth_em_mtu_set(dev, dev->data->mtu); + if (rc != 0) + return rc; + } + PMD_INIT_FUNC_TRACE(); return 0; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 2/5] net/igc: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 3/5] net/ice: " SteveX Yang ` (3 subsequent siblings) 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. igc can support single vlan tag that need more 4 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/igc/igc_ethdev.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 810568bc5..f47ea3e64 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -337,11 +337,22 @@ static int eth_igc_configure(struct rte_eth_dev *dev) { struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev); + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD; int ret; PMD_INIT_FUNC_TRACE(); - ret = igc_check_mq_mode(dev); + /** + * Considering vlan tag packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. 
+ */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = eth_igc_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + + ret = igc_check_mq_mode(dev); if (ret != 0) return ret; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 2/5] net/igc: " SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-09-29 11:59 ` Zhang, Qi Z 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 4/5] net/i40e: " SteveX Yang ` (2 subsequent siblings) 5 siblings, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd will initialize default max packet length to 1518 which doesn't include vlan tag size in ether overheader. Once, send the max mtu length packet with vlan tag, the max packet length will exceed 1518 that will cause packets dropped directly from NIC hw side. ice can support dual vlan tags that need more 8 bytes for max packet size, so, configures the correct max packet size in dev_config ops. Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/ice/ice_ethdev.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index cfd357b05..6b7098444 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; int ret; /* Initialize to TRUE. 
If any of Rx queues doesn't meet the @@ -3157,6 +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /** + * Considering QinQ packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = ice_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + ret = ice_init_rss(pf); if (ret) { PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 3/5] net/ice: " SteveX Yang @ 2020-09-29 11:59 ` Zhang, Qi Z 2020-09-29 23:01 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Zhang, Qi Z @ 2020-09-29 11:59 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Ananyev, Konstantin > -----Original Message----- > From: Yang, SteveX <stevex.yang@intel.com> > Sent: Monday, September 28, 2020 2:56 PM > To: dev@dpdk.org > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, > Qiming <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, > Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX > <stevex.yang@intel.com> > Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot > be received by default > > testpmd will initialize default max packet length to 1518 which doesn't include > vlan tag size in ether overheader. Once, send the max mtu length packet with > vlan tag, the max packet length will exceed 1518 that will cause packets > dropped directly from NIC hw side. > > ice can support dual vlan tags that need more 8 bytes for max packet size, so, > configures the correct max packet size in dev_config ops. 
> > Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > 1 file changed, 11 insertions(+) > > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index > cfd357b05..6b7098444 100644 > --- a/drivers/net/ice/ice_ethdev.c > +++ b/drivers/net/ice/ice_ethdev.c > @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) > struct ice_adapter *ad = > ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > + uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > int ret; > > /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ -3157,6 > +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > dev->data->dev_conf.rxmode.offloads |= > DEV_RX_OFFLOAD_RSS_HASH; > > + /** > + * Considering QinQ packet, max frame size should be equal or > + * larger than total size of MTU and Ether overhead. > + */ > + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { Why we need this check? Can we just call ice_mtu_set directly And please remove above comment, since ether overhead is already considered in ice_mtu_set. > + ret = ice_mtu_set(dev, dev->data->mtu); > + if (ret != 0) > + return ret; > + } > + > ret = ice_init_rss(pf); > if (ret) { > PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > -- > 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-29 11:59 ` Zhang, Qi Z @ 2020-09-29 23:01 ` Ananyev, Konstantin 2020-09-30 0:34 ` Zhang, Qi Z 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-09-29 23:01 UTC (permalink / raw) To: Zhang, Qi Z, Yang, SteveX, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei > > > -----Original Message----- > > From: Yang, SteveX <stevex.yang@intel.com> > > Sent: Monday, September 28, 2020 2:56 PM > > To: dev@dpdk.org > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, > > Qiming <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, > > Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; > > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX > > <stevex.yang@intel.com> > > Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot > > be received by default > > > > testpmd will initialize default max packet length to 1518 which doesn't include > > vlan tag size in ether overheader. Once, send the max mtu length packet with > > vlan tag, the max packet length will exceed 1518 that will cause packets > > dropped directly from NIC hw side. > > > > ice can support dual vlan tags that need more 8 bytes for max packet size, so, > > configures the correct max packet size in dev_config ops. 
> > > > Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > --- > > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > > 1 file changed, 11 insertions(+) > > > > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index > > cfd357b05..6b7098444 100644 > > --- a/drivers/net/ice/ice_ethdev.c > > +++ b/drivers/net/ice/ice_ethdev.c > > @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) > > struct ice_adapter *ad = > > ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > > int ret; > > > > /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ -3157,6 > > +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > > dev->data->dev_conf.rxmode.offloads |= > > DEV_RX_OFFLOAD_RSS_HASH; > > > > +/** > > + * Considering QinQ packet, max frame size should be equal or > > + * larger than total size of MTU and Ether overhead. > > + */ > > > +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > > > Why we need this check? > Can we just call ice_mtu_set directly I think that without that check we can silently overwrite provided by user dev_conf.rxmode.max_rx_pkt_len value. > And please remove above comment, since ether overhead is already considered in ice_mtu_set. > > > > +ret = ice_mtu_set(dev, dev->data->mtu); > > +if (ret != 0) > > +return ret; > > +} > > + > > ret = ice_init_rss(pf); > > if (ret) { > > PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > > -- > > 2.17.1 > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-29 23:01 ` Ananyev, Konstantin @ 2020-09-30 0:34 ` Zhang, Qi Z [not found] ` <DM6PR11MB4362515283D00E27A793E6B0F9330@DM6PR11MB4362.namprd11.prod.outlook.com> 0 siblings, 1 reply; 94+ messages in thread From: Zhang, Qi Z @ 2020-09-30 0:34 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Wednesday, September 30, 2020 7:02 AM > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX > <stevex.yang@intel.com>; dev@dpdk.org > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, > Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, > Beilei <beilei.xing@intel.com> > Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag > cannot be received by default > > > > > > -----Original Message----- > > > From: Yang, SteveX <stevex.yang@intel.com> > > > Sent: Monday, September 28, 2020 2:56 PM > > > To: dev@dpdk.org > > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > > Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z > > > <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, > > > Beilei <beilei.xing@intel.com>; Ananyev, Konstantin > > > <konstantin.ananyev@intel.com>; Yang, SteveX <stevex.yang@intel.com> > > > Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan > > > tag cannot be received by default > > > > > > testpmd will initialize default max packet length to 1518 which > > > doesn't include vlan tag size in ether overheader. Once, send the > > > max mtu length packet with vlan tag, the max packet length will > > > exceed 1518 that will cause packets dropped directly from NIC hw side. 
> > > > > > ice can support dual vlan tags that need more 8 bytes for max packet > > > size, so, configures the correct max packet size in dev_config ops. > > > > > > Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > > > > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > > --- > > > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > > > 1 file changed, 11 insertions(+) > > > > > > diff --git a/drivers/net/ice/ice_ethdev.c > > > b/drivers/net/ice/ice_ethdev.c index > > > cfd357b05..6b7098444 100644 > > > --- a/drivers/net/ice/ice_ethdev.c > > > +++ b/drivers/net/ice/ice_ethdev.c > > > @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) > > > struct ice_adapter *ad = > > > ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > > struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > > +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > > > int ret; > > > > > > /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > > > -3157,6 > > > +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > > > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > > > dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; > > > > > > +/** > > > + * Considering QinQ packet, max frame size should be equal or > > > + * larger than total size of MTU and Ether overhead. > > > + */ > > > > > +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > > > > > > Why we need this check? > > Can we just call ice_mtu_set directly > > I think that without that check we can silently overwrite provided by user > dev_conf.rxmode.max_rx_pkt_len value. OK, I see But still have one question dev->data->mtu is initialized to 1518 as default , but if application set dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. does that mean we will still will set mtu to 1518, is this expected? Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) here? 
> > > And please remove above comment, since ether overhead is already > considered in ice_mtu_set. > > > > > > > +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return ret; } > > > + > > > ret = ice_init_rss(pf); > > > if (ret) { > > > PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > > > -- > > > 2.17.1 > > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default [not found] ` <DM6PR11MB4362515283D00E27A793E6B0F9330@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-09-30 2:32 ` Zhang, Qi Z 2020-10-14 15:38 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Zhang, Qi Z @ 2020-09-30 2:32 UTC (permalink / raw) To: Yang, SteveX, Ananyev, Konstantin, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei > -----Original Message----- > From: Yang, SteveX <stevex.yang@intel.com> > Sent: Wednesday, September 30, 2020 9:32 AM > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; dev@dpdk.org > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, > Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, > Beilei <beilei.xing@intel.com> > Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag > cannot be received by default > > > > > -----Original Message----- > > From: Zhang, Qi Z <qi.z.zhang@intel.com> > > Sent: Wednesday, September 30, 2020 8:35 AM > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX > > <stevex.yang@intel.com>; dev@dpdk.org > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing > > <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> > > Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > vlan tag cannot be received by default > > > > > > > > > -----Original Message----- > > > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > > > Sent: Wednesday, September 30, 2020 7:02 AM > > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX > > > <stevex.yang@intel.com>; dev@dpdk.org > > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > > Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing > > > <jingjing.wu@intel.com>; Xing, Beilei 
<beilei.xing@intel.com> > > > Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > > vlan tag cannot be received by default > > > > > > > > > > > > -----Original Message----- > > > > > From: Yang, SteveX <stevex.yang@intel.com> > > > > > Sent: Monday, September 28, 2020 2:56 PM > > > > > To: dev@dpdk.org > > > > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia > > > > > <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; > > > > > Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing > > > > > <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; > > > > > Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX > > > > > <stevex.yang@intel.com> > > > > > Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > > > > vlan tag cannot be received by default > > > > > > > > > > testpmd will initialize default max packet length to 1518 which > > > > > doesn't include vlan tag size in ether overheader. Once, send > > > > > the max mtu length packet with vlan tag, the max packet length > > > > > will exceed 1518 that will cause packets dropped directly from NIC hw > side. > > > > > > > > > > ice can support dual vlan tags that need more 8 bytes for max > > > > > packet size, so, configures the correct max packet size in > > > > > dev_config > > ops. 
> > > > > > > > > > Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > > > > > > > > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > > > > --- > > > > > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > > > > > 1 file changed, 11 insertions(+) > > > > > > > > > > diff --git a/drivers/net/ice/ice_ethdev.c > > > > > b/drivers/net/ice/ice_ethdev.c index > > > > > cfd357b05..6b7098444 100644 > > > > > --- a/drivers/net/ice/ice_ethdev.c > > > > > +++ b/drivers/net/ice/ice_ethdev.c > > > > > @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) > > > > > struct ice_adapter *ad = > > > > > ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > > > > struct ice_pf *pf = > > > > > ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > > > > +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > > > > > int ret; > > > > > > > > > > /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > > > > > -3157,6 > > > > > +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > > > > > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > > > > > dev->data->dev_conf.rxmode.offloads |= > > DEV_RX_OFFLOAD_RSS_HASH; > > > > > > > > > > +/** > > > > > + * Considering QinQ packet, max frame size should be equal or > > > > > + * larger than total size of MTU and Ether overhead. > > > > > + */ > > > > > > > > > +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > > > > > > > > > > > > Why we need this check? > > > > Can we just call ice_mtu_set directly > > > > > > I think that without that check we can silently overwrite provided > > > by user dev_conf.rxmode.max_rx_pkt_len value. > > > > OK, I see > > > > But still have one question > > dev->data->mtu is initialized to 1518 as default , but if application > > dev->data->set > > dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > > does that mean we will still will set mtu to 1518, is this expected? 
> > > > max_rx_pkt_len should be larger than mtu at least, so we should raise the > max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). Ok, this describe the problem more general and better to replace exist code comment and commit log for easy understanding. Please send a new version for reword > Generally, the mtu value can be adjustable from user (e.g.: ip link set ens801f0 > mtu 1400), hence, we just adjust the max_rx_pkt_len to satisfy mtu > requirement. > > > Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > > here? > ice_mtu_set(dev, mtu) will append ether overhead to > frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > parameter, or not the max_rx_pkt_len. > > > > > > > > > > > > And please remove above comment, since ether overhead is already > > > considered in ice_mtu_set. > Ether overhead is already considered in ice_mtu_set, but it also should be > considered as the adjustment condition that if ice_mtu_set need be invoked. > So, it perhaps should remain this comment before this if() condition. > > > > > > > > > > > > > > +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > > > > > +ret; } > > > > > + > > > > > ret = ice_init_rss(pf); > > > > > if (ret) { > > > > > PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > > > > > -- > > > > > 2.17.1 > > > > > > > > > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-09-30 2:32 ` Zhang, Qi Z @ 2020-10-14 15:38 ` Ferruh Yigit [not found] ` <DM6PR11MB43628BBF9DCE7CC4D7C05AD8F91E0@DM6PR11MB4362.namprd11.prod.outlook.com> 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-14 15:38 UTC (permalink / raw) To: Zhang, Qi Z, Yang, SteveX, Ananyev, Konstantin, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, ian.stokes On 9/30/2020 3:32 AM, Zhang, Qi Z wrote: > > >> -----Original Message----- >> From: Yang, SteveX <stevex.yang@intel.com> >> Sent: Wednesday, September 30, 2020 9:32 AM >> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin >> <konstantin.ananyev@intel.com>; dev@dpdk.org >> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, >> Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, >> Beilei <beilei.xing@intel.com> >> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag >> cannot be received by default >> >> >> >>> -----Original Message----- >>> From: Zhang, Qi Z <qi.z.zhang@intel.com> >>> Sent: Wednesday, September 30, 2020 8:35 AM >>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX >>> <stevex.yang@intel.com>; dev@dpdk.org >>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>> vlan tag cannot be received by default >>> >>> >>> >>>> -----Original Message----- >>>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>>> Sent: Wednesday, September 30, 2020 7:02 AM >>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>>> <stevex.yang@intel.com>; dev@dpdk.org >>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>> Yang, Qiming 
<qiming.yang@intel.com>; Wu, Jingjing >>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>> vlan tag cannot be received by default >>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Yang, SteveX <stevex.yang@intel.com> >>>>>> Sent: Monday, September 28, 2020 2:56 PM >>>>>> To: dev@dpdk.org >>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia >>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >>>>>> Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing >>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; >>>>>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, SteveX >>>>>> <stevex.yang@intel.com> >>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>> vlan tag cannot be received by default >>>>>> >>>>>> testpmd will initialize default max packet length to 1518 which >>>>>> doesn't include vlan tag size in ether overheader. Once, send >>>>>> the max mtu length packet with vlan tag, the max packet length >>>>>> will exceed 1518 that will cause packets dropped directly from NIC hw >> side. >>>>>> >>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>> packet size, so, configures the correct max packet size in >>>>>> dev_config >>> ops. 
>>>>>> >>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>> >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>> --- >>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>> 1 file changed, 11 insertions(+) >>>>>> >>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>> cfd357b05..6b7098444 100644 >>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>> struct ice_adapter *ad = >>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>> struct ice_pf *pf = >>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>> int ret; >>>>>> >>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>> -3157,6 >>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>> if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) >>>>>> dev->data->dev_conf.rxmode.offloads |= >>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>> >>>>>> +/** >>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>> + * larger than total size of MTU and Ether overhead. >>>>>> + */ >>>>> >>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>> >>>>> >>>>> Why we need this check? >>>>> Can we just call ice_mtu_set directly >>>> >>>> I think that without that check we can silently overwrite provided >>>> by user dev_conf.rxmode.max_rx_pkt_len value. >>> >>> OK, I see >>> >>> But still have one question >>> dev->data->mtu is initialized to 1518 as default , but if application >>> dev->data->set >>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>> does that mean we will still will set mtu to 1518, is this expected? >>> >> >> max_rx_pkt_len should be larger than mtu at least, so we should raise the >> max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). 
> > Ok, this describe the problem more general and better to replace exist code comment and commit log for easy understanding. > Please send a new version for reword > I didn't really get this set. Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than this size is dropped. Isn't this what should be, why we are trying to overwrite user configuration in PMD to prevent this? During eth_dev allocation, mtu set to default '1500', by ethdev layer. And testpmd sets 'max_rx_pkt_len' by default to '1518'. I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' and mean it? PMD will not honor the user config. Why not simply increase the default 'max_rx_pkt_len' in testpmd? And I guess even better what we need is to tell to the application what the frame overhead PMD accepts. So the application can set proper 'max_rx_pkt_len' value per port for a given/requested MTU value. @Ian, cc'ed, was complaining almost same thing years ago, these PMD overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps he has a solution now? And why this same thing can't happen to other PMDs? If this is a problem for all PMDs, we should solve in other level, not for only some PMDs. > >> Generally, the mtu value can be adjustable from user (e.g.: ip link set ens801f0 >> mtu 1400), hence, we just adjust the max_rx_pkt_len to satisfy mtu >> requirement. >> >>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>> here? >> ice_mtu_set(dev, mtu) will append ether overhead to >> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >> parameter, or not the max_rx_pkt_len. >> >>> >>> >>>> >>>>> And please remove above comment, since ether overhead is already >>>> considered in ice_mtu_set. >> Ether overhead is already considered in ice_mtu_set, but it also should be >> considered as the adjustment condition that if ice_mtu_set need be invoked. 
>> So, it perhaps should remain this comment before this if() condition. >> >>>>> >>>>> >>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>> +ret; } >>>>>> + >>>>>> ret = ice_init_rss(pf); >>>>>> if (ret) { >>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>> -- >>>>>> 2.17.1 >>>>> >>>> >>> >> > ^ permalink raw reply [flat|nested] 94+ messages in thread

* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default [not found] ` <DM6PR11MB43628BBF9DCE7CC4D7C05AD8F91E0@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-10-19 10:49 ` Ananyev, Konstantin 2020-10-19 13:07 ` Ferruh Yigit 2020-10-19 18:05 ` Ferruh Yigit 1 sibling, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-19 10:49 UTC (permalink / raw) To: Yang, SteveX, Yigit, Ferruh, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > > -----Original Message----- > > From: Ferruh Yigit <ferruh.yigit@intel.com> > > Sent: Wednesday, October 14, 2020 11:38 PM > > To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX > > <stevex.yang@intel.com>; Ananyev, Konstantin > > <konstantin.ananyev@intel.com>; dev@dpdk.org > > Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, > > Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; > > Xing, Beilei <beilei.xing@intel.com>; Stokes, Ian <ian.stokes@intel.com> > > Subject: Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets > > with vlan tag cannot be received by default > > > > On 9/30/2020 3:32 AM, Zhang, Qi Z wrote: > > > > > > > > >> -----Original Message----- > > >> From: Yang, SteveX <stevex.yang@intel.com> > > >> Sent: Wednesday, September 30, 2020 9:32 AM > > >> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin > > >> <konstantin.ananyev@intel.com>; dev@dpdk.org > > >> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > >> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing > > >> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> > > >> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > >> vlan tag cannot be received by default > > >> > > >> > > >> > > >>> -----Original Message----- > > >>> From: Zhang, Qi Z <qi.z.zhang@intel.com> > > >>> Sent: Wednesday, September 30, 2020 8:35 AM > > >>> To: Ananyev, 
Konstantin <konstantin.ananyev@intel.com>; Yang, > > SteveX > > >>> <stevex.yang@intel.com>; dev@dpdk.org > > >>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > >>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing > > >>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> > > >>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > >>> vlan tag cannot be received by default > > >>> > > >>> > > >>> > > >>>> -----Original Message----- > > >>>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > > >>>> Sent: Wednesday, September 30, 2020 7:02 AM > > >>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX > > >>>> <stevex.yang@intel.com>; dev@dpdk.org > > >>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; > > >>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing > > >>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> > > >>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > >>>> vlan tag cannot be received by default > > >>>> > > >>>>> > > >>>>>> -----Original Message----- > > >>>>>> From: Yang, SteveX <stevex.yang@intel.com> > > >>>>>> Sent: Monday, September 28, 2020 2:56 PM > > >>>>>> To: dev@dpdk.org > > >>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia > > >>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; > > Zhang, > > >>>>>> Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing > > >>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; > > >>>>>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, > > SteveX > > >>>>>> <stevex.yang@intel.com> > > >>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with > > >>>>>> vlan tag cannot be received by default > > >>>>>> > > >>>>>> testpmd will initialize default max packet length to 1518 which > > >>>>>> doesn't include vlan tag size in ether overheader. 
Once, send the > > >>>>>> max mtu length packet with vlan tag, the max packet length will > > >>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > > >> side. > > >>>>>> > > >>>>>> ice can support dual vlan tags that need more 8 bytes for max > > >>>>>> packet size, so, configures the correct max packet size in > > >>>>>> dev_config > > >>> ops. > > >>>>>> > > >>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > >>>>>> > > >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > >>>>>> --- > > >>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > > >>>>>> 1 file changed, 11 insertions(+) > > >>>>>> > > >>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > > >>>>>> b/drivers/net/ice/ice_ethdev.c index > > >>>>>> cfd357b05..6b7098444 100644 > > >>>>>> --- a/drivers/net/ice/ice_ethdev.c > > >>>>>> +++ b/drivers/net/ice/ice_ethdev.c > > >>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > > *dev) > > >>>>>> struct ice_adapter *ad = > > >>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > >>>>>> struct ice_pf *pf = > > >>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > >>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > > >>>>>> int ret; > > >>>>>> > > >>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > > >>>>>> -3157,6 > > >>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > > >>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > > ETH_MQ_RX_RSS_FLAG) > > >>>>>> dev->data->dev_conf.rxmode.offloads |= > > >>> DEV_RX_OFFLOAD_RSS_HASH; > > >>>>>> > > >>>>>> +/** > > >>>>>> + * Considering QinQ packet, max frame size should be equal or > > >>>>>> + * larger than total size of MTU and Ether overhead. > > >>>>>> + */ > > >>>>> > > >>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > > >>>>> > > >>>>> > > >>>>> Why we need this check? 
> > >>>>> Can we just call ice_mtu_set directly > > >>>> > > >>>> I think that without that check we can silently overwrite provided > > >>>> by user dev_conf.rxmode.max_rx_pkt_len value. > > >>> > > >>> OK, I see > > >>> > > >>> But still have one question > > >>> dev->data->mtu is initialized to 1518 as default , but if > > >>> dev->data->application set > > >>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > > >>> does that mean we will still will set mtu to 1518, is this expected? > > >>> > > >> > > >> max_rx_pkt_len should be larger than mtu at least, so we should raise > > >> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > > > > > > Ok, this describe the problem more general and better to replace exist > > code comment and commit log for easy understanding. > > > Please send a new version for reword > > > > > > > I didn't really get this set. > > > > Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > > this size is dropped. > > Sure, it is normal case for dropping oversize data. > > > Isn't this what should be, why we are trying to overwrite user configuration > > in PMD to prevent this? > > > > But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > This fix will make a decision when confliction occurred. > MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > > > During eth_dev allocation, mtu set to default '1500', by ethdev layer. > > And testpmd sets 'max_rx_pkt_len' by default to '1518'. > > I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > > and mean it? PMD will not honor the user config. > > I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? 
> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid.
>
> > Why not simply increase the default 'max_rx_pkt_len' in testpmd?
> >
> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd,
> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable.
>
> > And I guess even better what we need is to tell to the application what the
> > frame overhead PMD accepts.
> > So the application can set proper 'max_rx_pkt_len' value per port for a
> > given/requested MTU value.
> > @Ian, cc'ed, was complaining almost same thing years ago, these PMD
> > overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps
> > he has a solution now?

From my perspective the main problem here:
We have 2 different variables for nearly the same thing:
rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len,
and 2 different APIs to update them: dev_mtu_set() and dev_configure().
And inside the majority of Intel PMDs we don't keep these 2 variables in sync:
- mtu_set() will update both variables.
- dev_configure() will update only max_rx_pkt_len, but will keep mtu intact.

This patch fixes this inconsistency, which I think is a good thing.
Though yes, it introduces a change in behaviour. Let's say the code is:

rte_eth_dev_set_mtu(port, 1500);
dev_conf.max_rx_pkt_len = 1000;
rte_eth_dev_configure(port, 1, 1, &dev_conf);

Before the patch it will result in: mtu==1500, max_rx_pkt_len=1000; // out of sync, looks wrong to me
After the patch: mtu=1500, max_rx_pkt_len=1518; // in sync, change in behaviour.

If you think we need to preserve the current behaviour, then I suppose the easiest
thing would be to change the dev_configure() code to update the mtu value based on
max_rx_pkt_len.
I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} So the code snippet above will result: mtu=982,max_rx_pkt_len=1000; Konstantin > > > > > And why this same thing can't happen to other PMDs? If this is a problem for > > all PMDs, we should solve in other level, not for only some PMDs. > > > No, all PMDs exist the same issue, another proposal: > - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > - provide the uniform API for fetching the NIC's supported Ether Overhead size; > Is it feasible? > > > > > > >> Generally, the mtu value can be adjustable from user (e.g.: ip link > > >> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > > >> satisfy mtu requirement. > > >> > > >>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > > >>> here? > > >> ice_mtu_set(dev, mtu) will append ether overhead to > > >> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > > >> parameter, or not the max_rx_pkt_len. > > >> > > >>> > > >>> > > >>>> > > >>>>> And please remove above comment, since ether overhead is already > > >>>> considered in ice_mtu_set. > > >> Ether overhead is already considered in ice_mtu_set, but it also > > >> should be considered as the adjustment condition that if ice_mtu_set > > need be invoked. > > >> So, it perhaps should remain this comment before this if() condition. > > >> > > >>>>> > > >>>>> > > >>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > > >>>>>> +ret; } > > >>>>>> + > > >>>>>> ret = ice_init_rss(pf); > > >>>>>> if (ret) { > > >>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > > >>>>>> -- > > >>>>>> 2.17.1 > > >>>>> > > >>>> > > >>> > > >> > > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-19 10:49 ` Ananyev, Konstantin @ 2020-10-19 13:07 ` Ferruh Yigit 2020-10-19 14:07 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-19 13:07 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/19/2020 11:49 AM, Ananyev, Konstantin wrote: > >>> -----Original Message----- >>> From: Ferruh Yigit <ferruh.yigit@intel.com> >>> Sent: Wednesday, October 14, 2020 11:38 PM >>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>> <stevex.yang@intel.com>; Ananyev, Konstantin >>> <konstantin.ananyev@intel.com>; dev@dpdk.org >>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, >>> Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; >>> Xing, Beilei <beilei.xing@intel.com>; Stokes, Ian <ian.stokes@intel.com> >>> Subject: Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets >>> with vlan tag cannot be received by default >>> >>> On 9/30/2020 3:32 AM, Zhang, Qi Z wrote: >>>> >>>> >>>>> -----Original Message----- >>>>> From: Yang, SteveX <stevex.yang@intel.com> >>>>> Sent: Wednesday, September 30, 2020 9:32 AM >>>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin >>>>> <konstantin.ananyev@intel.com>; dev@dpdk.org >>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>> vlan tag cannot be received by default >>>>> >>>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Zhang, Qi Z <qi.z.zhang@intel.com> >>>>>> Sent: Wednesday, September 30, 2020 8:35 AM >>>>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >>> 
SteveX >>>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>> vlan tag cannot be received by default >>>>>> >>>>>> >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>>>>>> Sent: Wednesday, September 30, 2020 7:02 AM >>>>>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>>>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>>> vlan tag cannot be received by default >>>>>>> >>>>>>>> >>>>>>>>> -----Original Message----- >>>>>>>>> From: Yang, SteveX <stevex.yang@intel.com> >>>>>>>>> Sent: Monday, September 28, 2020 2:56 PM >>>>>>>>> To: dev@dpdk.org >>>>>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia >>>>>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >>> Zhang, >>>>>>>>> Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing >>>>>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; >>>>>>>>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >>> SteveX >>>>>>>>> <stevex.yang@intel.com> >>>>>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>>>>> vlan tag cannot be received by default >>>>>>>>> >>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the >>>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>>> side. 
>>>>>>>>> >>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>> dev_config >>>>>> ops. >>>>>>>>> >>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>> >>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>> --- >>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>> >>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>> *dev) >>>>>>>>> struct ice_adapter *ad = >>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>> struct ice_pf *pf = >>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>> int ret; >>>>>>>>> >>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>>> -3157,6 >>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>> >>>>>>>>> +/** >>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>> + */ >>>>>>>> >>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>>> >>>>>>>> >>>>>>>> Why we need this check? >>>>>>>> Can we just call ice_mtu_set directly >>>>>>> >>>>>>> I think that without that check we can silently overwrite provided >>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
>>>>>> >>>>>> OK, I see >>>>>> >>>>>> But still have one question >>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>> dev->data->application set >>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>> >>>>> >>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>>> >>>> Ok, this describe the problem more general and better to replace exist >>> code comment and commit log for easy understanding. >>>> Please send a new version for reword >>>> >>> >>> I didn't really get this set. >>> >>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >>> this size is dropped. >> >> Sure, it is normal case for dropping oversize data. >> >>> Isn't this what should be, why we are trying to overwrite user configuration >>> in PMD to prevent this? >>> >> >> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. >> This fix will make a decision when confliction occurred. >> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, >> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. >> >>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >>> and mean it? PMD will not honor the user config. >> >> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? >> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. >> >>> >>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? 
>>> >> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, >> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. >> >>> And I guess even better what we need is to tell to the application what the >>> frame overhead PMD accepts. >>> So the application can set proper 'max_rx_pkt_len' value per port for a >>> given/requested MTU value. >>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >>> he has a solution now? > > From my perspective the main problem here: > We have 2 different variables for nearly the same thing: > rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > and 2 different API to update them: dev_mtu_set() and dev_configure(). According to the API, 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled'. Although not sure that is practically what is done for all drivers. > And inside majority of Intel PMDs we don't keep these 2 variables in sync: > - mtu_set() will update both variables. > - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > > This patch fixes this inconsistency, which I think is a good thing. > Though yes, it introduces change in behaviour. > > Let say the code: > rte_eth_dev_set_mtu(port, 1500); > dev_conf.max_rx_pkt_len = 1000; > rte_eth_dev_configure(port, 1, 1, &dev_conf); > 'rte_eth_dev_configure()' is one of the first APIs called, it is called before 'rte_eth_dev_set_mtu()'. When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by the ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. And later, when 'rte_eth_dev_set_mtu()' is called, both MTU and 'max_rx_pkt_len' are updated (mostly). 
> Before the patch will result: > mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > > After the patch: > mtu=1500, max_rx_pkt_len=1518; // in sync, change in behaviour. > > If you think we need to preserve current behaviour, > then I suppose the easiest thing would be to change dev_config() code > to update mtu value based on max_rx_pkt_len. > I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > So the code snippet above will result: > mtu=982,max_rx_pkt_len=1000; > The 'max_rx_pkt_len' has been an annoyance for a long time, what do you think about just dropping it? By default device will be up with default MTU (1500), later 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. Will this work? And for short term, for above Intel PMDs, there must be a place where this 'max_rx_pkt_len' value is taken into account (mostly 'start()' dev_ops), that function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, otherwise use the 'MTU' value. Without 'start()' updated the current logic won't work after stop & start anyway. > Konstantin > > > > > > > > > > > > > > > > > > > > > > > > >> >>> >>> And why this same thing can't happen to other PMDs? If this is a problem for >>> all PMDs, we should solve in other level, not for only some PMDs. >>> >> No, all PMDs exist the same issue, another proposal: >> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); >> - provide the uniform API for fetching the NIC's supported Ether Overhead size; >> Is it feasible? >> >>>> >>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>>> satisfy mtu requirement. >>>>> >>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>>>>> here? 
>>>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >>>>> parameter, or not the max_rx_pkt_len. >>>>> >>>>>> >>>>>> >>>>>>> >>>>>>>> And please remove above comment, since ether overhead is already >>>>>>> considered in ice_mtu_set. >>>>> Ether overhead is already considered in ice_mtu_set, but it also >>>>> should be considered as the adjustment condition that if ice_mtu_set >>> need be invoked. >>>>> So, it perhaps should remain this comment before this if() condition. >>>>> >>>>>>>> >>>>>>>> >>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>>> +ret; } >>>>>>>>> + >>>>>>>>> ret = ice_init_rss(pf); >>>>>>>>> if (ret) { >>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>>> -- >>>>>>>>> 2.17.1 >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-19 13:07 ` Ferruh Yigit @ 2020-10-19 14:07 ` Ananyev, Konstantin 2020-10-19 14:28 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-19 14:07 UTC (permalink / raw) To: Yigit, Ferruh, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > >>>>>>>>> > >>>>>>>>> testpmd will initialize default max packet length to 1518 which > >>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the > >>>>>>>>> max mtu length packet with vlan tag, the max packet length will > >>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > >>>>> side. > >>>>>>>>> > >>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max > >>>>>>>>> packet size, so, configures the correct max packet size in > >>>>>>>>> dev_config > >>>>>> ops. > >>>>>>>>> > >>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > >>>>>>>>> > >>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>>>>> --- > >>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > >>>>>>>>> 1 file changed, 11 insertions(+) > >>>>>>>>> > >>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > >>>>>>>>> b/drivers/net/ice/ice_ethdev.c index > >>>>>>>>> cfd357b05..6b7098444 100644 > >>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c > >>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c > >>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > >>> *dev) > >>>>>>>>> struct ice_adapter *ad = > >>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >>>>>>>>> struct ice_pf *pf = > >>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > >>>>>>>>> int ret; > >>>>>>>>> > >>>>>>>>> /* Initialize to TRUE. 
If any of Rx queues doesn't meet the @@ > >>>>>>>>> -3157,6 > >>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > >>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > >>> ETH_MQ_RX_RSS_FLAG) > >>>>>>>>> dev->data->dev_conf.rxmode.offloads |= > >>>>>> DEV_RX_OFFLOAD_RSS_HASH; > >>>>>>>>> > >>>>>>>>> +/** > >>>>>>>>> + * Considering QinQ packet, max frame size should be equal or > >>>>>>>>> + * larger than total size of MTU and Ether overhead. > >>>>>>>>> + */ > >>>>>>>> > >>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > >>>>>>>> > >>>>>>>> > >>>>>>>> Why we need this check? > >>>>>>>> Can we just call ice_mtu_set directly > >>>>>>> > >>>>>>> I think that without that check we can silently overwrite provided > >>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. > >>>>>> > >>>>>> OK, I see > >>>>>> > >>>>>> But still have one question > >>>>>> dev->data->mtu is initialized to 1518 as default , but if > >>>>>> dev->data->application set > >>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > >>>>>> does that mean we will still will set mtu to 1518, is this expected? > >>>>>> > >>>>> > >>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise > >>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > >>>> > >>>> Ok, this describe the problem more general and better to replace exist > >>> code comment and commit log for easy understanding. > >>>> Please send a new version for reword > >>>> > >>> > >>> I didn't really get this set. > >>> > >>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > >>> this size is dropped. > >> > >> Sure, it is normal case for dropping oversize data. > >> > >>> Isn't this what should be, why we are trying to overwrite user configuration > >>> in PMD to prevent this? > >>> > >> > >> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. 
> >> This fix will make a decision when confliction occurred. > >> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > >> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > >> > >>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. > >>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. > >>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > >>> and mean it? PMD will not honor the user config. > >> > >> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? > >> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. > >> > >>> > >>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? > >>> > >> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > >> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > >> > >>> And I guess even better what we need is to tell to the application what the > >>> frame overhead PMD accepts. > >>> So the application can set proper 'max_rx_pkt_len' value per port for a > >>> given/requested MTU value. > >>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD > >>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps > >>> he has a solution now? > > > > From my perspective the main problem here: > > We have 2 different variables for nearly the same thing: > > rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > > and 2 different API to update them: dev_mtu_set() and dev_configure(). > > According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' > Although not sure that is practically what is done for all drivers. I think most of Intel PMDs use it unconditionally. 
> > And inside majority of Intel PMDs we don't keep these 2 variables in sync: > > - mtu_set() will update both variables. > > - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > > > > This patch fixes this inconsistency, which I think is a good thing. > > Though yes, it introduces change in behaviour. > > > > Let say the code: > > rte_eth_dev_set_mtu(port, 1500); > > dev_conf.max_rx_pkt_len = 1000; > > rte_eth_dev_configure(port, 1, 1, &dev_conf); > > > > 'rte_eth_dev_configure()' is one of the first APIs called, it is called before > 'rte_eth_dev_set_mtu(). Usually yes. But you can still do it later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); > > > > When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by > ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. See above. > PMD doesn't know where this MTU value came from (default ethdev value or user specified value) > and probably it shouldn't care. > > > > And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' > are updated (mostly). Yes, in mtu_set() we update both. But we don't update MTU in dev_configure(), only max_rx_pkt_len. That's what this patch tries to fix (as I understand it). > > > > Before the patch will result: > > mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > > > > After the patch: > > mtu=1500, max_rx_pkt_len=1518; // in sync, change in behaviour. > > > > If you think we need to preserve current behaviour, > > then I suppose the easiest thing would be to change dev_config() code > > to update mtu value based on max_rx_pkt_len. > > I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > > So the code snippet above will result: > > mtu=982,max_rx_pkt_len=1000; > > > > The 'max_rx_pkt_len' is annoyance for a long time, what do you think to just > drop it? 
> > By default device will be up with default MTU (1500), later > 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. > > Will this work? I think it might, but that's a big change, probably too risky at that stage... > > > > > > And for short term, for above Intel PMDs, there must be a place this > 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that > function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, > otherwise use the 'MTU' value. Even if we'll use max_rx_pkt_len only when JUMBO_FRAME is set, > I think we still need to keep max_rx_pkt_len and MTU values in sync. > > > > Without 'start()' updated the current logic won't work after stop & start anyway. > > > > > > > > > >> > >>> > >>> And why this same thing can't happen to other PMDs? If this is a problem for > >>> all PMDs, we should solve in other level, not for only some PMDs. > >>> > >> No, all PMDs exist the same issue, another proposal: > >> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > >> - provide the uniform API for fetching the NIC's supported Ether Overhead size; > >> Is it feasible? > >> > >>>> > >>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link > >>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > >>>>> satisfy mtu requirement. > >>>>> > >>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > >>>>>> here? > >>>>> ice_mtu_set(dev, mtu) will append ether overhead to > >>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > >>>>> parameter, or not the max_rx_pkt_len. > >>>>> > >>>>>> > >>>>>> > >>>>>>> > >>>>>>>> And please remove above comment, since ether overhead is already > >>>>>>> considered in ice_mtu_set. > >>>>> Ether overhead is already considered in ice_mtu_set, but it also > >>>>> should be considered as the adjustment condition that if ice_mtu_set > >>> need be invoked. 
> >>>>> So, it perhaps should remain this comment before this if() condition. > >>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > >>>>>>>>> +ret; } > >>>>>>>>> + > >>>>>>>>> ret = ice_init_rss(pf); > >>>>>>>>> if (ret) { > >>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > >>>>>>>>> -- > >>>>>>>>> 2.17.1 > >>>>>>>> > >>>>>>> > >>>>>> > >>>>> > >>>> > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-19 14:07 ` Ananyev, Konstantin @ 2020-10-19 14:28 ` Ananyev, Konstantin 2020-10-19 18:01 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-19 14:28 UTC (permalink / raw) To: Ananyev, Konstantin, Yigit, Ferruh, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > > > >>>>>>>>> > > >>>>>>>>> testpmd will initialize default max packet length to 1518 which > > >>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the > > >>>>>>>>> max mtu length packet with vlan tag, the max packet length will > > >>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > > >>>>> side. > > >>>>>>>>> > > >>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max > > >>>>>>>>> packet size, so, configures the correct max packet size in > > >>>>>>>>> dev_config > > >>>>>> ops. 
> > >>>>>>>>> > > >>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > >>>>>>>>> > > >>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > >>>>>>>>> --- > > >>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > > >>>>>>>>> 1 file changed, 11 insertions(+) > > >>>>>>>>> > > >>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > > >>>>>>>>> b/drivers/net/ice/ice_ethdev.c index > > >>>>>>>>> cfd357b05..6b7098444 100644 > > >>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c > > >>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c > > >>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > > >>> *dev) > > >>>>>>>>> struct ice_adapter *ad = > > >>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > > >>>>>>>>> struct ice_pf *pf = > > >>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > > >>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > > >>>>>>>>> int ret; > > >>>>>>>>> > > >>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > > >>>>>>>>> -3157,6 > > >>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > > >>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > > >>> ETH_MQ_RX_RSS_FLAG) > > >>>>>>>>> dev->data->dev_conf.rxmode.offloads |= > > >>>>>> DEV_RX_OFFLOAD_RSS_HASH; > > >>>>>>>>> > > >>>>>>>>> +/** > > >>>>>>>>> + * Considering QinQ packet, max frame size should be equal or > > >>>>>>>>> + * larger than total size of MTU and Ether overhead. > > >>>>>>>>> + */ > > >>>>>>>> > > >>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > > >>>>>>>> > > >>>>>>>> > > >>>>>>>> Why we need this check? > > >>>>>>>> Can we just call ice_mtu_set directly > > >>>>>>> > > >>>>>>> I think that without that check we can silently overwrite provided > > >>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
> > >>>>>> > > >>>>>> OK, I see > > >>>>>> > > >>>>>> But still have one question > > >>>>>> dev->data->mtu is initialized to 1518 as default , but if > > >>>>>> dev->data->application set > > >>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > > >>>>>> does that mean we will still will set mtu to 1518, is this expected? > > >>>>>> > > >>>>> > > >>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise > > >>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > > >>>> > > >>>> Ok, this describe the problem more general and better to replace exist > > >>> code comment and commit log for easy understanding. > > >>>> Please send a new version for reword > > >>>> > > >>> > > >>> I didn't really get this set. > > >>> > > >>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > > >>> this size is dropped. > > >> > > >> Sure, it is normal case for dropping oversize data. > > >> > > >>> Isn't this what should be, why we are trying to overwrite user configuration > > >>> in PMD to prevent this? > > >>> > > >> > > >> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > > >> This fix will make a decision when confliction occurred. > > >> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > > >> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > > >> > > >>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. > > >>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. > > >>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > > >>> and mean it? PMD will not honor the user config. > > >> > > >> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? > > >> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. 
> > >> > > >>> > > >>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? > > >>> > > >> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > > >> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > > >> > > >>> And I guess even better what we need is to tell to the application what the > > >>> frame overhead PMD accepts. > > >>> So the application can set proper 'max_rx_pkt_len' value per port for a > > >>> given/requested MTU value. > > >>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD > > >>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps > > >>> he has a solution now? > > > > > > From my perspective the main problem here: > > > We have 2 different variables for nearly the same thing: > > > rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > > > and 2 different API to update them: dev_mtu_set() and dev_configure(). > > > > According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' > > Although not sure that is practically what is done for all drivers. > > I think most of Intel PMDs use it unconditionally. > > > > > > And inside majority of Intel PMDs we don't keep these 2 variables in sync: > > > - mtu_set() will update both variables. > > > - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > > > > > > This patch fixes this inconsistency, which I think is a good thing. > > > Though yes, it introduces change in behaviour. > > > > > > Let say the code: > > > rte_eth_dev_set_mtu(port, 1500); > > > dev_conf.max_rx_pkt_len = 1000; > > > rte_eth_dev_configure(port, 1, 1, &dev_conf); > > > > > > > 'rte_eth_dev_configure()' is one of the first APIs called, it is called before > > 'rte_eth_dev_set_mtu(). > > Usually yes. 
> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); > > > > > When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by > > ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. > > See above. > PMD doesn't know where this MTU value came from (default ethdev value or user specified value) > and probably it shouldn't care. > > > > > And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' > > are updated (mostly). > > Yes, in mtu_set() we update both. > But we don't update MTU in dev_configure(), only max_rx_pkt_len. > That what this patch tries to fix (as I understand it). To be more precise - it doesn't change MTU value in dev_configure(), but instead doesn't allow max_rx_pkt_len to become smaller than MTU + OVERHEAD. Probably changing MTU value instead is a better choice. > > > > > > > Before the patch will result: > > > mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > > > > > > After the patch: > > > mtu=1500, max_rx_pkt_len=1518; // in sync, change in behaviour. > > > > > > If you think we need to preserve current behaviour, > > > then I suppose the easiest thing would be to change dev_config() code > > > to update mtu value based on max_rx_pkt_len. > > > I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > > > So the code snippet above will result: > > > mtu=982,max_rx_pkt_len=1000; > > > > > > > The 'max_rx_pkt_len' is annoyance for a long time, what do you think to just > > drop it? > > > > By default device will be up with default MTU (1500), later > > 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. > > > > Will this work? > > I think it might, but that's a big change, probably too risky at that stage... 
> > > > > > > > And for short term, for above Intel PMDs, there must be a place this > > 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that > > function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, > > otherwise use the 'MTU' value. > > Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, > I think we still need to keep max_rx_pkt_len and MTU values in sync. > > > > > Without 'start()' updated the current logic won't work after stop & start anyway. > > > > > > > > > > > > > > > >> > > >>> > > >>> And why this same thing can't happen to other PMDs? If this is a problem for > > >>> all PMDs, we should solve in other level, not for only some PMDs. > > >>> > > >> No, all PMDs exist the same issue, another proposal: > > >> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > > >> - provide the uniform API for fetching the NIC's supported Ether Overhead size; > > >> Is it feasible? > > >> > > >>>> > > >>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link > > >>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > > >>>>> satisfy mtu requirement. > > >>>>> > > >>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > > >>>>>> here? > > >>>>> ice_mtu_set(dev, mtu) will append ether overhead to > > >>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > > >>>>> parameter, or not the max_rx_pkt_len. > > >>>>> > > >>>>>> > > >>>>>> > > >>>>>>> > > >>>>>>>> And please remove above comment, since ether overhead is already > > >>>>>>> considered in ice_mtu_set. > > >>>>> Ether overhead is already considered in ice_mtu_set, but it also > > >>>>> should be considered as the adjustment condition that if ice_mtu_set > > >>> need be invoked. > > >>>>> So, it perhaps should remain this comment before this if() condition. 
> > >>>>> > > >>>>>>>> > > >>>>>>>> > > >>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > > >>>>>>>>> +ret; } > > >>>>>>>>> + > > >>>>>>>>> ret = ice_init_rss(pf); > > >>>>>>>>> if (ret) { > > >>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > > >>>>>>>>> -- > > >>>>>>>>> 2.17.1 > > >>>>>>>> > > >>>>>>> > > >>>>>> > > >>>>> > > >>>> > > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-19 14:28 ` Ananyev, Konstantin @ 2020-10-19 18:01 ` Ferruh Yigit 2020-10-20 9:07 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-19 18:01 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/19/2020 3:28 PM, Ananyev, Konstantin wrote: > >> >>>>>>>>>>>> >>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the >>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>>>>>> side. >>>>>>>>>>>> >>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>>>>> dev_config >>>>>>>>> ops. 
>>>>>>>>>>>> >>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>>>>> >>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>> --- >>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>>>>> >>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>>>>> *dev) >>>>>>>>>>>> struct ice_adapter *ad = >>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>>>>> struct ice_pf *pf = >>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>>>>> int ret; >>>>>>>>>>>> >>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>>>>>> -3157,6 >>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>>>>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>>>>> >>>>>>>>>>>> +/** >>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>>>>> + */ >>>>>>>>>>> >>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Why we need this check? >>>>>>>>>>> Can we just call ice_mtu_set directly >>>>>>>>>> >>>>>>>>>> I think that without that check we can silently overwrite provided >>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
>>>>>>>>> >>>>>>>>> OK, I see >>>>>>>>> >>>>>>>>> But still have one question >>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>>>>> dev->data->application set >>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>>>>> >>>>>>>> >>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>>>>>> >>>>>>> Ok, this describe the problem more general and better to replace exist >>>>>> code comment and commit log for easy understanding. >>>>>>> Please send a new version for reword >>>>>>> >>>>>> >>>>>> I didn't really get this set. >>>>>> >>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >>>>>> this size is dropped. >>>>> >>>>> Sure, it is normal case for dropping oversize data. >>>>> >>>>>> Isn't this what should be, why we are trying to overwrite user configuration >>>>>> in PMD to prevent this? >>>>>> >>>>> >>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. >>>>> This fix will make a decision when confliction occurred. >>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, >>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. >>>>> >>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >>>>>> and mean it? PMD will not honor the user config. >>>>> >>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? >>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. 
>>>>> >>>>>> >>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? >>>>>> >>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, >>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. >>>>> >>>>>> And I guess even better what we need is to tell to the application what the >>>>>> frame overhead PMD accepts. >>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a >>>>>> given/requested MTU value. >>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >>>>>> he has a solution now? >>>> >>>> From my perspective the main problem here: >>>> We have 2 different variables for nearly the same thing: >>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. >>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). >>> >>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' >>> Although not sure that is practically what is done for all drivers. >> >> I think most of Intel PMDs use it unconditionally. >> >>> >>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: >>>> - mtu_set() will update both variables. >>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. >>>> >>>> This patch fixes this inconsistency, which I think is a good thing. >>>> Though yes, it introduces change in behaviour. >>>> >>>> Let say the code: >>>> rte_eth_dev_set_mtu(port, 1500); >>>> dev_conf.max_rx_pkt_len = 1000; >>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); >>>> >>> >>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before >>> 'rte_eth_dev_set_mtu(). >> >> Usually yes. 
>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); >> >>> >>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by >>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. >> >> See above. >> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) >> and probably it shouldn't care. >> >>> >>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' >>> are updated (mostly). >> >> Yes, in mtu_set() we update both. >> But we don't update MTU in dev_configure(), only max_rx_pkt_len. >> That what this patch tries to fix (as I understand it). > > To be more precise - it doesn't change MTU value in dev_configure(), > but instead doesn't allow max_rx_pkt_len to become smaller > then MTU + OVERHEAD. > Probably changing MTU value instead is a better choice. > +1 to change mtu for this case. And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. But this won't solve the problem Steve is trying to solve. >>> >>> >>>> Before the patch will result: >>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me >>>> >>>> After the patch: >>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. >>>> >>>> If you think we need to preserve current behaviour, >>>> then I suppose the easiest thing would be to change dev_config() code >>>> to update mtu value based on max_rx_pkt_len. >>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} >>>> So the code snippet above will result: >>>> mtu=982,max_rx_pkt_len=1000; >>>> >>> >>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just >>> drop it? >>> >>> By default device will be up with default MTU (1500), later >>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. >>> >>> Will this work? 
>> >> I think it might, but that's a big change, probably too risky at that stage... >> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see what happens. >> >>> >>> >>> And for short term, for above Intel PMDs, there must be a place this >>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that >>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, >>> otherwise use the 'MTU' value. >> >> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, >> I think we still need to keep max_rx_pkt_len and MTU values in sync. >> >>> >>> Without 'start()' updated the current logic won't work after stop & start anyway. >>> >>> >>>> >>>> >>>> >>>>> >>>>>> >>>>>> And why this same thing can't happen to other PMDs? If this is a problem for >>>>>> all PMDs, we should solve in other level, not for only some PMDs. >>>>>> >>>>> No, all PMDs exist the same issue, another proposal: >>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); >>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; >>>>> Is it feasible? >>>>> >>>>>>> >>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>>>>>> satisfy mtu requirement. >>>>>>>> >>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>>>>>>>> here? >>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >>>>>>>> parameter, or not the max_rx_pkt_len. >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>>> And please remove above comment, since ether overhead is already >>>>>>>>>> considered in ice_mtu_set. >>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also >>>>>>>> should be considered as the adjustment condition that if ice_mtu_set >>>>>> need be invoked. 
>>>>>>>> So, it perhaps should remain this comment before this if() condition. >>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>>>>>> +ret; } >>>>>>>>>>>> + >>>>>>>>>>>> ret = ice_init_rss(pf); >>>>>>>>>>>> if (ret) { >>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>>>>>> -- >>>>>>>>>>>> 2.17.1 >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
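[Editor's note] The check the patch adds to `ice_dev_configure()` (the `frame_size > max_rx_pkt_len` test quoted throughout the message above) can be modeled outside the driver. This is a minimal illustrative sketch, not the actual PMD code: `ETH_OVERHEAD` stands in for a per-driver macro such as `ICE_ETH_OVERHEAD`, and the helper mimics the side effect of calling `ice_mtu_set()` when MTU plus overhead no longer fits the configured frame size.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a per-driver overhead macro such as ICE_ETH_OVERHEAD;
 * 18 bytes = Ether header (14) + CRC (4), the generic value used in
 * the thread's arithmetic. Drivers supporting QinQ add more. */
#define ETH_OVERHEAD 18

/* Model of the dev_configure() fix: if MTU plus Ether overhead does
 * not fit in the configured max_rx_pkt_len, grow max_rx_pkt_len
 * (what invoking mtu_set() effectively does); otherwise keep the
 * user-provided value untouched. */
static uint32_t
adjust_max_rx_pkt_len(uint16_t mtu, uint32_t max_rx_pkt_len)
{
	uint32_t frame_size = (uint32_t)mtu + ETH_OVERHEAD;

	if (frame_size > max_rx_pkt_len)
		return frame_size;
	return max_rx_pkt_len;
}
```

With testpmd's defaults (MTU 1500, max_rx_pkt_len 1518) nothing changes; with the conflicting "MTU 1500, max_rx_pkt_len 1000" configuration debated above, the frame size is silently raised to 1518 — exactly the behaviour change under discussion.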
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-19 18:01 ` Ferruh Yigit @ 2020-10-20 9:07 ` Ananyev, Konstantin 2020-10-20 12:29 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-20 9:07 UTC (permalink / raw) To: Yigit, Ferruh, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > >>>>>>>>>>>> > >>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which > >>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the > >>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will > >>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > >>>>>>>> side. > >>>>>>>>>>>> > >>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max > >>>>>>>>>>>> packet size, so, configures the correct max packet size in > >>>>>>>>>>>> dev_config > >>>>>>>>> ops. 
> >>>>>>>>>>>> > >>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > >>>>>>>>>>>> > >>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>>>>>>>> --- > >>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > >>>>>>>>>>>> 1 file changed, 11 insertions(+) > >>>>>>>>>>>> > >>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index > >>>>>>>>>>>> cfd357b05..6b7098444 100644 > >>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > >>>>>> *dev) > >>>>>>>>>>>> struct ice_adapter *ad = > >>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >>>>>>>>>>>> struct ice_pf *pf = > >>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > >>>>>>>>>>>> int ret; > >>>>>>>>>>>> > >>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > >>>>>>>>>>>> -3157,6 > >>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > >>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > >>>>>> ETH_MQ_RX_RSS_FLAG) > >>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= > >>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; > >>>>>>>>>>>> > >>>>>>>>>>>> +/** > >>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or > >>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. > >>>>>>>>>>>> + */ > >>>>>>>>>>> > >>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> Why we need this check? > >>>>>>>>>>> Can we just call ice_mtu_set directly > >>>>>>>>>> > >>>>>>>>>> I think that without that check we can silently overwrite provided > >>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
> >>>>>>>>> > >>>>>>>>> OK, I see > >>>>>>>>> > >>>>>>>>> But still have one question > >>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if > >>>>>>>>> dev->data->application set > >>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > >>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? > >>>>>>>>> > >>>>>>>> > >>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise > >>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > >>>>>>> > >>>>>>> Ok, this describe the problem more general and better to replace exist > >>>>>> code comment and commit log for easy understanding. > >>>>>>> Please send a new version for reword > >>>>>>> > >>>>>> > >>>>>> I didn't really get this set. > >>>>>> > >>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > >>>>>> this size is dropped. > >>>>> > >>>>> Sure, it is normal case for dropping oversize data. > >>>>> > >>>>>> Isn't this what should be, why we are trying to overwrite user configuration > >>>>>> in PMD to prevent this? > >>>>>> > >>>>> > >>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > >>>>> This fix will make a decision when confliction occurred. > >>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > >>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > >>>>> > >>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. > >>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. > >>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > >>>>>> and mean it? PMD will not honor the user config. > >>>>> > >>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? 
> >>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. > >>>>> > >>>>>> > >>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? > >>>>>> > >>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > >>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > >>>>> > >>>>>> And I guess even better what we need is to tell to the application what the > >>>>>> frame overhead PMD accepts. > >>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a > >>>>>> given/requested MTU value. > >>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD > >>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps > >>>>>> he has a solution now? > >>>> > >>>> From my perspective the main problem here: > >>>> We have 2 different variables for nearly the same thing: > >>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > >>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). > >>> > >>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' > >>> Although not sure that is practically what is done for all drivers. > >> > >> I think most of Intel PMDs use it unconditionally. > >> > >>> > >>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: > >>>> - mtu_set() will update both variables. > >>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > >>>> > >>>> This patch fixes this inconsistency, which I think is a good thing. > >>>> Though yes, it introduces change in behaviour. 
> >>>> > >>>> Let say the code: > >>>> rte_eth_dev_set_mtu(port, 1500); > >>>> dev_conf.max_rx_pkt_len = 1000; > >>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); > >>>> > >>> > >>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before > >>> 'rte_eth_dev_set_mtu(). > >> > >> Usually yes. > >> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); > >> > >>> > >>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by > >>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. > >> > >> See above. > >> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) > >> and probably it shouldn't care. > >> > >>> > >>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' > >>> are updated (mostly). > >> > >> Yes, in mtu_set() we update both. > >> But we don't update MTU in dev_configure(), only max_rx_pkt_len. > >> That what this patch tries to fix (as I understand it). > > > > To be more precise - it doesn't change MTU value in dev_configure(), > > but instead doesn't allow max_rx_pkt_len to become smaller > > then MTU + OVERHEAD. > > Probably changing MTU value instead is a better choice. > > > > +1 to change mtu for this case. > And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' > call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. Hmm, I don't see that happens within Intel PMDs. As I can read the code: if user never call mtu_set(), then MTU value is left intact. > But this won't solve the problem Steve is trying to solve. You mean we still need to update test-pmd code to calculate max_rx_pkt_len properly for default mtu value? > >>> > >>> > >>>> Before the patch will result: > >>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > >>>> > >>>> After the patch: > >>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. 
> >>>> > >>>> If you think we need to preserve current behaviour, > >>>> then I suppose the easiest thing would be to change dev_config() code > >>>> to update mtu value based on max_rx_pkt_len. > >>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > >>>> So the code snippet above will result: > >>>> mtu=982,max_rx_pkt_len=1000; > >>>> > >>> > >>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just > >>> drop it? > >>> > >>> By default device will be up with default MTU (1500), later > >>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. > >>> > >>> Will this work? > >> > >> I think it might, but that's a big change, probably too risky at that stage... > >> > > Defintely, I was thinking for 21.11. Let me send a deprecation notice and see > what happens. > > >> > >>> > >>> > >>> And for short term, for above Intel PMDs, there must be a place this > >>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that > >>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, > >>> otherwise use the 'MTU' value. > >> > >> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, > >> I think we still need to keep max_rx_pkt_len and MTU values in sync. > >> > >>> > >>> Without 'start()' updated the current logic won't work after stop & start anyway. > >>> > >>> > >>>> > >>>> > >>>> > >>>>> > >>>>>> > >>>>>> And why this same thing can't happen to other PMDs? If this is a problem for > >>>>>> all PMDs, we should solve in other level, not for only some PMDs. > >>>>>> > >>>>> No, all PMDs exist the same issue, another proposal: > >>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > >>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; > >>>>> Is it feasible? 
> >>>>> > >>>>>>> > >>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link > >>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > >>>>>>>> satisfy mtu requirement. > >>>>>>>> > >>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > >>>>>>>>> here? > >>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to > >>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > >>>>>>>> parameter, or not the max_rx_pkt_len. > >>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>>> > >>>>>>>>>>> And please remove above comment, since ether overhead is already > >>>>>>>>>> considered in ice_mtu_set. > >>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also > >>>>>>>> should be considered as the adjustment condition that if ice_mtu_set > >>>>>> need be invoked. > >>>>>>>> So, it perhaps should remain this comment before this if() condition. > >>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > >>>>>>>>>>>> +ret; } > >>>>>>>>>>>> + > >>>>>>>>>>>> ret = ice_init_rss(pf); > >>>>>>>>>>>> if (ret) { > >>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > >>>>>>>>>>>> -- > >>>>>>>>>>>> 2.17.1 > >>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>> > >>>>>>>> > >>>>>>> > >>>> > > ^ permalink raw reply [flat|nested] 94+ messages in thread
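[Editor's note] Konstantin's closing question in the message above — whether testpmd should calculate `max_rx_pkt_len` properly from the default MTU instead of hard-coding 1518 — reduces to a one-line helper. This is only a sketch of that idea; the helper name is hypothetical, and in practice the overhead value would have to come from the port (the PMD overhead macros mentioned earlier in the thread).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical testpmd-side fix: derive the default max frame size
 * from the default MTU (1500) plus the port's Ether overhead, rather
 * than hard-coding 1518, which only accounts for an 18-byte overhead
 * (no VLAN tags). */
static uint32_t
default_max_rx_pkt_len(uint16_t mtu, uint16_t overhead)
{
	return (uint32_t)mtu + overhead;
}
```

An 18-byte overhead reproduces today's 1518 default; a QinQ-capable driver reporting 8 extra bytes for two VLAN tags would get a correspondingly larger default, so a full-MTU packet carrying VLAN tags is no longer dropped.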
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-20 9:07 ` Ananyev, Konstantin @ 2020-10-20 12:29 ` Ferruh Yigit 2020-10-21 9:47 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-20 12:29 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/20/2020 10:07 AM, Ananyev, Konstantin wrote: > >>>>>>>>>>>>>> >>>>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the >>>>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>>>>>>>> side. >>>>>>>>>>>>>> >>>>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>>>>>>> dev_config >>>>>>>>>>> ops. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>>>>>>> >>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>>>> --- >>>>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>>>>>>> >>>>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>>>>>>> *dev) >>>>>>>>>>>>>> struct ice_adapter *ad = >>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>>>>>>> struct ice_pf *pf = >>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>>>>>>> int ret; >>>>>>>>>>>>>> >>>>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>>>>>>>> -3157,6 >>>>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>>>>>>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>>>>>>> >>>>>>>>>>>>>> +/** >>>>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>>>>>>> + */ >>>>>>>>>>>>> >>>>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Why we need this check? >>>>>>>>>>>>> Can we just call ice_mtu_set directly >>>>>>>>>>>> >>>>>>>>>>>> I think that without that check we can silently overwrite provided >>>>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
>>>>>>>>>>> >>>>>>>>>>> OK, I see >>>>>>>>>>> >>>>>>>>>>> But still have one question >>>>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>>>>>>> dev->data->application set >>>>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>>>>>>>> >>>>>>>>> Ok, this describe the problem more general and better to replace exist >>>>>>>> code comment and commit log for easy understanding. >>>>>>>>> Please send a new version for reword >>>>>>>>> >>>>>>>> >>>>>>>> I didn't really get this set. >>>>>>>> >>>>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >>>>>>>> this size is dropped. >>>>>>> >>>>>>> Sure, it is normal case for dropping oversize data. >>>>>>> >>>>>>>> Isn't this what should be, why we are trying to overwrite user configuration >>>>>>>> in PMD to prevent this? >>>>>>>> >>>>>>> >>>>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. >>>>>>> This fix will make a decision when confliction occurred. >>>>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, >>>>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. >>>>>>> >>>>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >>>>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>>>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >>>>>>>> and mean it? PMD will not honor the user config. >>>>>>> >>>>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? 
>>>>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. >>>>>>> >>>>>>>> >>>>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? >>>>>>>> >>>>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, >>>>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. >>>>>>> >>>>>>>> And I guess even better what we need is to tell to the application what the >>>>>>>> frame overhead PMD accepts. >>>>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a >>>>>>>> given/requested MTU value. >>>>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>>>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >>>>>>>> he has a solution now? >>>>>> >>>>>> From my perspective the main problem here: >>>>>> We have 2 different variables for nearly the same thing: >>>>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. >>>>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). >>>>> >>>>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' >>>>> Although not sure that is practically what is done for all drivers. >>>> >>>> I think most of Intel PMDs use it unconditionally. >>>> >>>>> >>>>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: >>>>>> - mtu_set() will update both variables. >>>>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. >>>>>> >>>>>> This patch fixes this inconsistency, which I think is a good thing. >>>>>> Though yes, it introduces change in behaviour. 
>>>>>>
>>>>>> Let say the code:
>>>>>> rte_eth_dev_set_mtu(port, 1500);
>>>>>> dev_conf.max_rx_pkt_len = 1000;
>>>>>> rte_eth_dev_configure(port, 1, 1, &dev_conf);
>>>>>>
>>>>>
>>>>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before
>>>>> 'rte_eth_dev_set_mtu().
>>>>
>>>> Usually yes.
>>>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start();
>>>>
>>>>>
>>>>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by
>>>>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is.
>>>>
>>>> See above.
>>>> PMD doesn't know where this MTU value came from (default ethdev value or user specified value)
>>>> and probably it shouldn't care.
>>>>
>>>>>
>>>>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len'
>>>>> are updated (mostly).
>>>>
>>>> Yes, in mtu_set() we update both.
>>>> But we don't update MTU in dev_configure(), only max_rx_pkt_len.
>>>> That what this patch tries to fix (as I understand it).
>>>
>>> To be more precise - it doesn't change MTU value in dev_configure(),
>>> but instead doesn't allow max_rx_pkt_len to become smaller
>>> then MTU + OVERHEAD.
>>> Probably changing MTU value instead is a better choice.
>>>
>>
>> +1 to change mtu for this case.
>> And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()'
>> call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU.
>
> Hmm, I don't see that happens within Intel PMDs.
> As I can read the code: if user never call mtu_set(), then MTU value is left intact.
>

I was checking ice,
in 'ice_dev_start()', 'rxmode.max_rx_pkt_len' is used to configure the device.

>> But this won't solve the problem Steve is trying to solve.
>
> You mean we still need to update test-pmd code to calculate max_rx_pkt_len
> properly for default mtu value?
>

Yes.
Because target of this set is able to receive packets with payload size
'RTE_ETHER_MTU', if MTU is updated according to the provided 'max_rx_pkt_len',
device still won't able to receive those packets.

>>>>>
>>>>>
>>>>>> Before the patch will result:
>>>>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me
>>>>>>
>>>>>> After the patch:
>>>>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour.
>>>>>>
>>>>>> If you think we need to preserve current behaviour,
>>>>>> then I suppose the easiest thing would be to change dev_config() code
>>>>>> to update mtu value based on max_rx_pkt_len.
>>>>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...}
>>>>>> So the code snippet above will result:
>>>>>> mtu=982,max_rx_pkt_len=1000;
>>>>>>
>>>>>
>>>>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just
>>>>> drop it?
>>>>>
>>>>> By default device will be up with default MTU (1500), later
>>>>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all.
>>>>>
>>>>> Will this work?
>>>>
>>>> I think it might, but that's a big change, probably too risky at that stage...
>>>>
>>
>> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see
>> what happens.
>>
>>>>
>>>>>
>>>>>
>>>>> And for short term, for above Intel PMDs, there must be a place this
>>>>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that
>>>>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set,
>>>>> otherwise use the 'MTU' value.
>>>>
>>>> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set,
>>>> I think we still need to keep max_rx_pkt_len and MTU values in sync.
>>>>
>>>>>
>>>>> Without 'start()' updated the current logic won't work after stop & start anyway.
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> And why this same thing can't happen to other PMDs? If this is a problem for
>>>>>>>> all PMDs, we should solve in other level, not for only some PMDs.
>>>>>>>>
>>>>>>> No, all PMDs exist the same issue, another proposal:
>>>>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure();
>>>>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size;
>>>>>>> Is it feasible?
>>>>>>>
>>>>>>>>>
>>>>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link
>>>>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to
>>>>>>>>>> satisfy mtu requirement.
>>>>>>>>>>
>>>>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len)
>>>>>>>>>>> here?
>>>>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to
>>>>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd
>>>>>>>>>> parameter, or not the max_rx_pkt_len.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> And please remove above comment, since ether overhead is already
>>>>>>>>>>>> considered in ice_mtu_set.
>>>>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also
>>>>>>>>>> should be considered as the adjustment condition that if ice_mtu_set
>>>>>>>> need be invoked.
>>>>>>>>>> So, it perhaps should remain this comment before this if() condition.
>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return
>>>>>>>>>>>>>> +ret; }
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> ret = ice_init_rss(pf);
>>>>>>>>>>>>>> if (ret) {
>>>>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> 2.17.1
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>
>

^ permalink raw reply	[flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-20 12:29 ` Ferruh Yigit @ 2020-10-21 9:47 ` Ananyev, Konstantin 2020-10-21 10:36 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-21 9:47 UTC (permalink / raw) To: Yigit, Ferruh, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > > On 10/20/2020 10:07 AM, Ananyev, Konstantin wrote: > > > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which > >>>>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the > >>>>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will > >>>>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > >>>>>>>>>> side. > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max > >>>>>>>>>>>>>> packet size, so, configures the correct max packet size in > >>>>>>>>>>>>>> dev_config > >>>>>>>>>>> ops. 
> >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>>>>>>>>>> --- > >>>>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > >>>>>>>>>>>>>> 1 file changed, 11 insertions(+) > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index > >>>>>>>>>>>>>> cfd357b05..6b7098444 100644 > >>>>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > >>>>>>>> *dev) > >>>>>>>>>>>>>> struct ice_adapter *ad = > >>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >>>>>>>>>>>>>> struct ice_pf *pf = > >>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >>>>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > >>>>>>>>>>>>>> int ret; > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > >>>>>>>>>>>>>> -3157,6 > >>>>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > >>>>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > >>>>>>>> ETH_MQ_RX_RSS_FLAG) > >>>>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= > >>>>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> +/** > >>>>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or > >>>>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. > >>>>>>>>>>>>>> + */ > >>>>>>>>>>>>> > >>>>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> Why we need this check? > >>>>>>>>>>>>> Can we just call ice_mtu_set directly > >>>>>>>>>>>> > >>>>>>>>>>>> I think that without that check we can silently overwrite provided > >>>>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
> >>>>>>>>>>> > >>>>>>>>>>> OK, I see > >>>>>>>>>>> > >>>>>>>>>>> But still have one question > >>>>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if > >>>>>>>>>>> dev->data->application set > >>>>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > >>>>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? > >>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise > >>>>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > >>>>>>>>> > >>>>>>>>> Ok, this describe the problem more general and better to replace exist > >>>>>>>> code comment and commit log for easy understanding. > >>>>>>>>> Please send a new version for reword > >>>>>>>>> > >>>>>>>> > >>>>>>>> I didn't really get this set. > >>>>>>>> > >>>>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > >>>>>>>> this size is dropped. > >>>>>>> > >>>>>>> Sure, it is normal case for dropping oversize data. > >>>>>>> > >>>>>>>> Isn't this what should be, why we are trying to overwrite user configuration > >>>>>>>> in PMD to prevent this? > >>>>>>>> > >>>>>>> > >>>>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > >>>>>>> This fix will make a decision when confliction occurred. > >>>>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > >>>>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > >>>>>>> > >>>>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. > >>>>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. > >>>>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > >>>>>>>> and mean it? PMD will not honor the user config. 
> >>>>>>> > >>>>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? > >>>>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. > >>>>>>> > >>>>>>>> > >>>>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? > >>>>>>>> > >>>>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > >>>>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > >>>>>>> > >>>>>>>> And I guess even better what we need is to tell to the application what the > >>>>>>>> frame overhead PMD accepts. > >>>>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a > >>>>>>>> given/requested MTU value. > >>>>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD > >>>>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps > >>>>>>>> he has a solution now? > >>>>>> > >>>>>> From my perspective the main problem here: > >>>>>> We have 2 different variables for nearly the same thing: > >>>>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > >>>>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). > >>>>> > >>>>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' > >>>>> Although not sure that is practically what is done for all drivers. > >>>> > >>>> I think most of Intel PMDs use it unconditionally. > >>>> > >>>>> > >>>>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: > >>>>>> - mtu_set() will update both variables. > >>>>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > >>>>>> > >>>>>> This patch fixes this inconsistency, which I think is a good thing. > >>>>>> Though yes, it introduces change in behaviour. 
> >>>>>> > >>>>>> Let say the code: > >>>>>> rte_eth_dev_set_mtu(port, 1500); > >>>>>> dev_conf.max_rx_pkt_len = 1000; > >>>>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); > >>>>>> > >>>>> > >>>>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before > >>>>> 'rte_eth_dev_set_mtu(). > >>>> > >>>> Usually yes. > >>>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); > >>>> > >>>>> > >>>>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by > >>>>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. > >>>> > >>>> See above. > >>>> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) > >>>> and probably it shouldn't care. > >>>> > >>>>> > >>>>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' > >>>>> are updated (mostly). > >>>> > >>>> Yes, in mtu_set() we update both. > >>>> But we don't update MTU in dev_configure(), only max_rx_pkt_len. > >>>> That what this patch tries to fix (as I understand it). > >>> > >>> To be more precise - it doesn't change MTU value in dev_configure(), > >>> but instead doesn't allow max_rx_pkt_len to become smaller > >>> then MTU + OVERHEAD. > >>> Probably changing MTU value instead is a better choice. > >>> > >> > >> +1 to change mtu for this case. > >> And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' > >> call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. > > > > Hmm, I don't see that happens within Intel PMDs. > > As I can read the code: if user never call mtu_set(), then MTU value is left intact. > > > > I was checking ice, > in 'ice_dev_start()', 'rxmode.max_rx_pkt_len' is used to configure the device. Yes, I am not arguing with that. What I am saying - dev_config() doesn't update MTU based on max_rx_pkt_len. While it probably should. > > >> But this won't solve the problem Steve is trying to solve. 
> > > > You mean we still need to update test-pmd code to calculate max_rx_pkt_len > > properly for default mtu value? > > > > Yes. > Because target of this set is able to receive packets with payload size > 'RTE_ETHER_MTU', if MTU is updated according to the provided 'max_rx_pkt_len', > device still won't able to receive those packets. Agree. > > >>>>> > >>>>> > >>>>>> Before the patch will result: > >>>>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > >>>>>> > >>>>>> After the patch: > >>>>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. > >>>>>> > >>>>>> If you think we need to preserve current behaviour, > >>>>>> then I suppose the easiest thing would be to change dev_config() code > >>>>>> to update mtu value based on max_rx_pkt_len. > >>>>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > >>>>>> So the code snippet above will result: > >>>>>> mtu=982,max_rx_pkt_len=1000; > >>>>>> > >>>>> > >>>>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just > >>>>> drop it? > >>>>> > >>>>> By default device will be up with default MTU (1500), later > >>>>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. > >>>>> > >>>>> Will this work? > >>>> > >>>> I think it might, but that's a big change, probably too risky at that stage... > >>>> > >> > >> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see > >> what happens. > >> > >>>> > >>>>> > >>>>> > >>>>> And for short term, for above Intel PMDs, there must be a place this > >>>>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that > >>>>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, > >>>>> otherwise use the 'MTU' value. > >>>> > >>>> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, > >>>> I think we still need to keep max_rx_pkt_len and MTU values in sync. 
> >>>> > >>>>> > >>>>> Without 'start()' updated the current logic won't work after stop & start anyway. > >>>>> > >>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>>> > >>>>>>>> > >>>>>>>> And why this same thing can't happen to other PMDs? If this is a problem for > >>>>>>>> all PMDs, we should solve in other level, not for only some PMDs. > >>>>>>>> > >>>>>>> No, all PMDs exist the same issue, another proposal: > >>>>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > >>>>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; > >>>>>>> Is it feasible? > >>>>>>> > >>>>>>>>> > >>>>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link > >>>>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > >>>>>>>>>> satisfy mtu requirement. > >>>>>>>>>> > >>>>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > >>>>>>>>>>> here? > >>>>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to > >>>>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > >>>>>>>>>> parameter, or not the max_rx_pkt_len. > >>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>>> And please remove above comment, since ether overhead is already > >>>>>>>>>>>> considered in ice_mtu_set. > >>>>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also > >>>>>>>>>> should be considered as the adjustment condition that if ice_mtu_set > >>>>>>>> need be invoked. > >>>>>>>>>> So, it perhaps should remain this comment before this if() condition. 
> >>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > >>>>>>>>>>>>>> +ret; } > >>>>>>>>>>>>>> + > >>>>>>>>>>>>>> ret = ice_init_rss(pf); > >>>>>>>>>>>>>> if (ret) { > >>>>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > >>>>>>>>>>>>>> -- > >>>>>>>>>>>>>> 2.17.1 > >>>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>> > >>>>>> > >>> > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-21 9:47 ` Ananyev, Konstantin @ 2020-10-21 10:36 ` Ferruh Yigit 2020-10-21 10:44 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-21 10:36 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/21/2020 10:47 AM, Ananyev, Konstantin wrote: > > >> >> On 10/20/2020 10:07 AM, Ananyev, Konstantin wrote: >>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the >>>>>>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>>>>>>>>>> side. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>>>>>>>>> dev_config >>>>>>>>>>>>> ops. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>>>>>>>>> *dev) >>>>>>>>>>>>>>>> struct ice_adapter *ad = >>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>>>>>>>>> struct ice_pf *pf = >>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>>>>>>>>> int ret; >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>>>>>>>>>> -3157,6 >>>>>>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>>>>>>>>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> +/** >>>>>>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>>>>>>>>> + */ >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Why we need this check? >>>>>>>>>>>>>>> Can we just call ice_mtu_set directly >>>>>>>>>>>>>> >>>>>>>>>>>>>> I think that without that check we can silently overwrite provided >>>>>>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
>>>>>>>>>>>>> >>>>>>>>>>>>> OK, I see >>>>>>>>>>>>> >>>>>>>>>>>>> But still have one question >>>>>>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>>>>>>>>> dev->data->application set >>>>>>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>>>>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>>>>>>>>>> >>>>>>>>>>> Ok, this describe the problem more general and better to replace exist >>>>>>>>>> code comment and commit log for easy understanding. >>>>>>>>>>> Please send a new version for reword >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I didn't really get this set. >>>>>>>>>> >>>>>>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >>>>>>>>>> this size is dropped. >>>>>>>>> >>>>>>>>> Sure, it is normal case for dropping oversize data. >>>>>>>>> >>>>>>>>>> Isn't this what should be, why we are trying to overwrite user configuration >>>>>>>>>> in PMD to prevent this? >>>>>>>>>> >>>>>>>>> >>>>>>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. >>>>>>>>> This fix will make a decision when confliction occurred. >>>>>>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, >>>>>>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. >>>>>>>>> >>>>>>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >>>>>>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>>>>>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >>>>>>>>>> and mean it? PMD will not honor the user config. 
>>>>>>>>> >>>>>>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? >>>>>>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? >>>>>>>>>> >>>>>>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, >>>>>>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. >>>>>>>>> >>>>>>>>>> And I guess even better what we need is to tell to the application what the >>>>>>>>>> frame overhead PMD accepts. >>>>>>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a >>>>>>>>>> given/requested MTU value. >>>>>>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>>>>>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >>>>>>>>>> he has a solution now? >>>>>>>> >>>>>>>> From my perspective the main problem here: >>>>>>>> We have 2 different variables for nearly the same thing: >>>>>>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. >>>>>>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). >>>>>>> >>>>>>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' >>>>>>> Although not sure that is practically what is done for all drivers. >>>>>> >>>>>> I think most of Intel PMDs use it unconditionally. >>>>>> >>>>>>> >>>>>>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: >>>>>>>> - mtu_set() will update both variables. >>>>>>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. >>>>>>>> >>>>>>>> This patch fixes this inconsistency, which I think is a good thing. >>>>>>>> Though yes, it introduces change in behaviour. 
>>>>>>>> >>>>>>>> Let say the code: >>>>>>>> rte_eth_dev_set_mtu(port, 1500); >>>>>>>> dev_conf.max_rx_pkt_len = 1000; >>>>>>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); >>>>>>>> >>>>>>> >>>>>>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before >>>>>>> 'rte_eth_dev_set_mtu(). >>>>>> >>>>>> Usually yes. >>>>>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); >>>>>> >>>>>>> >>>>>>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by >>>>>>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. >>>>>> >>>>>> See above. >>>>>> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) >>>>>> and probably it shouldn't care. >>>>>> >>>>>>> >>>>>>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' >>>>>>> are updated (mostly). >>>>>> >>>>>> Yes, in mtu_set() we update both. >>>>>> But we don't update MTU in dev_configure(), only max_rx_pkt_len. >>>>>> That what this patch tries to fix (as I understand it). >>>>> >>>>> To be more precise - it doesn't change MTU value in dev_configure(), >>>>> but instead doesn't allow max_rx_pkt_len to become smaller >>>>> then MTU + OVERHEAD. >>>>> Probably changing MTU value instead is a better choice. >>>>> >>>> >>>> +1 to change mtu for this case. >>>> And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' >>>> call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. >>> >>> Hmm, I don't see that happens within Intel PMDs. >>> As I can read the code: if user never call mtu_set(), then MTU value is left intact. >>> >> >> I was checking ice, >> in 'ice_dev_start()', 'rxmode.max_rx_pkt_len' is used to configure the device. > > Yes, I am not arguing with that. > What I am saying - dev_config() doesn't update MTU based on max_rx_pkt_len. > While it probably should. 
> Yes 'dev_configure()' doesn't update the 'dev->data->mtu' and 'max_rx_pkt_len' & 'dev->data->mtu' may diverge there. I think best place to update 'dev->data->mtu' is where the device is actually updated, but to prevent the diversion above we can update 'dev->data->mtu' in ethdev layer, in 'rte_eth_dev_configure()' based on 'max_rx_pkt_len', will it work? Only concern I see is if user reads the MTU ('rte_eth_dev_get_mtu()') after 'rte_eth_dev_configure()' but before device configured, user will get the wrong value, I guess that problem was already there but changing default value may make it more visible. >> >>>> But this won't solve the problem Steve is trying to solve. >>> >>> You mean we still need to update test-pmd code to calculate max_rx_pkt_len >>> properly for default mtu value? >>> >> >> Yes. >> Because target of this set is able to receive packets with payload size >> 'RTE_ETHER_MTU', if MTU is updated according to the provided 'max_rx_pkt_len', >> device still won't able to receive those packets. > > Agree. > >> >>>>>>> >>>>>>> >>>>>>>> Before the patch will result: >>>>>>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me >>>>>>>> >>>>>>>> After the patch: >>>>>>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. >>>>>>>> >>>>>>>> If you think we need to preserve current behaviour, >>>>>>>> then I suppose the easiest thing would be to change dev_config() code >>>>>>>> to update mtu value based on max_rx_pkt_len. >>>>>>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} >>>>>>>> So the code snippet above will result: >>>>>>>> mtu=982,max_rx_pkt_len=1000; >>>>>>>> >>>>>>> >>>>>>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just >>>>>>> drop it? >>>>>>> >>>>>>> By default device will be up with default MTU (1500), later >>>>>>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. >>>>>>> >>>>>>> Will this work? 
>>>>>> >>>>>> I think it might, but that's a big change, probably too risky at that stage... >>>>>> >>>> >>>> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see >>>> what happens. >>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> And for short term, for above Intel PMDs, there must be a place this >>>>>>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that >>>>>>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, >>>>>>> otherwise use the 'MTU' value. >>>>>> >>>>>> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, >>>>>> I think we still need to keep max_rx_pkt_len and MTU values in sync. >>>>>> >>>>>>> >>>>>>> Without 'start()' updated the current logic won't work after stop & start anyway. >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> And why this same thing can't happen to other PMDs? If this is a problem for >>>>>>>>>> all PMDs, we should solve in other level, not for only some PMDs. >>>>>>>>>> >>>>>>>>> No, all PMDs exist the same issue, another proposal: >>>>>>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); >>>>>>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; >>>>>>>>> Is it feasible? >>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>>>>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>>>>>>>>>> satisfy mtu requirement. >>>>>>>>>>>> >>>>>>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>>>>>>>>>>>> here? >>>>>>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>>>>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >>>>>>>>>>>> parameter, or not the max_rx_pkt_len. 
>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>>> And please remove above comment, since ether overhead is already >>>>>>>>>>>>>> considered in ice_mtu_set. >>>>>>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also >>>>>>>>>>>> should be considered as the adjustment condition that if ice_mtu_set >>>>>>>>>> need be invoked. >>>>>>>>>>>> So, it perhaps should remain this comment before this if() condition. >>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>>>>>>>>>> +ret; } >>>>>>>>>>>>>>>> + >>>>>>>>>>>>>>>> ret = ice_init_rss(pf); >>>>>>>>>>>>>>>> if (ret) { >>>>>>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> 2.17.1 >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>> >>>>> >>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-21 10:36 ` Ferruh Yigit @ 2020-10-21 10:44 ` Ananyev, Konstantin 2020-10-21 10:53 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-21 10:44 UTC (permalink / raw) To: Yigit, Ferruh, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian > >> On 10/20/2020 10:07 AM, Ananyev, Konstantin wrote: > >>> > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which > >>>>>>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the > >>>>>>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will > >>>>>>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw > >>>>>>>>>>>> side. > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max > >>>>>>>>>>>>>>>> packet size, so, configures the correct max packet size in > >>>>>>>>>>>>>>>> dev_config > >>>>>>>>>>>>> ops. 
> >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>>>>>>>>>>>> --- > >>>>>>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > >>>>>>>>>>>>>>>> 1 file changed, 11 insertions(+) > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index > >>>>>>>>>>>>>>>> cfd357b05..6b7098444 100644 > >>>>>>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c > >>>>>>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev > >>>>>>>>>> *dev) > >>>>>>>>>>>>>>>> struct ice_adapter *ad = > >>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > >>>>>>>>>>>>>>>> struct ice_pf *pf = > >>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > >>>>>>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > >>>>>>>>>>>>>>>> int ret; > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ > >>>>>>>>>>>>>>>> -3157,6 > >>>>>>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > >>>>>>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & > >>>>>>>>>> ETH_MQ_RX_RSS_FLAG) > >>>>>>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= > >>>>>>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> +/** > >>>>>>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or > >>>>>>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. > >>>>>>>>>>>>>>>> + */ > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> Why we need this check? 
> >>>>>>>>>>>>>>> Can we just call ice_mtu_set directly > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> I think that without that check we can silently overwrite provided > >>>>>>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. > >>>>>>>>>>>>> > >>>>>>>>>>>>> OK, I see > >>>>>>>>>>>>> > >>>>>>>>>>>>> But still have one question > >>>>>>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if > >>>>>>>>>>>>> dev->data->application set > >>>>>>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. > >>>>>>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? > >>>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise > >>>>>>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). > >>>>>>>>>>> > >>>>>>>>>>> Ok, this describe the problem more general and better to replace exist > >>>>>>>>>> code comment and commit log for easy understanding. > >>>>>>>>>>> Please send a new version for reword > >>>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> I didn't really get this set. > >>>>>>>>>> > >>>>>>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than > >>>>>>>>>> this size is dropped. > >>>>>>>>> > >>>>>>>>> Sure, it is normal case for dropping oversize data. > >>>>>>>>> > >>>>>>>>>> Isn't this what should be, why we are trying to overwrite user configuration > >>>>>>>>>> in PMD to prevent this? > >>>>>>>>>> > >>>>>>>>> > >>>>>>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > >>>>>>>>> This fix will make a decision when confliction occurred. > >>>>>>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > >>>>>>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > >>>>>>>>> > >>>>>>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. 
> >>>>>>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. > >>>>>>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' > >>>>>>>>>> and mean it? PMD will not honor the user config. > >>>>>>>>> > >>>>>>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? > >>>>>>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. > >>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? > >>>>>>>>>> > >>>>>>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > >>>>>>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > >>>>>>>>> > >>>>>>>>>> And I guess even better what we need is to tell to the application what the > >>>>>>>>>> frame overhead PMD accepts. > >>>>>>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a > >>>>>>>>>> given/requested MTU value. > >>>>>>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD > >>>>>>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps > >>>>>>>>>> he has a solution now? > >>>>>>>> > >>>>>>>> From my perspective the main problem here: > >>>>>>>> We have 2 different variables for nearly the same thing: > >>>>>>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. > >>>>>>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). > >>>>>>> > >>>>>>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' > >>>>>>> Although not sure that is practically what is done for all drivers. > >>>>>> > >>>>>> I think most of Intel PMDs use it unconditionally. > >>>>>> > >>>>>>> > >>>>>>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: > >>>>>>>> - mtu_set() will update both variables. 
> >>>>>>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. > >>>>>>>> > >>>>>>>> This patch fixes this inconsistency, which I think is a good thing. > >>>>>>>> Though yes, it introduces change in behaviour. > >>>>>>>> > >>>>>>>> Let say the code: > >>>>>>>> rte_eth_dev_set_mtu(port, 1500); > >>>>>>>> dev_conf.max_rx_pkt_len = 1000; > >>>>>>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); > >>>>>>>> > >>>>>>> > >>>>>>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before > >>>>>>> 'rte_eth_dev_set_mtu(). > >>>>>> > >>>>>> Usually yes. > >>>>>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); > >>>>>> > >>>>>>> > >>>>>>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by > >>>>>>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. > >>>>>> > >>>>>> See above. > >>>>>> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) > >>>>>> and probably it shouldn't care. > >>>>>> > >>>>>>> > >>>>>>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' > >>>>>>> are updated (mostly). > >>>>>> > >>>>>> Yes, in mtu_set() we update both. > >>>>>> But we don't update MTU in dev_configure(), only max_rx_pkt_len. > >>>>>> That what this patch tries to fix (as I understand it). > >>>>> > >>>>> To be more precise - it doesn't change MTU value in dev_configure(), > >>>>> but instead doesn't allow max_rx_pkt_len to become smaller > >>>>> then MTU + OVERHEAD. > >>>>> Probably changing MTU value instead is a better choice. > >>>>> > >>>> > >>>> +1 to change mtu for this case. > >>>> And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' > >>>> call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. > >>> > >>> Hmm, I don't see that happens within Intel PMDs. 
> >>> As I can read the code: if user never call mtu_set(), then MTU value is left intact. > >>> > >> > >> I was checking ice, > >> in 'ice_dev_start()', 'rxmode.max_rx_pkt_len' is used to configure the device. > > > > Yes, I am not arguing with that. > > What I am saying - dev_config() doesn't update MTU based on max_rx_pkt_len. > > While it probably should. > > > > Yes 'dev_configure()' doesn't update the 'dev->data->mtu' and 'max_rx_pkt_len' & > 'dev->data->mtu' may diverge there. > > I think best place to update 'dev->data->mtu' is where the device is actually > updated, but to prevent the diversion above we can update 'dev->data->mtu' in > ethdev layer, in 'rte_eth_dev_configure()' based on 'max_rx_pkt_len', will it work? I think - yes. At least, I don't foresee any implications with that. > > Only concern I see is if user reads the MTU ('rte_eth_dev_get_mtu()') after > 'rte_eth_dev_configure()' but before device configured, user will get the wrong > value, I guess that problem was already there but changing default value may > make it more visible. > > >> > >>>> But this won't solve the problem Steve is trying to solve. > >>> > >>> You mean we still need to update test-pmd code to calculate max_rx_pkt_len > >>> properly for default mtu value? > >>> > >> > >> Yes. > >> Because target of this set is able to receive packets with payload size > >> 'RTE_ETHER_MTU', if MTU is updated according to the provided 'max_rx_pkt_len', > >> device still won't able to receive those packets. > > > > Agree. > > > >> > >>>>>>> > >>>>>>> > >>>>>>>> Before the patch will result: > >>>>>>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me > >>>>>>>> > >>>>>>>> After the patch: > >>>>>>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. > >>>>>>>> > >>>>>>>> If you think we need to preserve current behaviour, > >>>>>>>> then I suppose the easiest thing would be to change dev_config() code > >>>>>>>> to update mtu value based on max_rx_pkt_len. 
> >>>>>>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} > >>>>>>>> So the code snippet above will result: > >>>>>>>> mtu=982,max_rx_pkt_len=1000; > >>>>>>>> > >>>>>>> > >>>>>>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just > >>>>>>> drop it? > >>>>>>> > >>>>>>> By default device will be up with default MTU (1500), later > >>>>>>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. > >>>>>>> > >>>>>>> Will this work? > >>>>>> > >>>>>> I think it might, but that's a big change, probably too risky at that stage... > >>>>>> > >>>> > >>>> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see > >>>> what happens. > >>>> > >>>>>> > >>>>>>> > >>>>>>> > >>>>>>> And for short term, for above Intel PMDs, there must be a place this > >>>>>>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that > >>>>>>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, > >>>>>>> otherwise use the 'MTU' value. > >>>>>> > >>>>>> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, > >>>>>> I think we still need to keep max_rx_pkt_len and MTU values in sync. > >>>>>> > >>>>>>> > >>>>>>> Without 'start()' updated the current logic won't work after stop & start anyway. > >>>>>>> > >>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> And why this same thing can't happen to other PMDs? If this is a problem for > >>>>>>>>>> all PMDs, we should solve in other level, not for only some PMDs. > >>>>>>>>>> > >>>>>>>>> No, all PMDs exist the same issue, another proposal: > >>>>>>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > >>>>>>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; > >>>>>>>>> Is it feasible? 
> >>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link > >>>>>>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to > >>>>>>>>>>>> satisfy mtu requirement. > >>>>>>>>>>>> > >>>>>>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) > >>>>>>>>>>>>> here? > >>>>>>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to > >>>>>>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd > >>>>>>>>>>>> parameter, or not the max_rx_pkt_len. > >>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>>> And please remove above comment, since ether overhead is already > >>>>>>>>>>>>>> considered in ice_mtu_set. > >>>>>>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also > >>>>>>>>>>>> should be considered as the adjustment condition that if ice_mtu_set > >>>>>>>>>> need be invoked. > >>>>>>>>>>>> So, it perhaps should remain this comment before this if() condition. > >>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return > >>>>>>>>>>>>>>>> +ret; } > >>>>>>>>>>>>>>>> + > >>>>>>>>>>>>>>>> ret = ice_init_rss(pf); > >>>>>>>>>>>>>>>> if (ret) { > >>>>>>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > >>>>>>>>>>>>>>>> -- > >>>>>>>>>>>>>>>> 2.17.1 > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>> > >>>>> > >>> > > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-21 10:44 ` Ananyev, Konstantin @ 2020-10-21 10:53 ` Ferruh Yigit 0 siblings, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-10-21 10:53 UTC (permalink / raw) To: Ananyev, Konstantin, Yang, SteveX, Zhang, Qi Z, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/21/2020 11:44 AM, Ananyev, Konstantin wrote: >>>> On 10/20/2020 10:07 AM, Ananyev, Konstantin wrote: >>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>>>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send the >>>>>>>>>>>>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>>>>>>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>>>>>>>>>>>> side. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>>>>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>>>>>>>>>>> dev_config >>>>>>>>>>>>>>> ops. 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>>>>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>>>>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>>>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>>>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>>>>>>>>>>> *dev) >>>>>>>>>>>>>>>>>> struct ice_adapter *ad = >>>>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>>>>>>>>>>> struct ice_pf *pf = >>>>>>>>>>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>>>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>>>>>>>>>>> int ret; >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>>>>>>>>>>>> -3157,6 >>>>>>>>>>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>>>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>>>>>>>>>>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>>>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>>>>>>>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> +/** >>>>>>>>>>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>>>>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>>>>>>>>>>> + */ >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Why we need this check? 
>>>>>>>>>>>>>>>>> Can we just call ice_mtu_set directly >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I think that without that check we can silently overwrite provided >>>>>>>>>>>>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> OK, I see >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> But still have one question >>>>>>>>>>>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>>>>>>>>>>> dev->data->application set >>>>>>>>>>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>>>>>>>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>>>>>>>>>>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>>>>>>>>>>>> >>>>>>>>>>>>> Ok, this describe the problem more general and better to replace exist >>>>>>>>>>>> code comment and commit log for easy understanding. >>>>>>>>>>>>> Please send a new version for reword >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I didn't really get this set. >>>>>>>>>>>> >>>>>>>>>>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >>>>>>>>>>>> this size is dropped. >>>>>>>>>>> >>>>>>>>>>> Sure, it is normal case for dropping oversize data. >>>>>>>>>>> >>>>>>>>>>>> Isn't this what should be, why we are trying to overwrite user configuration >>>>>>>>>>>> in PMD to prevent this? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. >>>>>>>>>>> This fix will make a decision when confliction occurred. >>>>>>>>>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, >>>>>>>>>>> so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. >>>>>>>>>>> >>>>>>>>>>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. 
>>>>>>>>>>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>>>>>>>>>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >>>>>>>>>>>> and mean it? PMD will not honor the user config. >>>>>>>>>>> >>>>>>>>>>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? >>>>>>>>>>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? >>>>>>>>>>>> >>>>>>>>>>> The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, >>>>>>>>>>> But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. >>>>>>>>>>> >>>>>>>>>>>> And I guess even better what we need is to tell to the application what the >>>>>>>>>>>> frame overhead PMD accepts. >>>>>>>>>>>> So the application can set proper 'max_rx_pkt_len' value per port for a >>>>>>>>>>>> given/requested MTU value. >>>>>>>>>>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>>>>>>>>>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >>>>>>>>>>>> he has a solution now? >>>>>>>>>> >>>>>>>>>> From my perspective the main problem here: >>>>>>>>>> We have 2 different variables for nearly the same thing: >>>>>>>>>> rte_eth_dev_data.mtu and rte_eth_dev_data.dev_conf.max_rx_pkt_len. >>>>>>>>>> and 2 different API to update them: dev_mtu_set() and dev_configure(). >>>>>>>>> >>>>>>>>> According API 'max_rx_pkt_len' is 'Only used if JUMBO_FRAME enabled' >>>>>>>>> Although not sure that is practically what is done for all drivers. >>>>>>>> >>>>>>>> I think most of Intel PMDs use it unconditionally. >>>>>>>> >>>>>>>>> >>>>>>>>>> And inside majority of Intel PMDs we don't keep these 2 variables in sync: >>>>>>>>>> - mtu_set() will update both variables. 
>>>>>>>>>> - dev_configure() will update only max_rx_pkt_len, but will keep mtu intact. >>>>>>>>>> >>>>>>>>>> This patch fixes this inconsistency, which I think is a good thing. >>>>>>>>>> Though yes, it introduces change in behaviour. >>>>>>>>>> >>>>>>>>>> Let say the code: >>>>>>>>>> rte_eth_dev_set_mtu(port, 1500); >>>>>>>>>> dev_conf.max_rx_pkt_len = 1000; >>>>>>>>>> rte_eth_dev_configure(port, 1, 1, &dev_conf); >>>>>>>>>> >>>>>>>>> >>>>>>>>> 'rte_eth_dev_configure()' is one of the first APIs called, it is called before >>>>>>>>> 'rte_eth_dev_set_mtu(). >>>>>>>> >>>>>>>> Usually yes. >>>>>>>> But you can still do sometimes later: dev_mtu_set(); ...; dev_stop(); dev_configure(); dev_start(); >>>>>>>> >>>>>>>>> >>>>>>>>> When 'rte_eth_dev_configure()' is called, MTU is set to '1500' by default by >>>>>>>>> ethdev layer, so it is not user configuration, but 'max_rx_pkt_len' is. >>>>>>>> >>>>>>>> See above. >>>>>>>> PMD doesn't know where this MTU value came from (default ethdev value or user specified value) >>>>>>>> and probably it shouldn't care. >>>>>>>> >>>>>>>>> >>>>>>>>> And later, when 'rte_eth_dev_set_mtu()' is called, but MTU and 'max_rx_pkt_len' >>>>>>>>> are updated (mostly). >>>>>>>> >>>>>>>> Yes, in mtu_set() we update both. >>>>>>>> But we don't update MTU in dev_configure(), only max_rx_pkt_len. >>>>>>>> That what this patch tries to fix (as I understand it). >>>>>>> >>>>>>> To be more precise - it doesn't change MTU value in dev_configure(), >>>>>>> but instead doesn't allow max_rx_pkt_len to become smaller >>>>>>> then MTU + OVERHEAD. >>>>>>> Probably changing MTU value instead is a better choice. >>>>>>> >>>>>> >>>>>> +1 to change mtu for this case. >>>>>> And this is what happens in practice when there is no 'rte_eth_dev_set_mtu()' >>>>>> call, since PMD is using ('max_rx_pkt_len' - OVERHEAD) to set MTU. >>>>> >>>>> Hmm, I don't see that happens within Intel PMDs. 
>>>>> As I can read the code: if user never call mtu_set(), then MTU value is left intact. >>>>> >>>> >>>> I was checking ice, >>>> in 'ice_dev_start()', 'rxmode.max_rx_pkt_len' is used to configure the device. >>> >>> Yes, I am not arguing with that. >>> What I am saying - dev_config() doesn't update MTU based on max_rx_pkt_len. >>> While it probably should. >>> >> >> Yes 'dev_configure()' doesn't update the 'dev->data->mtu' and 'max_rx_pkt_len' & >> 'dev->data->mtu' may diverge there. >> >> I think best place to update 'dev->data->mtu' is where the device is actually >> updated, but to prevent the diversion above we can update 'dev->data->mtu' in >> ethdev layer, in 'rte_eth_dev_configure()' based on 'max_rx_pkt_len', will it work? > > I think - yes. > At least, I don't foresee any implications with that. > Thanks. @Steve, I think there is agreement on two patches: 1- Update testpmd to take overhead account instead of setting 'max_rx_pkt_len' to 1518 blindly. 2- In 'rte_eth_dev_configure()' update 'dev->data->mtu' based on 'max_rx_pkt_len', again taking overhead into the account. Would you mind updating the new version as above? Thanks, ferruh >> >> Only concern I see is if user reads the MTU ('rte_eth_dev_get_mtu()') after >> 'rte_eth_dev_configure()' but before device configured, user will get the wrong >> value, I guess that problem was already there but changing default value may >> make it more visible. >> >>>> >>>>>> But this won't solve the problem Steve is trying to solve. >>>>> >>>>> You mean we still need to update test-pmd code to calculate max_rx_pkt_len >>>>> properly for default mtu value? >>>>> >>>> >>>> Yes. >>>> Because target of this set is able to receive packets with payload size >>>> 'RTE_ETHER_MTU', if MTU is updated according to the provided 'max_rx_pkt_len', >>>> device still won't able to receive those packets. >>> >>> Agree. 
>>> >>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> Before the patch will result: >>>>>>>>>> mtu==1500, max_rx_pkt_len=1000; //out of sync looks wrong to me >>>>>>>>>> >>>>>>>>>> After the patch: >>>>>>>>>> mtu=1500, max_rx_ptk_len=1518; // in sync, change in behaviour. >>>>>>>>>> >>>>>>>>>> If you think we need to preserve current behaviour, >>>>>>>>>> then I suppose the easiest thing would be to change dev_config() code >>>>>>>>>> to update mtu value based on max_rx_pkt_len. >>>>>>>>>> I.E: dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...} >>>>>>>>>> So the code snippet above will result: >>>>>>>>>> mtu=982,max_rx_pkt_len=1000; >>>>>>>>>> >>>>>>>>> >>>>>>>>> The 'max_rx_ptk_len' is annoyance for a long time, what do you think to just >>>>>>>>> drop it? >>>>>>>>> >>>>>>>>> By default device will be up with default MTU (1500), later >>>>>>>>> 'rte_eth_dev_set_mtu' can be used to set the MTU, no frame size setting at all. >>>>>>>>> >>>>>>>>> Will this work? >>>>>>>> >>>>>>>> I think it might, but that's a big change, probably too risky at that stage... >>>>>>>> >>>>>> >>>>>> Defintely, I was thinking for 21.11. Let me send a deprecation notice and see >>>>>> what happens. >>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> And for short term, for above Intel PMDs, there must be a place this >>>>>>>>> 'max_rx_pkt_len' value taken into account (mostly 'start()' dev_ops), that >>>>>>>>> function can be updated to take 'max_rx_pkt_len' only if JUMBO_FRAME set, >>>>>>>>> otherwise use the 'MTU' value. >>>>>>>> >>>>>>>> Even if we'll use max_rx_pkt_len only when if JUMBO_FRAME is set, >>>>>>>> I think we still need to keep max_rx_pkt_len and MTU values in sync. >>>>>>>> >>>>>>>>> >>>>>>>>> Without 'start()' updated the current logic won't work after stop & start anyway. >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> And why this same thing can't happen to other PMDs? 
If this is a problem for >>>>>>>>>>>> all PMDs, we should solve in other level, not for only some PMDs. >>>>>>>>>>>> >>>>>>>>>>> No, all PMDs exist the same issue, another proposal: >>>>>>>>>>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); >>>>>>>>>>> - provide the uniform API for fetching the NIC's supported Ether Overhead size; >>>>>>>>>>> Is it feasible? >>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>>>>>>>>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>>>>>>>>>>>> satisfy mtu requirement. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>>>>>>>>>>>>>> here? >>>>>>>>>>>>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>>>>>>>>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >>>>>>>>>>>>>> parameter, or not the max_rx_pkt_len. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> And please remove above comment, since ether overhead is already >>>>>>>>>>>>>>>> considered in ice_mtu_set. >>>>>>>>>>>>>> Ether overhead is already considered in ice_mtu_set, but it also >>>>>>>>>>>>>> should be considered as the adjustment condition that if ice_mtu_set >>>>>>>>>>>> need be invoked. >>>>>>>>>>>>>> So, it perhaps should remain this comment before this if() condition. >>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>>>>>>>>>>>> +ret; } >>>>>>>>>>>>>>>>>> + >>>>>>>>>>>>>>>>>> ret = ice_init_rss(pf); >>>>>>>>>>>>>>>>>> if (ret) { >>>>>>>>>>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>> 2.17.1 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>> >>>>>>> >>>>> >>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
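[Editorial note] The alternative Konstantin sketches above — `dev_configure {...; mtu_set(max_rx_pkt_len - OVERHEAD); ...}` — preserves the user's `max_rx_pkt_len` and derives MTU from it instead. A minimal model follows; it uses an 18-byte overhead (Ether header + CRC, no VLAN) purely to reproduce the thread's `mtu=982, max_rx_pkt_len=1000` arithmetic, and is an illustration of the idea rather than real ethdev code.

```c
#include <assert.h>
#include <stdint.h>

/* 18 = Ether hdr (14) + CRC (4), matching the thread's arithmetic. */
#define ETH_OVERHEAD 18u

struct dev_data {
	uint16_t mtu;
	uint32_t max_rx_pkt_len;
};

/* Stand-in for mtu_set(): updates both fields together. */
static void mtu_set(struct dev_data *d, uint16_t mtu)
{
	d->mtu = mtu;
	d->max_rx_pkt_len = (uint32_t)mtu + ETH_OVERHEAD;
}

/* Alternative dev_configure(): honour the user-supplied
 * max_rx_pkt_len and derive MTU from it, so the two values
 * can never diverge. */
static void dev_configure(struct dev_data *d, uint32_t max_rx_pkt_len)
{
	mtu_set(d, (uint16_t)(max_rx_pkt_len - ETH_OVERHEAD));
}
```

Running Konstantin's snippet through this model (set MTU to 1500, then configure with `max_rx_pkt_len = 1000`) ends with `mtu=982, max_rx_pkt_len=1000`: consistent values and the current drop behaviour preserved, at the cost of silently shrinking the MTU.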
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default [not found] ` <DM6PR11MB43628BBF9DCE7CC4D7C05AD8F91E0@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-10-19 10:49 ` Ananyev, Konstantin @ 2020-10-19 18:05 ` Ferruh Yigit [not found] ` <DM6PR11MB4362F936BFC715BF6BABBAD0F91F0@DM6PR11MB4362.namprd11.prod.outlook.com> 1 sibling, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-10-19 18:05 UTC (permalink / raw) To: Yang, SteveX, Zhang, Qi Z, Ananyev, Konstantin, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/19/2020 4:07 AM, Yang, SteveX wrote: > > >> -----Original Message----- >> From: Ferruh Yigit <ferruh.yigit@intel.com> >> Sent: Wednesday, October 14, 2020 11:38 PM >> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >> <stevex.yang@intel.com>; Ananyev, Konstantin >> <konstantin.ananyev@intel.com>; dev@dpdk.org >> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, >> Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; >> Xing, Beilei <beilei.xing@intel.com>; Stokes, Ian <ian.stokes@intel.com> >> Subject: Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets >> with vlan tag cannot be received by default >> >> On 9/30/2020 3:32 AM, Zhang, Qi Z wrote: >>> >>> >>>> -----Original Message----- >>>> From: Yang, SteveX <stevex.yang@intel.com> >>>> Sent: Wednesday, September 30, 2020 9:32 AM >>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin >>>> <konstantin.ananyev@intel.com>; dev@dpdk.org >>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>> vlan tag cannot be received by default >>>> >>>> >>>> >>>>> -----Original Message----- >>>>> From: Zhang, Qi Z <qi.z.zhang@intel.com> >>>>> 
Sent: Wednesday, September 30, 2020 8:35 AM >>>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >> SteveX >>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>> vlan tag cannot be received by default >>>>> >>>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>>>>> Sent: Wednesday, September 30, 2020 7:02 AM >>>>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>> vlan tag cannot be received by default >>>>>> >>>>>>> >>>>>>>> -----Original Message----- >>>>>>>> From: Yang, SteveX <stevex.yang@intel.com> >>>>>>>> Sent: Monday, September 28, 2020 2:56 PM >>>>>>>> To: dev@dpdk.org >>>>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia >>>>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >> Zhang, >>>>>>>> Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing >>>>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; >>>>>>>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >> SteveX >>>>>>>> <stevex.yang@intel.com> >>>>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>>>> vlan tag cannot be received by default >>>>>>>> >>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>> doesn't include vlan tag size in ether overheader. 
Once, send the >>>>>>>> max mtu length packet with vlan tag, the max packet length will >>>>>>>> exceed 1518 that will cause packets dropped directly from NIC hw >>>> side. >>>>>>>> >>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>> dev_config >>>>> ops. >>>>>>>> >>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>> >>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>> --- >>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>> >>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >> *dev) >>>>>>>> struct ice_adapter *ad = >>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>> struct ice_pf *pf = >>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>> int ret; >>>>>>>> >>>>>>>> /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ >>>>>>>> -3157,6 >>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >> ETH_MQ_RX_RSS_FLAG) >>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>> >>>>>>>> +/** >>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>> + */ >>>>>>> >>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { >>>>>>> >>>>>>> >>>>>>> Why we need this check? >>>>>>> Can we just call ice_mtu_set directly >>>>>> >>>>>> I think that without that check we can silently overwrite provided >>>>>> by user dev_conf.rxmode.max_rx_pkt_len value. 
>>>>> >>>>> OK, I see >>>>> >>>>> But still have one question >>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>> dev->data->application set >>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>> >>>> >>>> max_rx_pkt_len should be larger than mtu at least, so we should raise >>>> the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: 1500). >>> >>> Ok, this describe the problem more general and better to replace exist >> code comment and commit log for easy understanding. >>> Please send a new version for reword >>> >> >> I didn't really get this set. >> >> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger than >> this size is dropped. > > Sure, it is normal case for dropping oversize data. > >> Isn't this what should be, why we are trying to overwrite user configuration >> in PMD to prevent this? >> > > But it is a confliction that application/user sets mtu & max_rx_pkt_len at the same time. > This fix will make a decision when confliction occurred. > MTU value will come from user operation (e.g.: port config mtu 0 1500) directly, > so, the max_rx_pkt_len will resize itself to adapt expected MTU value if its size is smaller than MTU + Ether overhead. > >> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to '1000' >> and mean it? PMD will not honor the user config. > > I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's the behavior expected? > If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be invalid. > >> >> Why not simply increase the default 'max_rx_pkt_len' in testpmd? 
>> > The default 'max_rx_pkt_len' has been initialized to generical value (1518) and default 'mtu' is '1500' in testpmd, > But it isn't suitable to those NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' value is preferable. > >> And I guess even better what we need is to tell to the application what the >> frame overhead PMD accepts. >> So the application can set proper 'max_rx_pkt_len' value per port for a >> given/requested MTU value. >> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >> overhead macros and 'max_mtu'/'min_mtu' added because of that, perhaps >> he has a solution now? >> > >> >> And why this same thing can't happen to other PMDs? If this is a problem for >> all PMDs, we should solve in other level, not for only some PMDs. >> > No, all PMDs exist the same issue, another proposal: > - rte_ethdev provides the unique resize 'max_rx_pkt_len' in rte_eth_dev_configure(); > - provide the uniform API for fetching the NIC's supported Ether Overhead size; > Is it feasible? > overhead can be calculated as "dev_info.max_rx_pktlen - dev_info.max_mtu" What do you think update the testpmd 'init_config()', to update 'port->dev_conf.rxmode.max_rx_pkt_len' as "RTE_ETHER_MTU + overhead"? >>> >>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>> satisfy mtu requirement. >>>> >>>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len) >>>>> here? >>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the 2nd >>>> parameter, or not the max_rx_pkt_len. >>>> >>>>> >>>>> >>>>>> >>>>>>> And please remove above comment, since ether overhead is already >>>>>> considered in ice_mtu_set. >>>> Ether overhead is already considered in ice_mtu_set, but it also >>>> should be considered as the adjustment condition that if ice_mtu_set >> need be invoked. 
>>>> So, it perhaps should remain this comment before this if() condition. >>>> >>>>>>> >>>>>>> >>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>> +ret; } >>>>>>>> + >>>>>>>> ret = ice_init_rss(pf); >>>>>>>> if (ret) { >>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>> -- >>>>>>>> 2.17.1 >>>>>>> >>>>>> >>>>> >>>> >>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
[parent not found: <DM6PR11MB4362F936BFC715BF6BABBAD0F91F0@DM6PR11MB4362.namprd11.prod.outlook.com>]
* Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default [not found] ` <DM6PR11MB4362F936BFC715BF6BABBAD0F91F0@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-10-20 8:13 ` Ferruh Yigit 0 siblings, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-10-20 8:13 UTC (permalink / raw) To: Yang, SteveX, Zhang, Qi Z, Ananyev, Konstantin, dev Cc: Zhao1, Wei, Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Stokes, Ian On 10/20/2020 3:57 AM, Yang, SteveX wrote: > > >> -----Original Message----- >> From: Ferruh Yigit <ferruh.yigit@intel.com> >> Sent: Tuesday, October 20, 2020 2:05 AM >> To: Yang, SteveX <stevex.yang@intel.com>; Zhang, Qi Z >> <qi.z.zhang@intel.com>; Ananyev, Konstantin >> <konstantin.ananyev@intel.com>; dev@dpdk.org >> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; Yang, >> Qiming <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; >> Xing, Beilei <beilei.xing@intel.com>; Stokes, Ian <ian.stokes@intel.com> >> Subject: Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets >> with vlan tag cannot be received by default >> >> On 10/19/2020 4:07 AM, Yang, SteveX wrote: >>> >>> >>>> -----Original Message----- >>>> From: Ferruh Yigit <ferruh.yigit@intel.com> >>>> Sent: Wednesday, October 14, 2020 11:38 PM >>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>>> <stevex.yang@intel.com>; Ananyev, Konstantin >>>> <konstantin.ananyev@intel.com>; dev@dpdk.org >>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; >>>> Stokes, Ian <ian.stokes@intel.com> >>>> Subject: Re: [dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size >>>> packets with vlan tag cannot be received by default >>>> >>>> On 9/30/2020 3:32 AM, Zhang, Qi Z wrote: >>>>> >>>>> >>>>>> -----Original Message----- >>>>>> From: Yang, SteveX 
<stevex.yang@intel.com> >>>>>> Sent: Wednesday, September 30, 2020 9:32 AM >>>>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Ananyev, Konstantin >>>>>> <konstantin.ananyev@intel.com>; dev@dpdk.org >>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia <jia.guo@intel.com>; >>>>>> Yang, Qiming <qiming.yang@intel.com>; Wu, Jingjing >>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com> >>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>> vlan tag cannot be received by default >>>>>> >>>>>> >>>>>> >>>>>>> -----Original Message----- >>>>>>> From: Zhang, Qi Z <qi.z.zhang@intel.com> >>>>>>> Sent: Wednesday, September 30, 2020 8:35 AM >>>>>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >>>> SteveX >>>>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia >>>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Wu, >>>>>>> Jingjing <jingjing.wu@intel.com>; Xing, Beilei >>>>>>> <beilei.xing@intel.com> >>>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>>> vlan tag cannot be received by default >>>>>>> >>>>>>> >>>>>>> >>>>>>>> -----Original Message----- >>>>>>>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>>>>>>> Sent: Wednesday, September 30, 2020 7:02 AM >>>>>>>> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, SteveX >>>>>>>> <stevex.yang@intel.com>; dev@dpdk.org >>>>>>>> Cc: Zhao1, Wei <wei.zhao1@intel.com>; Guo, Jia >>>>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; Wu, >>>>>>>> Jingjing <jingjing.wu@intel.com>; Xing, Beilei >>>>>>>> <beilei.xing@intel.com> >>>>>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets >>>>>>>> with vlan tag cannot be received by default >>>>>>>> >>>>>>>>> >>>>>>>>>> -----Original Message----- >>>>>>>>>> From: Yang, SteveX <stevex.yang@intel.com> >>>>>>>>>> Sent: Monday, September 28, 2020 2:56 PM >>>>>>>>>> To: dev@dpdk.org >>>>>>>>>> Cc: Zhao1, Wei 
<wei.zhao1@intel.com>; Guo, Jia >>>>>>>>>> <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >>>> Zhang, >>>>>>>>>> Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing >>>>>>>>>> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; >>>>>>>>>> Ananyev, Konstantin <konstantin.ananyev@intel.com>; Yang, >>>> SteveX >>>>>>>>>> <stevex.yang@intel.com> >>>>>>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with >>>>>>>>>> vlan tag cannot be received by default >>>>>>>>>> >>>>>>>>>> testpmd will initialize default max packet length to 1518 which >>>>>>>>>> doesn't include vlan tag size in ether overheader. Once, send >>>>>>>>>> the max mtu length packet with vlan tag, the max packet length >>>>>>>>>> will exceed 1518 that will cause packets dropped directly from >>>>>>>>>> NIC hw >>>>>> side. >>>>>>>>>> >>>>>>>>>> ice can support dual vlan tags that need more 8 bytes for max >>>>>>>>>> packet size, so, configures the correct max packet size in >>>>>>>>>> dev_config >>>>>>> ops. >>>>>>>>>> >>>>>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") >>>>>>>>>> >>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>> --- >>>>>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++ >>>>>>>>>> 1 file changed, 11 insertions(+) >>>>>>>>>> >>>>>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c >>>>>>>>>> b/drivers/net/ice/ice_ethdev.c index >>>>>>>>>> cfd357b05..6b7098444 100644 >>>>>>>>>> --- a/drivers/net/ice/ice_ethdev.c >>>>>>>>>> +++ b/drivers/net/ice/ice_ethdev.c >>>>>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev >>>> *dev) >>>>>>>>>> struct ice_adapter *ad = >>>>>>>>>> ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); >>>>>>>>>> struct ice_pf *pf = >>>>>>>>>> ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); >>>>>>>>>> +uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; >>>>>>>>>> int ret; >>>>>>>>>> >>>>>>>>>> /* Initialize to TRUE. 
If any of Rx queues doesn't meet the >>>>>>>>>> @@ >>>>>>>>>> -3157,6 >>>>>>>>>> +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev) >>>>>>>>>> if (dev->data->dev_conf.rxmode.mq_mode & >>>> ETH_MQ_RX_RSS_FLAG) >>>>>>>>>> dev->data->dev_conf.rxmode.offloads |= >>>>>>> DEV_RX_OFFLOAD_RSS_HASH; >>>>>>>>>> >>>>>>>>>> +/** >>>>>>>>>> + * Considering QinQ packet, max frame size should be equal or >>>>>>>>>> + * larger than total size of MTU and Ether overhead. >>>>>>>>>> + */ >>>>>>>>> >>>>>>>>>> +if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) >> { >>>>>>>>> >>>>>>>>> >>>>>>>>> Why we need this check? >>>>>>>>> Can we just call ice_mtu_set directly >>>>>>>> >>>>>>>> I think that without that check we can silently overwrite >>>>>>>> provided by user dev_conf.rxmode.max_rx_pkt_len value. >>>>>>> >>>>>>> OK, I see >>>>>>> >>>>>>> But still have one question >>>>>>> dev->data->mtu is initialized to 1518 as default , but if >>>>>>> dev->data->application set >>>>>>> dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure. >>>>>>> does that mean we will still will set mtu to 1518, is this expected? >>>>>>> >>>>>> >>>>>> max_rx_pkt_len should be larger than mtu at least, so we should >>>>>> raise the max_rx_pkt_len (e.g.:1518) to hold expected mtu value (e.g.: >> 1500). >>>>> >>>>> Ok, this describe the problem more general and better to replace >>>>> exist >>>> code comment and commit log for easy understanding. >>>>> Please send a new version for reword >>>>> >>>> >>>> I didn't really get this set. >>>> >>>> Application explicitly sets 'max_rx_pkt_len' to '1518', and a frame >>>> bigger than this size is dropped. >>> >>> Sure, it is normal case for dropping oversize data. >>> >>>> Isn't this what should be, why we are trying to overwrite user >>>> configuration in PMD to prevent this? >>>> >>> >>> But it is a confliction that application/user sets mtu & max_rx_pkt_len at >> the same time. >>> This fix will make a decision when confliction occurred. 
>>> MTU value will come from user operation (e.g.: port config mtu 0 1500) >>> directly, so, the max_rx_pkt_len will resize itself to adapt expected MTU >> value if its size is smaller than MTU + Ether overhead. >>> >>>> During eth_dev allocation, mtu set to default '1500', by ethdev layer. >>>> And testpmd sets 'max_rx_pkt_len' by default to '1518'. >>>> I think Qi's concern above is valid, what is user set 'max_rx_pkt_len' to >> '1000' >>>> and mean it? PMD will not honor the user config. >>> >>> I'm not sure when set 'mtu' to '1500' and 'max_rx_pkt_len' to '1000', what's >> the behavior expected? >>> If still keep the 'max_rx_pkt_len' value, that means the larger 'mtu' will be >> invalid. >>> >>>> >>>> Why not simply increase the default 'max_rx_pkt_len' in testpmd? >>>> >>> The default 'max_rx_pkt_len' has been initialized to generical value >>> (1518) and default 'mtu' is '1500' in testpmd, But it isn't suitable to those >> NIC drivers which Ether overhead is larger than 18. (e.g.: ice, i40e) if 'mtu' >> value is preferable. >>> >>>> And I guess even better what we need is to tell to the application >>>> what the frame overhead PMD accepts. >>>> So the application can set proper 'max_rx_pkt_len' value per port for >>>> a given/requested MTU value. >>>> @Ian, cc'ed, was complaining almost same thing years ago, these PMD >>>> overhead macros and 'max_mtu'/'min_mtu' added because of that, >>>> perhaps he has a solution now? >>>> >>> >>>> >>>> And why this same thing can't happen to other PMDs? If this is a >>>> problem for all PMDs, we should solve in other level, not for only some >> PMDs. >>>> >>> No, all PMDs exist the same issue, another proposal: >>> - rte_ethdev provides the unique resize 'max_rx_pkt_len' in >> rte_eth_dev_configure(); >>> - provide the uniform API for fetching the NIC's supported Ether >>> Overhead size; Is it feasible? 
>>> >> >> overhead can be calculated as "dev_info.max_rx_pktlen - >> dev_info.max_mtu" >> >> What do you think update the testpmd 'init_config()', to update 'port- >>> dev_conf.rxmode.max_rx_pkt_len' as "RTE_ETHER_MTU + overhead"? > > If update the testpmd relative code, this fix will only impact testpmd application, > Need we make the change more common for other applications or DPDK clients? > This is something needs to be done in application level. Testpmd update can be a sample usage for them. > How about update 'max_rx_pkt_len' within 'rte_eth_dev_configure()' of rte_ethdev? > What is your proposal to do in the ethdev layer? >> >>>>> >>>>>> Generally, the mtu value can be adjustable from user (e.g.: ip link >>>>>> set ens801f0 mtu 1400), hence, we just adjust the max_rx_pkt_len to >>>>>> satisfy mtu requirement. >>>>>> >>>>>>> Should we just call ice_mtu_set(dev, >> dev_conf.rxmode.max_rx_pkt_len) >>>>>>> here? >>>>>> ice_mtu_set(dev, mtu) will append ether overhead to >>>>>> frame_size/max_rx_pkt_len, so we need pass the mtu value as the >> 2nd >>>>>> parameter, or not the max_rx_pkt_len. >>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>>> And please remove above comment, since ether overhead is >> already >>>>>>>> considered in ice_mtu_set. >>>>>> Ether overhead is already considered in ice_mtu_set, but it also >>>>>> should be considered as the adjustment condition that if ice_mtu_set >>>> need be invoked. >>>>>> So, it perhaps should remain this comment before this if() condition. >>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> +ret = ice_mtu_set(dev, dev->data->mtu); if (ret != 0) return >>>>>>>>>> +ret; } >>>>>>>>>> + >>>>>>>>>> ret = ice_init_rss(pf); >>>>>>>>>> if (ret) { >>>>>>>>>> PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); >>>>>>>>>> -- >>>>>>>>>> 2.17.1 >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>> > ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang ` (2 preceding siblings ...) 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 3/5] net/ice: " SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 5/5] net/iavf: " SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd initializes the default max packet length to 1518, which does not include the VLAN tag size in the Ethernet overhead. When a max-MTU-length packet carrying a VLAN tag is sent, the packet length exceeds 1518 and the packet is dropped directly on the NIC hardware side. i40e/i40evf support dual VLAN tags, which need 8 more bytes of max packet size, so configure the correct max packet size in the dev_configure ops. 
Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- 2 files changed, 23 insertions(+), 1 deletion(-) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 6439baf2f..35ffe33ab 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1916,6 +1916,7 @@ i40e_dev_configure(struct rte_eth_dev *dev) struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode; + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; int i, ret; ret = i40e_dev_sync_phy_type(hw); @@ -1930,6 +1931,16 @@ i40e_dev_configure(struct rte_eth_dev *dev) ad->tx_simple_allowed = true; ad->tx_vec_allowed = true; + /** + * Considering QinQ packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = i40e_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index 8531cf6b1..e3c809037 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1669,6 +1669,8 @@ i40evf_dev_configure(struct rte_eth_dev *dev) I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; + int ret; /* Initialize to TRUE. If any of Rx queues doesn't meet the bulk * allocation or vector Rx preconditions we will reset it. 
@@ -1681,9 +1683,18 @@ i40evf_dev_configure(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.lsc = !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC); + /** + * Considering QinQ packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = i40evf_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + if (num_queue_pairs > vf->vsi_res->num_queue_pairs) { struct i40e_hw *hw; - int ret; if (rte_eal_process_type() != RTE_PROC_PRIMARY) { PMD_DRV_LOG(ERR, -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v4 5/5] net/iavf: fix max mtu size packets with vlan tag cannot be received by default 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang ` (3 preceding siblings ...) 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 4/5] net/i40e: " SteveX Yang @ 2020-09-28 6:55 ` SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang 5 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-09-28 6:55 UTC (permalink / raw) To: dev Cc: wei.zhao1, jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang testpmd initializes the default max packet length to 1518, which does not include the VLAN tag size in the Ethernet overhead. When a max-MTU-length packet carrying a VLAN tag is sent, the packet length exceeds 1518 and the packet is dropped directly on the NIC hardware side. iavf supports dual VLAN tags, which need 8 more bytes of max packet size, so configure the correct max packet size in the dev_configure ops. Fixes: 02d212ca3125 ("net/iavf: rename remaining avf strings") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/iavf/iavf_ethdev.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index a88d53ab0..635d781eb 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -258,6 +258,8 @@ iavf_dev_configure(struct rte_eth_dev *dev) IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD; + int ret; ad->rx_bulk_alloc_allowed = true; /* Initialize to TRUE. 
If any of Rx queues doesn't meet the @@ -269,6 +271,16 @@ iavf_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /** + * Considering QinQ packet, max frame size should be equal or + * larger than total size of MTU and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = iavf_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when device configured SteveX Yang ` (4 preceding siblings ...) 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 5/5] net/iavf: " SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang ` (6 more replies) 5 siblings, 7 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang When the application presets the max Rx packet length and the expected MTU at the same time, the driver needs to identify whether the preset max frame size can hold the MTU data plus Ethernet overhead completely. If not, adjust the max frame size via the mtu_set ops within dev_configure. v5: * update comments and commit messages; v4: * add the adjust condition for max_rx_pkt_len; v3: * change the i40evf relative code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (5): net/e1000: fix max mtu size packets with vlan tag cannot be received by default net/igc: fix max mtu size packets with vlan tag cannot be received by default net/ice: fix max mtu size packets with vlan tag cannot be received by default net/i40e: fix max mtu size packets with vlan tag cannot be received by default net/iavf: fix max mtu size packets with vlan tag cannot be received by default drivers/net/e1000/em_ethdev.c | 12 ++++++++++++ drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- drivers/net/iavf/iavf_ethdev.c | 12 ++++++++++++ drivers/net/ice/ice_ethdev.c | 11 +++++++++++ drivers/net/igc/igc_ethdev.c | 13 ++++++++++++- 6 files changed, 70 insertions(+), 2 deletions(-) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 2/5] net/igc: " SteveX Yang ` (5 subsequent siblings) 6 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang when application presets the max rx packet length and expected mtu at the same time, driver need identify if the preset max frame size can hold mtu data and Ether overhead completely. if not, adjust the max frame size via mtu_set ops within dev_configure. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/e1000/em_ethdev.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index d050eb478..d2cf318f8 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -432,10 +432,22 @@ eth_em_configure(struct rte_eth_dev *dev) { struct e1000_interrupt *intr = E1000_DEV_PRIVATE_TO_INTR(dev->data->dev_private); + uint16_t frame_size = dev->data->mtu + E1000_ETH_OVERHEAD; + int rc = 0; PMD_INIT_FUNC_TRACE(); intr->flags |= E1000_FLAG_NEED_LINK_UPDATE; + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + rc = eth_em_mtu_set(dev, dev->data->mtu); + if (rc != 0) + return rc; + } + PMD_INIT_FUNC_TRACE(); return 0; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 2/5] net/igc: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 3/5] net/ice: " SteveX Yang ` (4 subsequent siblings) 6 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang when application presets the max rx packet length and expected mtu at the same time, driver need identify if the preset max frame size can hold mtu data and Ether overhead completely. if not, adjust the max frame size via mtu_set ops within dev_configure. Fixes: a5aeb2b9e225 ("net/igc: support Rx and Tx") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/igc/igc_ethdev.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 7f5066df4..98e98b3e4 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -337,11 +337,22 @@ static int eth_igc_configure(struct rte_eth_dev *dev) { struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev); + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD; int ret; PMD_INIT_FUNC_TRACE(); - ret = igc_check_mq_mode(dev); + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = eth_igc_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + + ret = igc_check_mq_mode(dev); if (ret != 0) return ret; -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 2/5] net/igc: " SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 11:35 ` Zhang, Qi Z 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 4/5] net/i40e: " SteveX Yang ` (3 subsequent siblings) 6 siblings, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang when application presets the max rx packet length and expected mtu at the same time, driver need identify if the preset max frame size can hold mtu data and Ether overhead completely. if not, adjust the max frame size via mtu_set ops within dev_configure. Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/ice/ice_ethdev.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 0056da78a..a707612c2 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3305,6 +3305,7 @@ ice_dev_configure(struct rte_eth_dev *dev) struct ice_adapter *ad = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; int ret; /* Initialize to TRUE. 
If any of Rx queues doesn't meet the @@ -3316,6 +3317,16 @@ ice_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = ice_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + ret = ice_init_rss(pf); if (ret) { PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 3/5] net/ice: " SteveX Yang @ 2020-10-14 11:35 ` Zhang, Qi Z 0 siblings, 0 replies; 94+ messages in thread From: Zhang, Qi Z @ 2020-10-14 11:35 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Ananyev, Konstantin, Yang, SteveX Couple comments inline, Btw, no need to submit a new version, I will ack and merge the patch with below fix directly. but please keep in mind in your next patch. > -----Original Message----- > From: SteveX Yang <stevex.yang@intel.com> > Sent: Wednesday, October 14, 2020 5:20 PM > To: dev@dpdk.org > Cc: Guo, Jia <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; > Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; > Xing, Beilei <beilei.xing@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Yang, SteveX <stevex.yang@intel.com> > Subject: [PATCH v5 3/5] net/ice: fix max mtu size packets with vlan tag cannot > be received by default Title is too long, please use check-git-log.sh Renamed to "fix MTU size for VLAN packets" > > when application presets the max rx packet length and expected mtu at the s/when/When > same time, driver need identify if the preset max frame size can hold mtu data > and Ether overhead completely. > > if not, adjust the max frame size via mtu_set ops within dev_configure. 
s/if/If > > Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > 1 file changed, 11 insertions(+) > > diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index > 0056da78a..a707612c2 100644 > --- a/drivers/net/ice/ice_ethdev.c > +++ b/drivers/net/ice/ice_ethdev.c > @@ -3305,6 +3305,7 @@ ice_dev_configure(struct rte_eth_dev *dev) > struct ice_adapter *ad = > ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); > + uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD; > int ret; > > /* Initialize to TRUE. If any of Rx queues doesn't meet the @@ -3316,6 > +3317,16 @@ ice_dev_configure(struct rte_eth_dev *dev) > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > dev->data->dev_conf.rxmode.offloads |= > DEV_RX_OFFLOAD_RSS_HASH; > > + /** > + * Reset the max frame size via mtu_set ops if preset max frame > + * cannot hold MTU data and Ether overhead. > + */ > + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > + ret = ice_mtu_set(dev, dev->data->mtu); > + if (ret != 0) > + return ret; > + } > + > ret = ice_init_rss(pf); > if (ret) { > PMD_DRV_LOG(ERR, "Failed to enable rss for PF"); > -- > 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang ` (2 preceding siblings ...) 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 3/5] net/ice: " SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 10:30 ` Ananyev, Konstantin 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 5/5] net/iavf: " SteveX Yang ` (2 subsequent siblings) 6 siblings, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang when application presets the max rx packet length and expected mtu at the same time, driver need identify if the preset max frame size can hold mtu data and Ether overhead completely. if not, adjust the max frame size via mtu_set ops within dev_configure. Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- 2 files changed, 23 insertions(+), 1 deletion(-) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 943cfe71d..272cfc7ca 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -1911,6 +1911,7 @@ i40e_dev_configure(struct rte_eth_dev *dev) struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode; + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; int i, ret; ret = i40e_dev_sync_phy_type(hw); @@ -1925,6 +1926,16 @@ i40e_dev_configure(struct rte_eth_dev *dev) ad->tx_simple_allowed = true; ad->tx_vec_allowed = true; + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether 
overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = i40e_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index 4d6510d1f..686f3c627 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1664,6 +1664,8 @@ i40evf_dev_configure(struct rte_eth_dev *dev) I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; + int ret; /* Initialize to TRUE. If any of Rx queues doesn't meet the bulk * allocation or vector Rx preconditions we will reset it. @@ -1676,9 +1678,18 @@ i40evf_dev_configure(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.lsc = !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC); + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = i40evf_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + if (num_queue_pairs > vf->vsi_res->num_queue_pairs) { struct i40e_hw *hw; - int ret; if (rte_eal_process_type() != RTE_PROC_PRIMARY) { PMD_DRV_LOG(ERR, -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v5 4/5] net/i40e: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 4/5] net/i40e: " SteveX Yang @ 2020-10-14 10:30 ` Ananyev, Konstantin 0 siblings, 0 replies; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-14 10:30 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Guo, Jia, Yang, Qiming, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei, Yang, SteveX > > when application presets the max rx packet length and expected mtu at > the same time, driver need identify if the preset max frame size can > hold mtu data and Ether overhead completely. > > if not, adjust the max frame size via mtu_set ops within dev_configure. > > Fixes: ff8282f4bbcd ("net/i40e: consider QinQ when setting MTU") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ > drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- > 2 files changed, 23 insertions(+), 1 deletion(-) > > diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c > index 943cfe71d..272cfc7ca 100644 > --- a/drivers/net/i40e/i40e_ethdev.c > +++ b/drivers/net/i40e/i40e_ethdev.c > @@ -1911,6 +1911,7 @@ i40e_dev_configure(struct rte_eth_dev *dev) > struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); > struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); > enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode; > + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; > int i, ret; > > ret = i40e_dev_sync_phy_type(hw); > @@ -1925,6 +1926,16 @@ i40e_dev_configure(struct rte_eth_dev *dev) > ad->tx_simple_allowed = true; > ad->tx_vec_allowed = true; > > + /** > + * Reset the max frame size via mtu_set ops if preset max frame Typo, should be 'present', I think. Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> > + * cannot hold MTU data and Ether overhead. 
> + */ > + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > + ret = i40e_dev_mtu_set(dev, dev->data->mtu); > + if (ret != 0) > + return ret; > + } > + > if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) > dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; > > diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c > index 4d6510d1f..686f3c627 100644 > --- a/drivers/net/i40e/i40e_ethdev_vf.c > +++ b/drivers/net/i40e/i40e_ethdev_vf.c > @@ -1664,6 +1664,8 @@ i40evf_dev_configure(struct rte_eth_dev *dev) > I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > uint16_t num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, > dev->data->nb_tx_queues); > + uint32_t frame_size = dev->data->mtu + I40E_ETH_OVERHEAD; > + int ret; > > /* Initialize to TRUE. If any of Rx queues doesn't meet the bulk > * allocation or vector Rx preconditions we will reset it. > @@ -1676,9 +1678,18 @@ i40evf_dev_configure(struct rte_eth_dev *dev) > dev->data->dev_conf.intr_conf.lsc = > !!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC); > > + /** > + * Reset the max frame size via mtu_set ops if preset max frame > + * cannot hold MTU data and Ether overhead. > + */ > + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { > + ret = i40evf_dev_mtu_set(dev, dev->data->mtu); > + if (ret != 0) > + return ret; > + } > + > if (num_queue_pairs > vf->vsi_res->num_queue_pairs) { > struct i40e_hw *hw; > - int ret; > > if (rte_eal_process_type() != RTE_PROC_PRIMARY) { > PMD_DRV_LOG(ERR, > -- > 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v5 5/5] net/iavf: fix max mtu size packets with vlan tag cannot be received by default 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang ` (3 preceding siblings ...) 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 4/5] net/i40e: " SteveX Yang @ 2020-10-14 9:19 ` SteveX Yang 2020-10-14 11:43 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured Zhang, Qi Z 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang 6 siblings, 0 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-14 9:19 UTC (permalink / raw) To: dev Cc: jia.guo, qiming.yang, qi.z.zhang, jingjing.wu, beilei.xing, konstantin.ananyev, SteveX Yang When the application presets the max rx packet length and the expected MTU at the same time, the driver needs to identify whether the preset max frame size can hold the MTU data and Ether overhead completely. If not, adjust the max frame size via the mtu_set ops within dev_configure. Fixes: 02d212ca3125 ("net/iavf: rename remaining avf strings") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- drivers/net/iavf/iavf_ethdev.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 93e26c768..8b1cf8f1c 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -291,6 +291,8 @@ iavf_dev_configure(struct rte_eth_dev *dev) IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad); struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD; + int ret; ad->rx_bulk_alloc_allowed = true; /* Initialize to TRUE.
If any of Rx queues doesn't meet the @@ -302,6 +304,16 @@ iavf_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; + /** + * Reset the max frame size via mtu_set ops if preset max frame + * cannot hold MTU data and Ether overhead. + */ + if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) { + ret = iavf_dev_mtu_set(dev, dev->data->mtu); + if (ret != 0) + return ret; + } + /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang ` (4 preceding siblings ...) 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 5/5] net/iavf: " SteveX Yang @ 2020-10-14 11:43 ` Zhang, Qi Z 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang 6 siblings, 0 replies; 94+ messages in thread From: Zhang, Qi Z @ 2020-10-14 11:43 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Guo, Jia, Yang, Qiming, Wu, Jingjing, Xing, Beilei, Ananyev, Konstantin, Yang, SteveX > -----Original Message----- > From: SteveX Yang <stevex.yang@intel.com> > Sent: Wednesday, October 14, 2020 5:20 PM > To: dev@dpdk.org > Cc: Guo, Jia <jia.guo@intel.com>; Yang, Qiming <qiming.yang@intel.com>; > Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; > Xing, Beilei <beilei.xing@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Yang, SteveX <stevex.yang@intel.com> > Subject: [PATCH v5 0/5] fix default max mtu size when device configured > > when application presets the max rx packet length and expected mtu at the > same time, driver need identify if the preset max frame size can hold mtu data > and Ether overhead completely. > > if not, adjust the max frame size via mtu_set ops within dev_configure. 
> > v5: > * update comments and commit messages; > v4: > * add the adjust condition for max_rx_pkt_len; > v3: > * change the i40evf relative code; > v2: > * change the max_rx_pkt_len via mtu_set ops; > > SteveX Yang (5): > net/e1000: fix max mtu size packets with vlan tag cannot be received > by default > net/igc: fix max mtu size packets with vlan tag cannot be received by > default > net/ice: fix max mtu size packets with vlan tag cannot be received by > default > net/i40e: fix max mtu size packets with vlan tag cannot be received by > default > net/iavf: fix max mtu size packets with vlan tag cannot be received by > default > > drivers/net/e1000/em_ethdev.c | 12 ++++++++++++ > drivers/net/i40e/i40e_ethdev.c | 11 +++++++++++ > drivers/net/i40e/i40e_ethdev_vf.c | 13 ++++++++++++- > drivers/net/iavf/iavf_ethdev.c | 12 ++++++++++++ > drivers/net/ice/ice_ethdev.c | 11 +++++++++++ > drivers/net/igc/igc_ethdev.c | 13 ++++++++++++- > 6 files changed, 70 insertions(+), 2 deletions(-) > > -- > 2.17.1 Acked-by: Qi Zhang <qi.z.zhang@intel.com> Applied to dpdk-next-net-intel. Thanks Qi ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v6 0/2] fix default max mtu size when device configured 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang ` (5 preceding siblings ...) 2020-10-14 11:43 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured Zhang, Qi Z @ 2020-10-22 8:48 ` SteveX Yang 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang ` (2 more replies) 6 siblings, 3 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-22 8:48 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, thomas, andrew.rybchenko, qiming.yang, qi.z.zhang, SteveX Yang For testpmd, increase the max rx packet length when the sum of the MTU size and overhead exceeds max_rx_pkt_len. For generic ethdev, reduce the MTU size to ensure the rx frame size can hold the MTU plus overhead. v6: * change the max_rx_pkt_len in the init_config of testpmd; * change the mtu value in the rte_ethdev; v5: * update comments and commit messages; v4: * add the adjust condition for max_rx_pkt_len; v3: * change the i40evf relative code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (2): app/testpmd: fix max rx packet length for VLAN packets librte_ethdev: fix MTU size exceeds max rx packet length app/test-pmd/testpmd.c | 52 +++++++++++++++++++++++++--------- lib/librte_ethdev/rte_ethdev.c | 14 +++++++++ 2 files changed, 52 insertions(+), 14 deletions(-) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang @ 2020-10-22 8:48 ` SteveX Yang 2020-10-22 16:22 ` Ferruh Yigit 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length SteveX Yang 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured SteveX Yang 2 siblings, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-10-22 8:48 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, thomas, andrew.rybchenko, qiming.yang, qi.z.zhang, SteveX Yang When the max rx packet length is smaller than the sum of mtu size and ether overhead size, it should be enlarged, otherwise the VLAN packets will be dropped. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- app/test-pmd/testpmd.c | 52 ++++++++++++++++++++++++++++++------------ 1 file changed, 38 insertions(+), 14 deletions(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 33fc0fddf..9031c6145 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1418,9 +1418,13 @@ init_config(void) unsigned int nb_mbuf_per_pool; lcoreid_t lc_id; uint8_t port_per_socket[RTE_MAX_NUMA_NODES]; + struct rte_eth_dev_info *dev_info; + struct rte_eth_conf *dev_conf; struct rte_gro_param gro_param; uint32_t gso_types; uint16_t data_size; + uint16_t overhead_len; + uint16_t frame_size; bool warning = 0; int k; int ret; @@ -1448,18 +1452,40 @@ init_config(void) RTE_ETH_FOREACH_DEV(pid) { port = &ports[pid]; + + dev_info = &port->dev_info; + dev_conf = &port->dev_conf; + /* Apply default TxRx configuration for all ports */ - port->dev_conf.txmode = tx_mode; - port->dev_conf.rxmode = rx_mode; + dev_conf->txmode = tx_mode; + dev_conf->rxmode = rx_mode; - ret = eth_dev_info_get_print_err(pid, &port->dev_info); + ret = 
eth_dev_info_get_print_err(pid, dev_info); if (ret != 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); - if (!(port->dev_info.tx_offload_capa & + /* + * Update the max_rx_pkt_len to ensure that its size equals the + * sum of default mtu size and ether overhead length at least. + */ + if (dev_info->max_rx_pktlen && dev_info->max_mtu) + overhead_len = + dev_info->max_rx_pktlen - dev_info->max_mtu; + else + overhead_len = + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + + frame_size = RTE_ETHER_MTU + overhead_len; + if (frame_size > RTE_ETHER_MAX_LEN) { + dev_conf->rxmode.max_rx_pkt_len = frame_size; + dev_conf->rxmode.offloads |= + DEV_RX_OFFLOAD_JUMBO_FRAME; + } + + if (!(dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) - port->dev_conf.txmode.offloads &= + dev_conf->txmode.offloads &= ~DEV_TX_OFFLOAD_MBUF_FAST_FREE; if (numa_support) { if (port_numa[pid] != NUMA_NO_CONFIG) @@ -1478,13 +1504,11 @@ init_config(void) } /* Apply Rx offloads configuration */ - for (k = 0; k < port->dev_info.max_rx_queues; k++) - port->rx_conf[k].offloads = - port->dev_conf.rxmode.offloads; + for (k = 0; k < dev_info->max_rx_queues; k++) + port->rx_conf[k].offloads = dev_conf->rxmode.offloads; /* Apply Tx offloads configuration */ - for (k = 0; k < port->dev_info.max_tx_queues; k++) - port->tx_conf[k].offloads = - port->dev_conf.txmode.offloads; + for (k = 0; k < dev_info->max_tx_queues; k++) + port->tx_conf[k].offloads = dev_conf->txmode.offloads; /* set flag to initialize port/queue */ port->need_reconfig = 1; @@ -1494,10 +1518,10 @@ init_config(void) /* Check for maximum number of segments per MTU. Accordingly * update the mbuf data size. 
*/ - if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && - port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) { + if (dev_info->rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && + dev_info->rx_desc_lim.nb_mtu_seg_max != 0) { data_size = rx_mode.max_rx_pkt_len / - port->dev_info.rx_desc_lim.nb_mtu_seg_max; + dev_info->rx_desc_lim.nb_mtu_seg_max; if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) { -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
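Separately from the VLAN fix, the tail of the hunk rechecks the mbuf data size: when a PMD caps the number of segments an MTU-sized frame may occupy (nb_mtu_seg_max), each mbuf must carry at least max_rx_pkt_len / nb_mtu_seg_max bytes of data plus headroom. A minimal sketch of that sizing rule; the 128-byte headroom is DPDK's build-time default and is assumed here:

```c
#include <stdint.h>

#define PKTMBUF_HEADROOM 128U /* DPDK's default RTE_PKTMBUF_HEADROOM */

/* Smallest mbuf that still lets a max_rx_pkt_len frame be received in
 * at most nb_mtu_seg_max segments (integer division, as in testpmd). */
static uint32_t min_mbuf_size(uint32_t max_rx_pkt_len,
			      uint16_t nb_mtu_seg_max)
{
	uint32_t data_size = max_rx_pkt_len / nb_mtu_seg_max;

	return data_size + PKTMBUF_HEADROOM;
}
```

testpmd compares this value against mbuf_data_size[0] and adjusts the mbuf size when it is too small.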
* Re: [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-10-22 16:22 ` Ferruh Yigit 0 siblings, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-10-22 16:22 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, thomas, andrew.rybchenko, qiming.yang, qi.z.zhang On 10/22/2020 9:48 AM, SteveX Yang wrote: > When the max rx packet length is smaller than the sum of mtu size and > ether overhead size, it should be enlarged, otherwise the VLAN packets > will be dropped. > > Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > app/test-pmd/testpmd.c | 52 ++++++++++++++++++++++++++++++------------ > 1 file changed, 38 insertions(+), 14 deletions(-) > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c > index 33fc0fddf..9031c6145 100644 > --- a/app/test-pmd/testpmd.c > +++ b/app/test-pmd/testpmd.c > @@ -1418,9 +1418,13 @@ init_config(void) > unsigned int nb_mbuf_per_pool; > lcoreid_t lc_id; > uint8_t port_per_socket[RTE_MAX_NUMA_NODES]; > + struct rte_eth_dev_info *dev_info; > + struct rte_eth_conf *dev_conf; > struct rte_gro_param gro_param; > uint32_t gso_types; > uint16_t data_size; > + uint16_t overhead_len; > + uint16_t frame_size; > bool warning = 0; > int k; > int ret; > @@ -1448,18 +1452,40 @@ init_config(void) > > RTE_ETH_FOREACH_DEV(pid) { > port = &ports[pid]; > + > + dev_info = &port->dev_info; > + dev_conf = &port->dev_conf; > + > /* Apply default TxRx configuration for all ports */ > - port->dev_conf.txmode = tx_mode; > - port->dev_conf.rxmode = rx_mode; > + dev_conf->txmode = tx_mode; > + dev_conf->rxmode = rx_mode; Hi Steve, This patch does a small refactoring ('dev_info' & 'dev_conf') and a small update, but the refactoring shows the patch more complex than it 
actually is. If you think that is required, can you please separate these two? > > - ret = eth_dev_info_get_print_err(pid, &port->dev_info); > + ret = eth_dev_info_get_print_err(pid, dev_info); > if (ret != 0) > rte_exit(EXIT_FAILURE, > "rte_eth_dev_info_get() failed\n"); > > - if (!(port->dev_info.tx_offload_capa & > + /* > + * Update the max_rx_pkt_len to ensure that its size equals the > + * sum of default mtu size and ether overhead length at least. > + */ What about simplifying the above comment like: " Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU " > + if (dev_info->max_rx_pktlen && dev_info->max_mtu) > + overhead_len = > + dev_info->max_rx_pktlen - dev_info->max_mtu; > + else > + overhead_len = > + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; > + > + frame_size = RTE_ETHER_MTU + overhead_len; > + if (frame_size > RTE_ETHER_MAX_LEN) { > + dev_conf->rxmode.max_rx_pkt_len = frame_size; > + dev_conf->rxmode.offloads |= > + DEV_RX_OFFLOAD_JUMBO_FRAME; I am not sure the jumbo frame assignment is always true. 'frame_size' can be bigger than 'RTE_ETHER_MAX_LEN', but mtu still can be <= 1500. What about dropping this?
> + } > + > + if (!(dev_info->tx_offload_capa & > DEV_TX_OFFLOAD_MBUF_FAST_FREE)) > - port->dev_conf.txmode.offloads &= > + dev_conf->txmode.offloads &= > ~DEV_TX_OFFLOAD_MBUF_FAST_FREE; > if (numa_support) { > if (port_numa[pid] != NUMA_NO_CONFIG) > @@ -1478,13 +1504,11 @@ init_config(void) > } > > /* Apply Rx offloads configuration */ > - for (k = 0; k < port->dev_info.max_rx_queues; k++) > - port->rx_conf[k].offloads = > - port->dev_conf.rxmode.offloads; > + for (k = 0; k < dev_info->max_rx_queues; k++) > + port->rx_conf[k].offloads = dev_conf->rxmode.offloads; > /* Apply Tx offloads configuration */ > - for (k = 0; k < port->dev_info.max_tx_queues; k++) > - port->tx_conf[k].offloads = > - port->dev_conf.txmode.offloads; > + for (k = 0; k < dev_info->max_tx_queues; k++) > + port->tx_conf[k].offloads = dev_conf->txmode.offloads; > > /* set flag to initialize port/queue */ > port->need_reconfig = 1; > @@ -1494,10 +1518,10 @@ init_config(void) > /* Check for maximum number of segments per MTU. Accordingly > * update the mbuf data size. > */ > - if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && > - port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) { > + if (dev_info->rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && > + dev_info->rx_desc_lim.nb_mtu_seg_max != 0) { > data_size = rx_mode.max_rx_pkt_len / > - port->dev_info.rx_desc_lim.nb_mtu_seg_max; > + dev_info->rx_desc_lim.nb_mtu_seg_max; > > if ((data_size + RTE_PKTMBUF_HEADROOM) > > mbuf_data_size[0]) { > ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-10-22 8:48 ` SteveX Yang 2020-10-22 16:31 ` Ferruh Yigit 2020-10-22 16:52 ` Ananyev, Konstantin 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured SteveX Yang 2 siblings, 2 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-22 8:48 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, thomas, andrew.rybchenko, qiming.yang, qi.z.zhang, SteveX Yang If the max rx packet length is smaller than MTU + Ether overhead, all MTU-size packets will be dropped. Update the MTU size according to the max rx packet and Ether overhead. Fixes: 59d0ecdbf0e1 ("ethdev: MTU accessors") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- lib/librte_ethdev/rte_ethdev.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c index b12bb3854..17f1c33ac 100644 --- a/lib/librte_ethdev/rte_ethdev.c +++ b/lib/librte_ethdev/rte_ethdev.c @@ -1290,6 +1290,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, struct rte_eth_dev *dev; struct rte_eth_dev_info dev_info; struct rte_eth_conf orig_conf; + uint16_t overhead_len; + uint16_t max_rx_pktlen; int diag; int ret; @@ -1415,6 +1417,18 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, RTE_ETHER_MAX_LEN; } + /* + * Update MTU value if MTU + OVERHEAD exceeds the max_rx_pkt_len + */ + max_rx_pktlen = dev->data->dev_conf.rxmode.max_rx_pkt_len; + if (dev_info.max_rx_pktlen && dev_info.max_mtu) + overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu; + else + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + + if (max_rx_pktlen < dev->data->mtu +
overhead_len) + dev->data->mtu = max_rx_pktlen - overhead_len; + /* * If LRO is enabled, check that the maximum aggregated packet * size is supported by the configured device. -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
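The ethdev-level change works in the opposite direction from the testpmd one: rather than enlarging max_rx_pkt_len, it shrinks the stored MTU so the two values stay consistent. A minimal sketch of that clamp, with the overhead passed in (it is derived from dev_info in the patch, with the same header-plus-CRC fallback):

```c
#include <stdint.h>

/* Shrink mtu so that mtu + overhead_len never exceeds the configured
 * max_rx_pkt_len; mirrors the rte_eth_dev_configure() hunk above. */
static uint16_t clamp_mtu(uint16_t mtu, uint32_t max_rx_pkt_len,
			  uint16_t overhead_len)
{
	if (max_rx_pkt_len < (uint32_t)mtu + overhead_len)
		return (uint16_t)(max_rx_pkt_len - overhead_len);
	return mtu;
}
```

Note that assigning max_rx_pkt_len - overhead_len unconditionally would also keep the pair in sync, which is why the reviewers question the guard.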
* Re: [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length SteveX Yang @ 2020-10-22 16:31 ` Ferruh Yigit 2020-10-22 16:52 ` Ananyev, Konstantin 1 sibling, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-10-22 16:31 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, thomas, andrew.rybchenko, qiming.yang, qi.z.zhang On 10/22/2020 9:48 AM, SteveX Yang wrote: > If max rx packet length is smaller then MTU + Ether overhead, that will > drop all MTU size packets. > > Update the MTU size according to the max rx packet and Ether overhead. > > Fixes: 59d0ecdbf0e1 ("ethdev: MTU accessors") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > lib/librte_ethdev/rte_ethdev.c | 14 ++++++++++++++ > 1 file changed, 14 insertions(+) > > diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c > index b12bb3854..17f1c33ac 100644 > --- a/lib/librte_ethdev/rte_ethdev.c > +++ b/lib/librte_ethdev/rte_ethdev.c > @@ -1290,6 +1290,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, > struct rte_eth_dev *dev; > struct rte_eth_dev_info dev_info; > struct rte_eth_conf orig_conf; > + uint16_t overhead_len; > + uint16_t max_rx_pktlen; > int diag; > int ret; > > @@ -1415,6 +1417,18 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, > RTE_ETHER_MAX_LEN; > } > > + /* > + * Update MTU value if MTU + OVERHEAD exceeds the max_rx_pkt_len > + */ I am not sure this conditional update is required; the target is to keep 'max_rx_pktlen' & 'mtu' in sync.
So why not just: dev->data->mtu = max_rx_pktlen - overhead_len; > + max_rx_pktlen = dev->data->dev_conf.rxmode.max_rx_pkt_len; > + if (dev_info.max_rx_pktlen && dev_info.max_mtu) > + overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu; > + else > + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; > + > + if (max_rx_pktlen < dev->data->mtu + overhead_len) > + dev->data->mtu = max_rx_pktlen - overhead_len; > + > /* > * If LRO is enabled, check that the maximum aggregated packet > * size is supported by the configured device. > ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length SteveX Yang 2020-10-22 16:31 ` Ferruh Yigit @ 2020-10-22 16:52 ` Ananyev, Konstantin 1 sibling, 0 replies; 94+ messages in thread From: Ananyev, Konstantin @ 2020-10-22 16:52 UTC (permalink / raw) To: Yang, SteveX, dev Cc: Yigit, Ferruh, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, thomas, andrew.rybchenko, Yang, Qiming, Zhang, Qi Z, Yang, SteveX > > Update the MTU size according to the max rx packet and Ether overhead. > > Fixes: 59d0ecdbf0e1 ("ethdev: MTU accessors") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > lib/librte_ethdev/rte_ethdev.c | 14 ++++++++++++++ > 1 file changed, 14 insertions(+) > > diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c > index b12bb3854..17f1c33ac 100644 > --- a/lib/librte_ethdev/rte_ethdev.c > +++ b/lib/librte_ethdev/rte_ethdev.c > @@ -1290,6 +1290,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, > struct rte_eth_dev *dev; > struct rte_eth_dev_info dev_info; > struct rte_eth_conf orig_conf; > + uint16_t overhead_len; > + uint16_t max_rx_pktlen; > int diag; > int ret; > > @@ -1415,6 +1417,18 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, > RTE_ETHER_MAX_LEN; > } > > + /* > + * Update MTU value if MTU + OVERHEAD exceeds the max_rx_pkt_len > + */ > + max_rx_pktlen = dev->data->dev_conf.rxmode.max_rx_pkt_len; > + if (dev_info.max_rx_pktlen && dev_info.max_mtu) > + overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu; > + else > + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; > + > + if (max_rx_pktlen < dev->data->mtu + overhead_len) Do we need that if() here? Might be do assignment unconditionally? 
> + dev->data->mtu = max_rx_pktlen - overhead_len; > + > /* > * If LRO is enabled, check that the maximum aggregated packet > * size is supported by the configured device. > -- > 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length SteveX Yang @ 2020-10-28 3:03 ` SteveX Yang 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured SteveX Yang 2 siblings, 2 replies; 94+ messages in thread From: SteveX Yang @ 2020-10-28 3:03 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, SteveX Yang Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU. v7: * drop patch 2 due to Jumbo frame flag issue; v6: * change the max_rx_pkt_len in the init_config of testpmd; * change the mtu value in the rte_ethdev; v5: * update comments and commit messages; v4: * add the adjust condition for max_rx_pkt_len; v3: * change the i40evf relative code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (1): app/testpmd: fix max rx packet length for VLAN packets app/test-pmd/testpmd.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured SteveX Yang @ 2020-10-28 3:03 ` SteveX Yang 2020-10-29 8:41 ` Ferruh Yigit 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured SteveX Yang 1 sibling, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-10-28 3:03 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, SteveX Yang When the max rx packet length is smaller than the sum of mtu size and ether overhead size, it should be enlarged, otherwise the VLAN packets will be dropped. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- app/test-pmd/testpmd.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 33fc0fddf..754066950 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1421,6 +1421,7 @@ init_config(void) struct rte_gro_param gro_param; uint32_t gso_types; uint16_t data_size; + uint16_t overhead_len; bool warning = 0; int k; int ret; @@ -1457,6 +1458,25 @@ init_config(void) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); + /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */ + if (port->dev_info.max_rx_pktlen && port->dev_info.max_mtu) + overhead_len = port->dev_info.max_rx_pktlen - + port->dev_info.max_mtu; + else + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + + port->dev_conf.rxmode.max_rx_pkt_len = + RTE_ETHER_MTU + overhead_len; + + /* + * Workaround: only adapt to RTE_ETHER_MAX_LEN as + * jumbo frame condition. 
+ */ + if (port->dev_conf.rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN) { + port->dev_conf.rxmode.offloads |= + DEV_RX_OFFLOAD_JUMBO_FRAME; + } + if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) port->dev_conf.txmode.offloads &= -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
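Putting the v7 pieces together: the per-port overhead is derived from dev_info when the PMD reports both limits, the frame is sized for RTE_ETHER_MTU, and the jumbo-flag workaround covers ethdev's behaviour of resetting any max_rx_pkt_len above RTE_ETHER_MAX_LEN when the flag is off. A self-contained sketch; the RTE_* constants match DPDK's rte_ether.h, and the offload bit value is assumed for the sketch:

```c
#include <stdint.h>

#define RTE_ETHER_HDR_LEN 14
#define RTE_ETHER_CRC_LEN 4
#define RTE_ETHER_MTU 1500
#define RTE_ETHER_MAX_LEN 1518
#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x800ULL /* bit value assumed */

struct port_cfg {
	uint32_t max_rx_pkt_len;
	uint64_t rx_offloads;
};

/* Mirror the v7 init_config() logic: derive the port's L2 overhead,
 * size the frame for RTE_ETHER_MTU, and raise the jumbo flag so ethdev
 * does not silently reset max_rx_pkt_len back to RTE_ETHER_MAX_LEN. */
static void init_port_cfg(struct port_cfg *cfg, uint32_t max_rx_pktlen,
			  uint16_t max_mtu)
{
	uint16_t overhead_len;

	if (max_rx_pktlen != 0 && max_mtu != 0)
		overhead_len = (uint16_t)(max_rx_pktlen - max_mtu);
	else
		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;

	cfg->max_rx_pkt_len = RTE_ETHER_MTU + overhead_len;
	if (cfg->max_rx_pkt_len > RTE_ETHER_MAX_LEN)
		cfg->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
}
```

For a port reporting max_rx_pktlen = 9728 and max_mtu = 9702 the derived overhead is 26 bytes, so max_rx_pkt_len becomes 1526 and the jumbo flag is set even though the MTU is still 1500, which is precisely the mismatch Ferruh's review points at.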
* Re: [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-10-29 8:41 ` Ferruh Yigit 0 siblings, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-10-29 8:41 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang On 10/28/2020 3:03 AM, SteveX Yang wrote: > When the max rx packet length is smaller than the sum of mtu size and > ether overhead size, it should be enlarged, otherwise the VLAN packets > will be dropped. > > Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > app/test-pmd/testpmd.c | 20 ++++++++++++++++++++ > 1 file changed, 20 insertions(+) > > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c > index 33fc0fddf..754066950 100644 > --- a/app/test-pmd/testpmd.c > +++ b/app/test-pmd/testpmd.c > @@ -1421,6 +1421,7 @@ init_config(void) > struct rte_gro_param gro_param; > uint32_t gso_types; > uint16_t data_size; > + uint16_t overhead_len; > bool warning = 0; > int k; > int ret; > @@ -1457,6 +1458,25 @@ init_config(void) > rte_exit(EXIT_FAILURE, > "rte_eth_dev_info_get() failed\n"); > > + /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */ > + if (port->dev_info.max_rx_pktlen && port->dev_info.max_mtu) > + overhead_len = port->dev_info.max_rx_pktlen - > + port->dev_info.max_mtu; > + else > + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; > + > + port->dev_conf.rxmode.max_rx_pkt_len = > + RTE_ETHER_MTU + overhead_len; > + > + /* > + * Workaround: only adapt to RTE_ETHER_MAX_LEN as > + * jumbo frame condition. 
> + */ > + if (port->dev_conf.rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN) { > + port->dev_conf.rxmode.offloads |= > + DEV_RX_OFFLOAD_JUMBO_FRAME; > + } I think this jumbo frame set can be dropped, above just set the frame size as "RTE_ETHER_MTU + overhead_len", so it can't be jumbo frame, right? ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured SteveX Yang 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-11-02 8:52 ` SteveX Yang 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition SteveX Yang 1 sibling, 2 replies; 94+ messages in thread From: SteveX Yang @ 2020-11-02 8:52 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, SteveX Yang Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU. v8: * update workaround comment; * add deprecation for ethdev; v7: * drop patch 2 due to Jumbo frame flag issue; v6: * change the max_rx_pkt_len in the init_config of testpmd; * change the mtu value in the rte_ethdev; v5: * update comments and commit messages; v4: * add the adjust condition for max_rx_pkt_len; v3: * change the i40evf relative code; v2: * change the max_rx_pkt_len via mtu_set ops; SteveX Yang (2): app/testpmd: fix max rx packet length for VLAN packets doc: annouce deprecation of jumbo frame flag condition app/test-pmd/testpmd.c | 23 +++++++++++++++++++++++ doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ 2 files changed, 35 insertions(+) -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured SteveX Yang @ 2020-11-02 8:52 ` SteveX Yang 2020-11-02 11:48 ` Ferruh Yigit 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition SteveX Yang 1 sibling, 1 reply; 94+ messages in thread From: SteveX Yang @ 2020-11-02 8:52 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, SteveX Yang When the max rx packet length is smaller than the sum of mtu size and ether overhead size, it should be enlarged, otherwise the VLAN packets will be dropped. Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- app/test-pmd/testpmd.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 33fc0fddf..c263121a9 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1421,6 +1421,7 @@ init_config(void) struct rte_gro_param gro_param; uint32_t gso_types; uint16_t data_size; + uint16_t overhead_len; bool warning = 0; int k; int ret; @@ -1457,6 +1458,28 @@ init_config(void) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); + /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */ + if (port->dev_info.max_rx_pktlen && port->dev_info.max_mtu) + overhead_len = port->dev_info.max_rx_pktlen - + port->dev_info.max_mtu; + else + overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + + port->dev_conf.rxmode.max_rx_pkt_len = + RTE_ETHER_MTU + overhead_len; + + /* + * This is workaround to avoid resize max rx packet len. + * Ethdev assumes jumbo frame size must be greater than + * RTE_ETHER_MAX_LEN, and will resize 'max_rx_pkt_len' to + * default value when it is greater than RTE_ETHER_MAX_LEN + * for normal frame. 
+ */ + if (port->dev_conf.rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN) { + port->dev_conf.rxmode.offloads |= + DEV_RX_OFFLOAD_JUMBO_FRAME; + } + if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) port->dev_conf.txmode.offloads &= -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-11-02 11:48 ` Ferruh Yigit 2020-11-03 13:29 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-02 11:48 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman On 11/2/2020 8:52 AM, SteveX Yang wrote: > When the max rx packet length is smaller than the sum of mtu size and > ether overhead size, it should be enlarged, otherwise the VLAN packets > will be dropped. > > Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-02 11:48 ` Ferruh Yigit @ 2020-11-03 13:29 ` Ferruh Yigit 2020-11-04 16:51 ` Thomas Monjalon 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-03 13:29 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > On 11/2/2020 8:52 AM, SteveX Yang wrote: >> When the max rx packet length is smaller than the sum of mtu size and >> ether overhead size, it should be enlarged, otherwise the VLAN packets >> will be dropped. >> >> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >> >> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > Applied to dpdk-next-net/main, thanks. only 1/2 applied since discussion is going on for 2/2. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-03 13:29 ` Ferruh Yigit @ 2020-11-04 16:51 ` Thomas Monjalon 2020-11-04 17:07 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Thomas Monjalon @ 2020-11-04 16:51 UTC (permalink / raw) To: SteveX Yang, Ferruh Yigit Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand, andrew.rybchenko 03/11/2020 14:29, Ferruh Yigit: > On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > > On 11/2/2020 8:52 AM, SteveX Yang wrote: > >> When the max rx packet length is smaller than the sum of mtu size and > >> ether overhead size, it should be enlarged, otherwise the VLAN packets > >> will be dropped. > >> > >> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > >> > >> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > > > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > > Applied to dpdk-next-net/main, thanks. > > only 1/2 applied since discussion is going on for 2/2. I'm not sure this testpmd change is good. Reminder: testpmd is for testing the PMDs. Don't we want to see VLAN packets dropped in the case described above? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-04 16:51 ` Thomas Monjalon @ 2020-11-04 17:07 ` Ferruh Yigit 2020-11-04 17:55 ` Thomas Monjalon 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-04 17:07 UTC (permalink / raw) To: Thomas Monjalon, SteveX Yang Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand, andrew.rybchenko On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > 03/11/2020 14:29, Ferruh Yigit: >> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>> When the max rx packet length is smaller than the sum of mtu size and >>>> ether overhead size, it should be enlarged, otherwise the VLAN packets >>>> will be dropped. >>>> >>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>> >>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>> >>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >> >> Applied to dpdk-next-net/main, thanks. >> >> only 1/2 applied since discussion is going on for 2/2. > > I'm not sure this testpmd change is good. > > Reminder: testpmd is for testing the PMDs. > Don't we want to see VLAN packets dropped in the case described above? > The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' value, which makes MTU between 1492-1500 depending on PMD. It is application responsibility to provide correct 'max_rx_pkt_len'. I guess the original intention was to set MTU as 1500 but was not correct for all PMDs and this patch is fixing it. The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' will give MTU 1500), the other patch in the set is to fix it later. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-04 17:07 ` Ferruh Yigit @ 2020-11-04 17:55 ` Thomas Monjalon 2020-11-04 20:19 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Thomas Monjalon @ 2020-11-04 17:55 UTC (permalink / raw) To: SteveX Yang, Ferruh Yigit Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand, andrew.rybchenko 04/11/2020 18:07, Ferruh Yigit: > On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > > 03/11/2020 14:29, Ferruh Yigit: > >> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > >>> On 11/2/2020 8:52 AM, SteveX Yang wrote: > >>>> When the max rx packet length is smaller than the sum of mtu size and > >>>> ether overhead size, it should be enlarged, otherwise the VLAN packets > >>>> will be dropped. > >>>> > >>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > >>>> > >>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>> > >>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > >> > >> Applied to dpdk-next-net/main, thanks. > >> > >> only 1/2 applied since discussion is going on for 2/2. > > > > I'm not sure this testpmd change is good. > > > > Reminder: testpmd is for testing the PMDs. > > Don't we want to see VLAN packets dropped in the case described above? > > > > The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all PMDs, > otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' value, which makes MTU > between 1492-1500 depending on PMD. > > It is application responsibility to provide correct 'max_rx_pkt_len'. > I guess the original intention was to set MTU as 1500 but was not correct for > all PMDs and this patch is fixing it. > > The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' will give MTU > 1500), the other patch in the set is to fix it later. OK but the testpmd patch is just hiding the issue, isn't it? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-04 17:55 ` Thomas Monjalon @ 2020-11-04 20:19 ` Ferruh Yigit 2020-11-04 20:39 ` Thomas Monjalon 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-04 20:19 UTC (permalink / raw) To: Thomas Monjalon, SteveX Yang Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand, andrew.rybchenko On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > 04/11/2020 18:07, Ferruh Yigit: >> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>> 03/11/2020 14:29, Ferruh Yigit: >>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>> When the max rx packet length is smaller than the sum of mtu size and >>>>>> ether overhead size, it should be enlarged, otherwise the VLAN packets >>>>>> will be dropped. >>>>>> >>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>> >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>> >>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>> >>>> Applied to dpdk-next-net/main, thanks. >>>> >>>> only 1/2 applied since discussion is going on for 2/2. >>> >>> I'm not sure this testpmd change is good. >>> >>> Reminder: testpmd is for testing the PMDs. >>> Don't we want to see VLAN packets dropped in the case described above? >>> >> >> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all PMDs, >> otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' value, which makes MTU >> between 1492-1500 depending on PMD. >> >> It is application responsibility to provide correct 'max_rx_pkt_len'. >> I guess the original intention was to set MTU as 1500 but was not correct for >> all PMDs and this patch is fixing it. >> >> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' will give MTU >> 1500), the other patch in the set is to fix it later. > > OK but the testpmd patch is just hiding the issue, isn't it? 
> I don't think so, the issue was the application (testpmd) setting 'max_rx_pkt_len' wrong. What is hidden? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-04 20:19 ` Ferruh Yigit @ 2020-11-04 20:39 ` Thomas Monjalon 2020-11-05 8:54 ` Andrew Rybchenko 0 siblings, 1 reply; 94+ messages in thread From: Thomas Monjalon @ 2020-11-04 20:39 UTC (permalink / raw) To: SteveX Yang, Ferruh Yigit Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand, andrew.rybchenko 04/11/2020 21:19, Ferruh Yigit: > On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > > 04/11/2020 18:07, Ferruh Yigit: > >> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > >>> 03/11/2020 14:29, Ferruh Yigit: > >>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > >>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: > >>>>>> When the max rx packet length is smaller than the sum of mtu size and > >>>>>> ether overhead size, it should be enlarged, otherwise the VLAN packets > >>>>>> will be dropped. > >>>>>> > >>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > >>>>>> > >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>> > >>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > >>>> > >>>> Applied to dpdk-next-net/main, thanks. > >>>> > >>>> only 1/2 applied since discussion is going on for 2/2. > >>> > >>> I'm not sure this testpmd change is good. > >>> > >>> Reminder: testpmd is for testing the PMDs. > >>> Don't we want to see VLAN packets dropped in the case described above? > >>> > >> > >> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all PMDs, > >> otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' value, which makes MTU > >> between 1492-1500 depending on PMD. > >> > >> It is application responsibility to provide correct 'max_rx_pkt_len'. > >> I guess the original intention was to set MTU as 1500 but was not correct for > >> all PMDs and this patch is fixing it. 
> >> > >> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' will give MTU > >> 1500), the other patch in the set is to fix it later. > > > > OK but the testpmd patch is just hiding the issue, isn't it? > > > > I don't think so, issue was application (testpmd) setting the 'max_rx_pkt_len' > wrong. > > What is hidden? I was looking for adding a helper in ethdev API. But I think I can agree with your way of thinking. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-04 20:39 ` Thomas Monjalon @ 2020-11-05 8:54 ` Andrew Rybchenko [not found] ` <DM6PR11MB43622CC5DF485DD034037CD3F9EE0@DM6PR11MB4362.namprd11.prod.outlook.com> 0 siblings, 1 reply; 94+ messages in thread From: Andrew Rybchenko @ 2020-11-05 8:54 UTC (permalink / raw) To: Thomas Monjalon, SteveX Yang, Ferruh Yigit Cc: dev, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, david.marchand On 11/4/20 11:39 PM, Thomas Monjalon wrote: > 04/11/2020 21:19, Ferruh Yigit: >> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: >>> 04/11/2020 18:07, Ferruh Yigit: >>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>>>> 03/11/2020 14:29, Ferruh Yigit: >>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>>>> When the max rx packet length is smaller than the sum of mtu size and >>>>>>>> ether overhead size, it should be enlarged, otherwise the VLAN packets >>>>>>>> will be dropped. >>>>>>>> >>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>>>> >>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>> >>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>>>> >>>>>> Applied to dpdk-next-net/main, thanks. >>>>>> >>>>>> only 1/2 applied since discussion is going on for 2/2. >>>>> >>>>> I'm not sure this testpmd change is good. >>>>> >>>>> Reminder: testpmd is for testing the PMDs. >>>>> Don't we want to see VLAN packets dropped in the case described above? >>>>> >>>> >>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all PMDs, >>>> otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' value, which makes MTU >>>> between 1492-1500 depending on PMD. >>>> >>>> It is application responsibility to provide correct 'max_rx_pkt_len'. >>>> I guess the original intention was to set MTU as 1500 but was not correct for >>>> all PMDs and this patch is fixing it. 
>>>> >>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' will give MTU >>>> 1500), the other patch in the set is to fix it later. >>> >>> OK but the testpmd patch is just hiding the issue, isn't it? >>> >> >> I don't think so, issue was application (testpmd) setting the 'max_rx_pkt_len' >> wrong. >> >> What is hidden? > > I was looking for adding a helper in ethdev API. > But I think I can agree with your way of thinking. > The patch breaks running testpmd on Virtio-Net because the driver populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to UINT16_MAX as it was filled in by ethdev. As the result: Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to configure port 0 ^ permalink raw reply [flat|nested] 94+ messages in thread
[parent not found: <DM6PR11MB43622CC5DF485DD034037CD3F9EE0@DM6PR11MB4362.namprd11.prod.outlook.com>]
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets [not found] ` <DM6PR11MB43622CC5DF485DD034037CD3F9EE0@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-11-05 10:37 ` Ferruh Yigit 2020-11-05 10:44 ` Thomas Monjalon 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-05 10:37 UTC (permalink / raw) To: Yang, SteveX, Andrew Rybchenko, Thomas Monjalon Cc: dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman, david.marchand On 11/5/2020 9:33 AM, Yang, SteveX wrote: > > >> -----Original Message----- >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> >> Sent: Thursday, November 5, 2020 4:54 PM >> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX >> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> >> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; >> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; >> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming >> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; >> david.marchand@redhat.com >> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet >> length for VLAN packets >> >> On 11/4/20 11:39 PM, Thomas Monjalon wrote: >>> 04/11/2020 21:19, Ferruh Yigit: >>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: >>>>> 04/11/2020 18:07, Ferruh Yigit: >>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>>>>>> 03/11/2020 14:29, Ferruh Yigit: >>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>>>>>> When the max rx packet length is smaller than the sum of mtu >>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise >>>>>>>>>> the VLAN packets will be dropped. 
>>>>>>>>>> >>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>>>>>> >>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>> >>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>>>>>> >>>>>>>> Applied to dpdk-next-net/main, thanks. >>>>>>>> >>>>>>>> only 1/2 applied since discussion is going on for 2/2. >>>>>>> >>>>>>> I'm not sure this testpmd change is good. >>>>>>> >>>>>>> Reminder: testpmd is for testing the PMDs. >>>>>>> Don't we want to see VLAN packets dropped in the case described >> above? >>>>>>> >>>>>> >>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all >>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' >> value, >>>>>> which makes MTU between 1492-1500 depending on PMD. >>>>>> >>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. >>>>>> I guess the original intention was to set MTU as 1500 but was not >>>>>> correct for all PMDs and this patch is fixing it. >>>>>> >>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' >> will >>>>>> give MTU 1500), the other patch in the set is to fix it later. >>>>> >>>>> OK but the testpmd patch is just hiding the issue, isn't it? >>>>> >>>> >>>> I don't think so, issue was application (testpmd) setting the >> 'max_rx_pkt_len' >>>> wrong. >>>> >>>> What is hidden? >>> >>> I was looking for adding a helper in ethdev API. >>> But I think I can agree with your way of thinking. >>> >> >> The patch breaks running testpmd on Virtio-Net because the driver >> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to >> UINT16_MAX as it was filled in by ethdev. As the result: >> >> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to >> configure port 0 > > Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). > More strict checking condition will be added within new patch sooner. 
> :( For drivers not providing 'max_mtu' information explicitly, the default 'UINT16_MAX' is set in the ethdev layer. This prevents calculating the PMD-specific 'overhead', and the logic in the patch is broken. Indeed this makes for an inconsistency in the driver too; for example, virtio claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" but 'max_mtu' as UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is 'VIRTIO_MAX_RX_PKTLEN'. When the PMDs are fixed, the logic in this patch can work, but I am not sure post -rc2 is a good time to start fixing the PMDs. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 10:37 ` Ferruh Yigit @ 2020-11-05 10:44 ` Thomas Monjalon 2020-11-05 10:48 ` Thomas Monjalon 2020-11-05 10:49 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets Ferruh Yigit 0 siblings, 2 replies; 94+ messages in thread From: Thomas Monjalon @ 2020-11-05 10:44 UTC (permalink / raw) To: Yang, SteveX, Andrew Rybchenko, Ferruh Yigit Cc: dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman, david.marchand 05/11/2020 11:37, Ferruh Yigit: > On 11/5/2020 9:33 AM, Yang, SteveX wrote: > > > > > >> -----Original Message----- > >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > >> Sent: Thursday, November 5, 2020 4:54 PM > >> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX > >> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> > >> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > >> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; > >> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming > >> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; > >> david.marchand@redhat.com > >> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet > >> length for VLAN packets > >> > >> On 11/4/20 11:39 PM, Thomas Monjalon wrote: > >>> 04/11/2020 21:19, Ferruh Yigit: > >>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > >>>>> 04/11/2020 18:07, Ferruh Yigit: > >>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > >>>>>>> 03/11/2020 14:29, Ferruh Yigit: > >>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > >>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: > >>>>>>>>>> When the max rx packet length is smaller than the sum of mtu > >>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise > >>>>>>>>>> the VLAN packets will be dropped. 
> >>>>>>>>>> > >>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > >>>>>>>>>> > >>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>>>>> > >>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > >>>>>>>> > >>>>>>>> Applied to dpdk-next-net/main, thanks. > >>>>>>>> > >>>>>>>> only 1/2 applied since discussion is going on for 2/2. > >>>>>>> > >>>>>>> I'm not sure this testpmd change is good. > >>>>>>> > >>>>>>> Reminder: testpmd is for testing the PMDs. > >>>>>>> Don't we want to see VLAN packets dropped in the case described > >> above? > >>>>>>> > >>>>>> > >>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all > >>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' > >> value, > >>>>>> which makes MTU between 1492-1500 depending on PMD. > >>>>>> > >>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. > >>>>>> I guess the original intention was to set MTU as 1500 but was not > >>>>>> correct for all PMDs and this patch is fixing it. > >>>>>> > >>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' > >> will > >>>>>> give MTU 1500), the other patch in the set is to fix it later. > >>>>> > >>>>> OK but the testpmd patch is just hiding the issue, isn't it? > >>>>> > >>>> > >>>> I don't think so, issue was application (testpmd) setting the > >> 'max_rx_pkt_len' > >>>> wrong. > >>>> > >>>> What is hidden? > >>> > >>> I was looking for adding a helper in ethdev API. > >>> But I think I can agree with your way of thinking. > >>> > >> > >> The patch breaks running testpmd on Virtio-Net because the driver > >> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to > >> UINT16_MAX as it was filled in by ethdev. As the result: > >> > >> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to > >> configure port 0 > > > > Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). 
> > More strict checking condition will be added within new patch sooner. > > > > :( > > For drivers not providing 'max_mtu' information explicitly, the default > 'UINT16_MAX' is set in ethdev layer. > This prevents calculating PMD specific 'overhead' and the logic in the patch is > broken. > > Indeed this makes inconsistency in the driver too, for example for virtio, it > claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as > UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is > 'VIRTIO_MAX_RX_PKTLEN'. > > When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is > good time to start fixing the PMDs. Do you suggest revert is the best choice here? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 10:44 ` Thomas Monjalon @ 2020-11-05 10:48 ` Thomas Monjalon 2020-11-05 10:50 ` Ferruh Yigit 2020-11-05 10:49 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets Ferruh Yigit 1 sibling, 1 reply; 94+ messages in thread From: Thomas Monjalon @ 2020-11-05 10:48 UTC (permalink / raw) To: Yang, SteveX, Andrew Rybchenko, Ferruh Yigit Cc: dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, ajit.khaparde, maxime.coquelin, matan, viacheslavo, hemant.agrawal, bruce.richardson, stephen + more maintainers Cc'ed We have a critical issue with testpmd in -rc2. It is blocking a lot of testing. Would be good to do a -rc3 today. Please see below. 05/11/2020 11:44, Thomas Monjalon: > 05/11/2020 11:37, Ferruh Yigit: > > On 11/5/2020 9:33 AM, Yang, SteveX wrote: > > > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > > >> Sent: Thursday, November 5, 2020 4:54 PM > > >> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX > > >> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> > > >> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > > >> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; > > >> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming > > >> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; > > >> david.marchand@redhat.com > > >> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet > > >> length for VLAN packets > > >> > > >> On 11/4/20 11:39 PM, Thomas Monjalon wrote: > > >>> 04/11/2020 21:19, Ferruh Yigit: > > >>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > > >>>>> 04/11/2020 18:07, Ferruh Yigit: > > >>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > > >>>>>>> 03/11/2020 14:29, Ferruh Yigit: > > >>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > > >>>>>>>>> On 
11/2/2020 8:52 AM, SteveX Yang wrote: > > >>>>>>>>>> When the max rx packet length is smaller than the sum of mtu > > >>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise > > >>>>>>>>>> the VLAN packets will be dropped. > > >>>>>>>>>> > > >>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > >>>>>>>>>> > > >>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > >>>>>>>>> > > >>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > > >>>>>>>> > > >>>>>>>> Applied to dpdk-next-net/main, thanks. > > >>>>>>>> > > >>>>>>>> only 1/2 applied since discussion is going on for 2/2. > > >>>>>>> > > >>>>>>> I'm not sure this testpmd change is good. > > >>>>>>> > > >>>>>>> Reminder: testpmd is for testing the PMDs. > > >>>>>>> Don't we want to see VLAN packets dropped in the case described > > >> above? > > >>>>>>> > > >>>>>> > > >>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all > > >>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' > > >> value, > > >>>>>> which makes MTU between 1492-1500 depending on PMD. > > >>>>>> > > >>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. > > >>>>>> I guess the original intention was to set MTU as 1500 but was not > > >>>>>> correct for all PMDs and this patch is fixing it. > > >>>>>> > > >>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' > > >> will > > >>>>>> give MTU 1500), the other patch in the set is to fix it later. > > >>>>> > > >>>>> OK but the testpmd patch is just hiding the issue, isn't it? > > >>>>> > > >>>> > > >>>> I don't think so, issue was application (testpmd) setting the > > >> 'max_rx_pkt_len' > > >>>> wrong. > > >>>> > > >>>> What is hidden? > > >>> > > >>> I was looking for adding a helper in ethdev API. > > >>> But I think I can agree with your way of thinking. 
> > >>> > > >> > > >> The patch breaks running testpmd on Virtio-Net because the driver > > >> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to > > >> UINT16_MAX as it was filled in by ethdev. As the result: > > >> > > >> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to > > >> configure port 0 > > > > > > Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). > > > More strict checking condition will be added within new patch sooner. > > > > > > > :( > > > > For drivers not providing 'max_mtu' information explicitly, the default > > 'UINT16_MAX' is set in ethdev layer. > > This prevents calculating PMD specific 'overhead' and the logic in the patch is > > broken. > > > > Indeed this makes inconsistency in the driver too, for example for virtio, it > > claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as > > UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is > > 'VIRTIO_MAX_RX_PKTLEN'. > > > > When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is > > good time to start fixing the PMDs. > > Do you suggest revert is the best choice here? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 10:48 ` Thomas Monjalon @ 2020-11-05 10:50 ` Ferruh Yigit 2020-11-05 13:52 ` Olivier Matz 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-05 10:50 UTC (permalink / raw) To: Thomas Monjalon, Yang, SteveX, Andrew Rybchenko Cc: dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, ajit.khaparde, maxime.coquelin, matan, viacheslavo, hemant.agrawal, bruce.richardson, stephen On 11/5/2020 10:48 AM, Thomas Monjalon wrote: > + more maintainers Cc'ed > > We have a critical issue with testpmd in -rc2. > It is blocking a lot of testing. > Would be good to do a -rc3 today. > Please see below. > > 05/11/2020 11:44, Thomas Monjalon: >> 05/11/2020 11:37, Ferruh Yigit: >>> On 11/5/2020 9:33 AM, Yang, SteveX wrote: >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> >>>>> Sent: Thursday, November 5, 2020 4:54 PM >>>>> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX >>>>> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> >>>>> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; >>>>> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; >>>>> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming >>>>> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; >>>>> david.marchand@redhat.com >>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet >>>>> length for VLAN packets >>>>> >>>>> On 11/4/20 11:39 PM, Thomas Monjalon wrote: >>>>>> 04/11/2020 21:19, Ferruh Yigit: >>>>>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: >>>>>>>> 04/11/2020 18:07, Ferruh Yigit: >>>>>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>>>>>>>>> 03/11/2020 14:29, Ferruh Yigit: >>>>>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>>>>>>>>> When the max rx packet 
length is smaller than the sum of mtu >>>>>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise >>>>>>>>>>>>> the VLAN packets will be dropped. >>>>>>>>>>>>> >>>>>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>>>>>>>>> >>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>> >>>>>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>>>>>>>>> >>>>>>>>>>> Applied to dpdk-next-net/main, thanks. >>>>>>>>>>> >>>>>>>>>>> only 1/2 applied since discussion is going on for 2/2. >>>>>>>>>> >>>>>>>>>> I'm not sure this testpmd change is good. >>>>>>>>>> >>>>>>>>>> Reminder: testpmd is for testing the PMDs. >>>>>>>>>> Don't we want to see VLAN packets dropped in the case described >>>>> above? >>>>>>>>>> >>>>>>>>> >>>>>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all >>>>>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' >>>>> value, >>>>>>>>> which makes MTU between 1492-1500 depending on PMD. >>>>>>>>> >>>>>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. >>>>>>>>> I guess the original intention was to set MTU as 1500 but was not >>>>>>>>> correct for all PMDs and this patch is fixing it. >>>>>>>>> >>>>>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' >>>>> will >>>>>>>>> give MTU 1500), the other patch in the set is to fix it later. >>>>>>>> >>>>>>>> OK but the testpmd patch is just hiding the issue, isn't it? >>>>>>>> >>>>>>> >>>>>>> I don't think so, issue was application (testpmd) setting the >>>>> 'max_rx_pkt_len' >>>>>>> wrong. >>>>>>> >>>>>>> What is hidden? >>>>>> >>>>>> I was looking for adding a helper in ethdev API. >>>>>> But I think I can agree with your way of thinking. >>>>>> >>>>> >>>>> The patch breaks running testpmd on Virtio-Net because the driver >>>>> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to >>>>> UINT16_MAX as it was filled in by ethdev. 
As the result: >>>>> >>>>> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to >>>>> configure port 0 >>>> >>>> Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). >>>> More strict checking condition will be added within new patch sooner. >>>> >>> >>> :( >>> >>> For drivers not providing 'max_mtu' information explicitly, the default >>> 'UINT16_MAX' is set in ethdev layer. >>> This prevents calculating PMD specific 'overhead' and the logic in the patch is >>> broken. >>> >>> Indeed this makes inconsistency in the driver too, for example for virtio, it >>> claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as >>> UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is >>> 'VIRTIO_MAX_RX_PKTLEN'. >>> >>> When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is >>> good time to start fixing the PMDs. >> >> Do you suggest revert is the best choice here? > > (copy/pasting previous reply to this email) One option is revert, but then the issue this patch is trying to fix still remains. The other option is to extend the patch as Steve sent [1]; the check there is more like a workaround in the application, so not nice to have, but with extending the deprecation notice (the other patch in this patchset) to fix the PMDs too in the next release, I would be OK to have these checks. What do you think? [1] https://patches.dpdk.org/patch/83717/ ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 10:50 ` Ferruh Yigit @ 2020-11-05 13:52 ` Olivier Matz 2020-11-05 15:11 ` Lance Richardson 0 siblings, 1 reply; 94+ messages in thread From: Olivier Matz @ 2020-11-05 13:52 UTC (permalink / raw) To: Ferruh Yigit Cc: Thomas Monjalon, Yang, SteveX, Andrew Rybchenko, dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, ajit.khaparde, maxime.coquelin, matan, viacheslavo, hemant.agrawal, bruce.richardson, stephen On Thu, Nov 05, 2020 at 10:50:45AM +0000, Ferruh Yigit wrote: > On 11/5/2020 10:48 AM, Thomas Monjalon wrote: > > + more maintainers Cc'ed > > > > We have a critical issue with testpmd in -rc2. > > It is blocking a lot of testing. > > Would be good to do a -rc3 today. > > Please see below. > > > > 05/11/2020 11:44, Thomas Monjalon: > > > 05/11/2020 11:37, Ferruh Yigit: > > > > On 11/5/2020 9:33 AM, Yang, SteveX wrote: > > > > > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > > > > > > Sent: Thursday, November 5, 2020 4:54 PM > > > > > > To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX > > > > > > <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> > > > > > > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > > > > > > Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; > > > > > > Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming > > > > > > <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; > > > > > > david.marchand@redhat.com > > > > > > Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet > > > > > > length for VLAN packets > > > > > > > > > > > > On 11/4/20 11:39 PM, Thomas Monjalon wrote: > > > > > > > 04/11/2020 21:19, Ferruh Yigit: > > > > > > > > On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > > > > > > > > > 04/11/2020 18:07, Ferruh Yigit: > > > > > > > > > > On 11/4/2020 4:51 PM, 
Thomas Monjalon wrote: > > > > > > > > > > > 03/11/2020 14:29, Ferruh Yigit: > > > > > > > > > > > > On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > > > > > > > > > > > > > On 11/2/2020 8:52 AM, SteveX Yang wrote: > > > > > > > > > > > > > > When the max rx packet length is smaller than the sum of mtu > > > > > > > > > > > > > > size and ether overhead size, it should be enlarged, otherwise > > > > > > > > > > > > > > the VLAN packets will be dropped. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > > > > > > > > > > > > > > > > > > > > > > > > > > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > > > > > > > > > > > > > > > > > > > > > > > > > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > > > > > > > > > > > > > > > > > > > > > > > > Applied to dpdk-next-net/main, thanks. > > > > > > > > > > > > > > > > > > > > > > > > only 1/2 applied since discussion is going on for 2/2. > > > > > > > > > > > > > > > > > > > > > > I'm not sure this testpmd change is good. > > > > > > > > > > > > > > > > > > > > > > Reminder: testpmd is for testing the PMDs. > > > > > > > > > > > Don't we want to see VLAN packets dropped in the case described > > > > > > above? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all > > > > > > > > > > PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' > > > > > > value, > > > > > > > > > > which makes MTU between 1492-1500 depending on PMD. > > > > > > > > > > > > > > > > > > > > It is application responsibility to provide correct 'max_rx_pkt_len'. > > > > > > > > > > I guess the original intention was to set MTU as 1500 but was not > > > > > > > > > > correct for all PMDs and this patch is fixing it. 
> > > > > > > > > > > > > > > > > > > > The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' > > > > > > will > > > > > > > > > > give MTU 1500), the other patch in the set is to fix it later. > > > > > > > > > > > > > > > > > > OK but the testpmd patch is just hiding the issue, isn't it? > > > > > > > > > > > > > > > > > > > > > > > > > I don't think so, issue was application (testpmd) setting the > > > > > > 'max_rx_pkt_len' > > > > > > > > wrong. > > > > > > > > > > > > > > > > What is hidden? > > > > > > > > > > > > > > I was looking for adding a helper in ethdev API. > > > > > > > But I think I can agree with your way of thinking. > > > > > > > > > > > > > > > > > > > The patch breaks running testpmd on Virtio-Net because the driver > > > > > > populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to > > > > > > UINT16_MAX as it was filled in by ethdev. As the result: > > > > > > > > > > > > Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to > > > > > > configure port 0 > > > > > > > > > > Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). > > > > > More strict checking condition will be added within new patch sooner. > > > > > > > > > > > > > :( > > > > > > > > For drivers not providing 'max_mtu' information explicitly, the default > > > > 'UINT16_MAX' is set in ethdev layer. > > > > This prevents calculating PMD specific 'overhead' and the logic in the patch is > > > > broken. > > > > > > > > Indeed this makes inconsistency in the driver too, for example for virtio, it > > > > claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as > > > > UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is > > > > 'VIRTIO_MAX_RX_PKTLEN'. > > > > > > > > When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is > > > > good time to start fixing the PMDs. > > > > > > Do you suggest revert is the best choice here? 
> > > > > > (copy/pasting previous reply to this eamil) > > One option is revert, but than the issue this patch is trying to fix still remain. > > Other option is the extend the patch as Steve sent [1], the check there is > more like workaround in application, so not nice to have them, but with > extending the deprecation notice (other patch in this patchset) to fix PMDs > too in next release, I would be OK to have these checks. What do you think? +1 for this second option. I think it is ok to have a workaround to fix an issue. Clarifying and uniformizing the ethdev/drivers behavior in that area can come in a second time. > [1] > https://patches.dpdk.org/patch/83717/ ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 13:52 ` Olivier Matz @ 2020-11-05 15:11 ` Lance Richardson 2020-11-05 15:56 ` Ferruh Yigit 0 siblings, 1 reply; 94+ messages in thread From: Lance Richardson @ 2020-11-05 15:11 UTC (permalink / raw) To: Olivier Matz Cc: Ferruh Yigit, Thomas Monjalon, Yang, SteveX, Andrew Rybchenko, dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, Ajit Kumar Khaparde, Maxime Coquelin, matan, viacheslavo, hemant.agrawal, Bruce Richardson, Stephen Hemminger With this change, the bnxt driver fails to initialize under testpmd: Configuring Port 0 (socket 0) Port 0 failed to enable Rx offload JUMBO_FRAME Fail to configure port 0 EAL: Error - exiting with code: 1 It appears that the cause is this bit of code in bnxt_ethdev.c: if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) { bp->eth_dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; bp->flags |= BNXT_FLAG_JUMBO; } else { bp->eth_dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; bp->flags &= ~BNXT_FLAG_JUMBO; } Should a PMD be overriding this offload on dev_start()? Or should this test be changed to be based on max_rx_pkt_len instead of mtu? Thanks, Lance On Thu, Nov 5, 2020 at 8:52 AM Olivier Matz <olivier.matz@6wind.com> wrote: > > On Thu, Nov 05, 2020 at 10:50:45AM +0000, Ferruh Yigit wrote: > > On 11/5/2020 10:48 AM, Thomas Monjalon wrote: > > > + more maintainers Cc'ed > > > > > > We have a critical issue with testpmd in -rc2. > > > It is blocking a lot of testing. > > > Would be good to do a -rc3 today. > > > Please see below. 
> > > > > > 05/11/2020 11:44, Thomas Monjalon: > > > > 05/11/2020 11:37, Ferruh Yigit: > > > > > On 11/5/2020 9:33 AM, Yang, SteveX wrote: > > > > > > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > > > > > > > Sent: Thursday, November 5, 2020 4:54 PM > > > > > > > To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX > > > > > > > <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> > > > > > > > Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; > > > > > > > Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; > > > > > > > Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming > > > > > > > <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; > > > > > > > david.marchand@redhat.com > > > > > > > Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet > > > > > > > length for VLAN packets > > > > > > > > > > > > > > On 11/4/20 11:39 PM, Thomas Monjalon wrote: > > > > > > > > 04/11/2020 21:19, Ferruh Yigit: > > > > > > > > > On 11/4/2020 5:55 PM, Thomas Monjalon wrote: > > > > > > > > > > 04/11/2020 18:07, Ferruh Yigit: > > > > > > > > > > > On 11/4/2020 4:51 PM, Thomas Monjalon wrote: > > > > > > > > > > > > 03/11/2020 14:29, Ferruh Yigit: > > > > > > > > > > > > > On 11/2/2020 11:48 AM, Ferruh Yigit wrote: > > > > > > > > > > > > > > On 11/2/2020 8:52 AM, SteveX Yang wrote: > > > > > > > > > > > > > > > When the max rx packet length is smaller than the sum of mtu > > > > > > > > > > > > > > > size and ether overhead size, it should be enlarged, otherwise > > > > > > > > > > > > > > > the VLAN packets will be dropped. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > > > > > > > > > > > > > > > > > > > > > > > > > > > > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> > > > > > > > > > > > > > > > > > > > > > > > > > > Applied to dpdk-next-net/main, thanks. > > > > > > > > > > > > > > > > > > > > > > > > > > only 1/2 applied since discussion is going on for 2/2. > > > > > > > > > > > > > > > > > > > > > > > > I'm not sure this testpmd change is good. > > > > > > > > > > > > > > > > > > > > > > > > Reminder: testpmd is for testing the PMDs. > > > > > > > > > > > > Don't we want to see VLAN packets dropped in the case described > > > > > > > above? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all > > > > > > > > > > > PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' > > > > > > > value, > > > > > > > > > > > which makes MTU between 1492-1500 depending on PMD. > > > > > > > > > > > > > > > > > > > > > > It is application responsibility to provide correct 'max_rx_pkt_len'. > > > > > > > > > > > I guess the original intention was to set MTU as 1500 but was not > > > > > > > > > > > correct for all PMDs and this patch is fixing it. > > > > > > > > > > > > > > > > > > > > > > The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' > > > > > > > will > > > > > > > > > > > give MTU 1500), the other patch in the set is to fix it later. > > > > > > > > > > > > > > > > > > > > OK but the testpmd patch is just hiding the issue, isn't it? > > > > > > > > > > > > > > > > > > > > > > > > > > > > I don't think so, issue was application (testpmd) setting the > > > > > > > 'max_rx_pkt_len' > > > > > > > > > wrong. > > > > > > > > > > > > > > > > > > What is hidden? 
> > > > > > > > > > > > > > > > I was looking for adding a helper in ethdev API. > > > > > > > > But I think I can agree with your way of thinking. > > > > > > > > > > > > > > > > > > > > > > The patch breaks running testpmd on Virtio-Net because the driver > > > > > > > populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to > > > > > > > UINT16_MAX as it was filled in by ethdev. As the result: > > > > > > > > > > > > > > Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to > > > > > > > configure port 0 > > > > > > > > > > > > Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). > > > > > > More strict checking condition will be added within new patch sooner. > > > > > > > > > > > > > > > > :( > > > > > > > > > > For drivers not providing 'max_mtu' information explicitly, the default > > > > > 'UINT16_MAX' is set in ethdev layer. > > > > > This prevents calculating PMD specific 'overhead' and the logic in the patch is > > > > > broken. > > > > > > > > > > Indeed this makes inconsistency in the driver too, for example for virtio, it > > > > > claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as > > > > > UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is > > > > > 'VIRTIO_MAX_RX_PKTLEN'. > > > > > > > > > > When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is > > > > > good time to start fixing the PMDs. > > > > > > > > Do you suggest revert is the best choice here? > > > > > > > > > > (copy/pasting previous reply to this eamil) > > > > One option is revert, but than the issue this patch is trying to fix still remain. > > > > Other option is the extend the patch as Steve sent [1], the check there is > > more like workaround in application, so not nice to have them, but with > > extending the deprecation notice (other patch in this patchset) to fix PMDs > > too in next release, I would be OK to have these checks. What do you think? 
> > +1 for this second option. > > I think it is ok to have a workaround to fix an issue. Clarifying and > uniformizing the ethdev/drivers behavior in that area can come in a > second time. > > > [1] > > https://patches.dpdk.org/patch/83717/ ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 15:11 ` Lance Richardson @ 2020-11-05 15:56 ` Ferruh Yigit 2020-11-05 16:23 ` Lance Richardson 2020-11-05 17:44 ` [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment Thomas Monjalon 0 siblings, 2 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-11-05 15:56 UTC (permalink / raw) To: Lance Richardson, Olivier Matz Cc: Thomas Monjalon, Yang, SteveX, Andrew Rybchenko, dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, Ajit Kumar Khaparde, Maxime Coquelin, matan, viacheslavo, hemant.agrawal, Bruce Richardson, Stephen Hemminger On 11/5/2020 3:11 PM, Lance Richardson wrote: > With this change, the bnxt driver fails to initialize under testpmd: > > Configuring Port 0 (socket 0) > Port 0 failed to enable Rx offload JUMBO_FRAME > Fail to configure port 0 > EAL: Error - exiting with code: 1 > > It appears that the cause is this bit of code in bnxt_ethdev.c: > > if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) { > bp->eth_dev->data->dev_conf.rxmode.offloads |= > DEV_RX_OFFLOAD_JUMBO_FRAME; > bp->flags |= BNXT_FLAG_JUMBO; > } else { > bp->eth_dev->data->dev_conf.rxmode.offloads &= > ~DEV_RX_OFFLOAD_JUMBO_FRAME; > bp->flags &= ~BNXT_FLAG_JUMBO; > } > > Should a PMD be overriding this offload on dev_start()? Or should this > test be changed to be based on max_rx_pkt_len instead of mtu? 
> I think testing 'mtu' is the correct thing to do; the problem is somewhere else. First, the code causing the problem in the driver is in another place, in 'bnxt_mtu_set_op()': if (new_mtu > RTE_ETHER_MTU) { bp->flags |= BNXT_FLAG_JUMBO; bp->eth_dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; } else { bp->eth_dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; bp->flags &= ~BNXT_FLAG_JUMBO; } The backtrace is rte_eth_dev_configure() bnxt_dev_configure_op() bnxt_mtu_set_op() //cleans the JUMBO FRAME CONFIG //complains requested JUMBO FRAME is not set The way 'ethdev' checks for jumbo frames is wrong: http://lxr.dpdk.org/dpdk/latest/source/lib/librte_ethdev/rte_ethdev.c#L1344 If the application doesn't set the JUMBO FRAME flag, it doesn't allow max_rx_pkt_len to be more than 'RTE_ETHER_MAX_LEN'; instead this should be checked against the MTU, not 'RTE_ETHER_MAX_LEN'. The deprecation notice in the other patch is to fix this. To work around the above behavior, testpmd sets the JUMBO FRAME flag, and bnxt detects that the requested frame size doesn't require JUMBO FRAME support and unsets the flag during configure, but this time the 'rte_eth_dev_configure()' API complains that the requested offload (JUMBO FRAME) is not enabled by the driver. The ethdev part was not fixed in this release because there are PMDs testing the JUMBO frame flag against 'max_rx_pkt_len', and we didn't want to create unexpected side effects for them. Not sure what to do. Perhaps we can revert the patch for this release, and in the next release we can fix testpmd, ethdev and PMDs altogether. 
Even possible to remove the JUMBO FRAME offload flag as already suggested: https://mails.dpdk.org/archives/dev/2020-November/190940.html > Thanks, > Lance > > On Thu, Nov 5, 2020 at 8:52 AM Olivier Matz <olivier.matz@6wind.com> wrote: >> >> On Thu, Nov 05, 2020 at 10:50:45AM +0000, Ferruh Yigit wrote: >>> On 11/5/2020 10:48 AM, Thomas Monjalon wrote: >>>> + more maintainers Cc'ed >>>> >>>> We have a critical issue with testpmd in -rc2. >>>> It is blocking a lot of testing. >>>> Would be good to do a -rc3 today. >>>> Please see below. >>>> >>>> 05/11/2020 11:44, Thomas Monjalon: >>>>> 05/11/2020 11:37, Ferruh Yigit: >>>>>> On 11/5/2020 9:33 AM, Yang, SteveX wrote: >>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> >>>>>>>> Sent: Thursday, November 5, 2020 4:54 PM >>>>>>>> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX >>>>>>>> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> >>>>>>>> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; >>>>>>>> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; >>>>>>>> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming >>>>>>>> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; >>>>>>>> david.marchand@redhat.com >>>>>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet >>>>>>>> length for VLAN packets >>>>>>>> >>>>>>>> On 11/4/20 11:39 PM, Thomas Monjalon wrote: >>>>>>>>> 04/11/2020 21:19, Ferruh Yigit: >>>>>>>>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: >>>>>>>>>>> 04/11/2020 18:07, Ferruh Yigit: >>>>>>>>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>>>>>>>>>>>> 03/11/2020 14:29, Ferruh Yigit: >>>>>>>>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>>>>>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>>>>>>>>>>>> When the max rx packet length is smaller than the sum of mtu >>>>>>>>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise >>>>>>>>>>>>>>>> the VLAN packets will be 
dropped. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Applied to dpdk-next-net/main, thanks. >>>>>>>>>>>>>> >>>>>>>>>>>>>> only 1/2 applied since discussion is going on for 2/2. >>>>>>>>>>>>> >>>>>>>>>>>>> I'm not sure this testpmd change is good. >>>>>>>>>>>>> >>>>>>>>>>>>> Reminder: testpmd is for testing the PMDs. >>>>>>>>>>>>> Don't we want to see VLAN packets dropped in the case described >>>>>>>> above? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all >>>>>>>>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' >>>>>>>> value, >>>>>>>>>>>> which makes MTU between 1492-1500 depending on PMD. >>>>>>>>>>>> >>>>>>>>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. >>>>>>>>>>>> I guess the original intention was to set MTU as 1500 but was not >>>>>>>>>>>> correct for all PMDs and this patch is fixing it. >>>>>>>>>>>> >>>>>>>>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' >>>>>>>> will >>>>>>>>>>>> give MTU 1500), the other patch in the set is to fix it later. >>>>>>>>>>> >>>>>>>>>>> OK but the testpmd patch is just hiding the issue, isn't it? >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I don't think so, issue was application (testpmd) setting the >>>>>>>> 'max_rx_pkt_len' >>>>>>>>>> wrong. >>>>>>>>>> >>>>>>>>>> What is hidden? >>>>>>>>> >>>>>>>>> I was looking for adding a helper in ethdev API. >>>>>>>>> But I think I can agree with your way of thinking. >>>>>>>>> >>>>>>>> >>>>>>>> The patch breaks running testpmd on Virtio-Net because the driver >>>>>>>> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to >>>>>>>> UINT16_MAX as it was filled in by ethdev. 
As the result: >>>>>>>> >>>>>>>> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to >>>>>>>> configure port 0 >>>>>>> >>>>>>> Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). >>>>>>> More strict checking condition will be added within new patch sooner. >>>>>>> >>>>>> >>>>>> :( >>>>>> >>>>>> For drivers not providing 'max_mtu' information explicitly, the default >>>>>> 'UINT16_MAX' is set in ethdev layer. >>>>>> This prevents calculating PMD specific 'overhead' and the logic in the patch is >>>>>> broken. >>>>>> >>>>>> Indeed this makes inconsistency in the driver too, for example for virtio, it >>>>>> claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as >>>>>> UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is >>>>>> 'VIRTIO_MAX_RX_PKTLEN'. >>>>>> >>>>>> When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is >>>>>> good time to start fixing the PMDs. >>>>> >>>>> Do you suggest revert is the best choice here? >>>> >>>> >>> >>> (copy/pasting previous reply to this eamil) >>> >>> One option is revert, but than the issue this patch is trying to fix still remain. >>> >>> Other option is the extend the patch as Steve sent [1], the check there is >>> more like workaround in application, so not nice to have them, but with >>> extending the deprecation notice (other patch in this patchset) to fix PMDs >>> too in next release, I would be OK to have these checks. What do you think? >> >> +1 for this second option. >> >> I think it is ok to have a workaround to fix an issue. Clarifying and >> uniformizing the ethdev/drivers behavior in that area can come in a >> second time. >> >>> [1] >>> https://patches.dpdk.org/patch/83717/ ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 15:56 ` Ferruh Yigit @ 2020-11-05 16:23 ` Lance Richardson 2020-11-05 17:44 ` [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment Thomas Monjalon 1 sibling, 0 replies; 94+ messages in thread From: Lance Richardson @ 2020-11-05 16:23 UTC (permalink / raw) To: Ferruh Yigit Cc: Olivier Matz, Thomas Monjalon, Yang, SteveX, Andrew Rybchenko, dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, david.marchand, jerinj, Ajit Kumar Khaparde, Maxime Coquelin, matan, viacheslavo, hemant.agrawal, Bruce Richardson, Stephen Hemminger > First the code cause problem in the driver looks in another place, following in > 'bnxt_mtu_set_op()': > > if (new_mtu > RTE_ETHER_MTU) { > bp->flags |= BNXT_FLAG_JUMBO; > bp->eth_dev->data->dev_conf.rxmode.offloads |= > DEV_RX_OFFLOAD_JUMBO_FRAME; > } else { > bp->eth_dev->data->dev_conf.rxmode.offloads &= > ~DEV_RX_OFFLOAD_JUMBO_FRAME; > bp->flags &= ~BNXT_FLAG_JUMBO; > } > You're correct, the issue in this case is definitely in bnxt_mtu_set_op(), not the similar code that is executed for the start op. +1 to the idea of removing DEV_RX_OFFLOAD_JUMBO_FRAME. ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment 2020-11-05 15:56 ` Ferruh Yigit 2020-11-05 16:23 ` Lance Richardson @ 2020-11-05 17:44 ` Thomas Monjalon 2020-11-05 18:02 ` Lance Richardson 2020-11-05 18:11 ` Ferruh Yigit 1 sibling, 2 replies; 94+ messages in thread From: Thomas Monjalon @ 2020-11-05 17:44 UTC (permalink / raw) To: dev Cc: ferruh.yigit, david.marchand, olivier.matz, andrew.rybchenko, lance.richardson, maxime.coquelin, stable, Wenzhuo Lu, Beilei Xing, Bernard Iremonger, Steve Yang The fix of max_rx_pkt_len for allowing VLAN packets in all cases was breaking configuration of some drivers. Example with virtio: Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to configure port 0 Trying to fix the logic was revealing other issues in some drivers. That's why it is decided to revert. The workaround for the original issue would be to set the MTU explicitly from the application with rte_eth_dev_set_mtu(). Fixes: f6870a7ed6b3 ("app/testpmd: fix max Rx packet length for VLAN packet") Cc: stable@dpdk.org Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> Signed-off-by: Thomas Monjalon <thomas@monjalon.net> --- app/test-pmd/testpmd.c | 23 ----------------------- 1 file changed, 23 deletions(-) diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index c263121a9a..33fc0fddf5 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1421,7 +1421,6 @@ init_config(void) struct rte_gro_param gro_param; uint32_t gso_types; uint16_t data_size; - uint16_t overhead_len; bool warning = 0; int k; int ret; @@ -1458,28 +1457,6 @@ init_config(void) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */ - if (port->dev_info.max_rx_pktlen && port->dev_info.max_mtu) - overhead_len = port->dev_info.max_rx_pktlen - - port->dev_info.max_mtu; - else - overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; - - 
port->dev_conf.rxmode.max_rx_pkt_len = - RTE_ETHER_MTU + overhead_len; - - /* - * This is workaround to avoid resize max rx packet len. - * Ethdev assumes jumbo frame size must be greater than - * RTE_ETHER_MAX_LEN, and will resize 'max_rx_pkt_len' to - * default value when it is greater than RTE_ETHER_MAX_LEN - * for normal frame. - */ - if (port->dev_conf.rxmode.max_rx_pkt_len > RTE_ETHER_MAX_LEN) { - port->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; - } - if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) port->dev_conf.txmode.offloads &= -- 2.28.0 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment 2020-11-05 17:44 ` [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment Thomas Monjalon @ 2020-11-05 18:02 ` Lance Richardson 2020-11-05 18:11 ` Ferruh Yigit 1 sibling, 0 replies; 94+ messages in thread From: Lance Richardson @ 2020-11-05 18:02 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Ferruh Yigit, David Marchand, Olivier Matz, Andrew Rybchenko, Maxime Coquelin, stable, Wenzhuo Lu, Beilei Xing, Bernard Iremonger, Steve Yang On Thu, Nov 5, 2020 at 12:51 PM Thomas Monjalon <thomas@monjalon.net> wrote: > > The fix of max_rx_pkt_len for allowing VLAN packets in all cases > was breaking configuration of some drivers. Example with virtio: > > Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 > Fail to configure port 0 > > Trying to fix the logic was revealing other issues in some drivers. > That's why it is decided to revert. > > The workaround for the original issue would be > to set the MTU explicitly from the application > with rte_eth_dev_set_mtu(). > > Fixes: f6870a7ed6b3 ("app/testpmd: fix max Rx packet length for VLAN packet") > Cc: stable@dpdk.org > > Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net> > --- Acked-by: Lance Richardson <lance.richardson@broadcom.com> ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment 2020-11-05 17:44 ` [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment Thomas Monjalon 2020-11-05 18:02 ` Lance Richardson @ 2020-11-05 18:11 ` Ferruh Yigit 2020-11-05 18:18 ` Thomas Monjalon 1 sibling, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-05 18:11 UTC (permalink / raw) To: Thomas Monjalon, dev Cc: david.marchand, olivier.matz, andrew.rybchenko, lance.richardson, maxime.coquelin, stable, Wenzhuo Lu, Beilei Xing, Bernard Iremonger, Steve Yang On 11/5/2020 5:44 PM, Thomas Monjalon wrote: > The fix of max_rx_pkt_len for allowing VLAN packets in all cases > was breaking configuration of some drivers. Example with virtio: > > Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 > Fail to configure port 0 > > Trying to fix the logic was revealing other issues in some drivers. > That's why it is decided to revert. > > The workaround for the original issue would be > to set the MTU explicitly from the application > with rte_eth_dev_set_mtu(). > Sent this option as RFC: https://patches.dpdk.org/patch/83756/ > Fixes: f6870a7ed6b3 ("app/testpmd: fix max Rx packet length for VLAN packet") > Cc: stable@dpdk.org > > Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com> ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment 2020-11-05 18:11 ` Ferruh Yigit @ 2020-11-05 18:18 ` Thomas Monjalon 0 siblings, 0 replies; 94+ messages in thread From: Thomas Monjalon @ 2020-11-05 18:18 UTC (permalink / raw) To: Ferruh Yigit Cc: dev, david.marchand, olivier.matz, andrew.rybchenko, lance.richardson, maxime.coquelin, stable, Wenzhuo Lu, Beilei Xing, Bernard Iremonger, Steve Yang 05/11/2020 19:11, Ferruh Yigit: > On 11/5/2020 5:44 PM, Thomas Monjalon wrote: > > The fix of max_rx_pkt_len for allowing VLAN packets in all cases > > was breaking configuration of some drivers. Example with virtio: > > > > Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 > > Fail to configure port 0 > > > > Trying to fix the logic was revealing other issues in some drivers. > > That's why it is decided to revert. > > > > The workaround for the original issue would be > > to set the MTU explicitly from the application > > with rte_eth_dev_set_mtu(). > > > > Sent this option as RFC: > https://patches.dpdk.org/patch/83756/ > > > Fixes: f6870a7ed6b3 ("app/testpmd: fix max Rx packet length for VLAN packet") > > Cc: stable@dpdk.org > > > > Reported-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> > > Signed-off-by: Thomas Monjalon <thomas@monjalon.net> > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com> Applied ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets 2020-11-05 10:44 ` Thomas Monjalon 2020-11-05 10:48 ` Thomas Monjalon @ 2020-11-05 10:49 ` Ferruh Yigit 1 sibling, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-11-05 10:49 UTC (permalink / raw) To: Thomas Monjalon, Yang, SteveX, Andrew Rybchenko Cc: dev, Ananyev, Konstantin, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman, david.marchand On 11/5/2020 10:44 AM, Thomas Monjalon wrote: > 05/11/2020 11:37, Ferruh Yigit: >> On 11/5/2020 9:33 AM, Yang, SteveX wrote: >>> >>> >>>> -----Original Message----- >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> >>>> Sent: Thursday, November 5, 2020 4:54 PM >>>> To: Thomas Monjalon <thomas@monjalon.net>; Yang, SteveX >>>> <stevex.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com> >>>> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; >>>> Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>; >>>> Iremonger, Bernard <bernard.iremonger@intel.com>; Yang, Qiming >>>> <qiming.yang@intel.com>; mdr@ashroe.eu; nhorman@tuxdriver.com; >>>> david.marchand@redhat.com >>>> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet >>>> length for VLAN packets >>>> >>>> On 11/4/20 11:39 PM, Thomas Monjalon wrote: >>>>> 04/11/2020 21:19, Ferruh Yigit: >>>>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote: >>>>>>> 04/11/2020 18:07, Ferruh Yigit: >>>>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote: >>>>>>>>> 03/11/2020 14:29, Ferruh Yigit: >>>>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote: >>>>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote: >>>>>>>>>>>> When the max rx packet length is smaller than the sum of mtu >>>>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise >>>>>>>>>>>> the VLAN packets will be dropped. 
>>>>>>>>>>>> >>>>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines") >>>>>>>>>>>> >>>>>>>>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>>>>>>> >>>>>>>>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> >>>>>>>>>> >>>>>>>>>> Applied to dpdk-next-net/main, thanks. >>>>>>>>>> >>>>>>>>>> only 1/2 applied since discussion is going on for 2/2. >>>>>>>>> >>>>>>>>> I'm not sure this testpmd change is good. >>>>>>>>> >>>>>>>>> Reminder: testpmd is for testing the PMDs. >>>>>>>>> Don't we want to see VLAN packets dropped in the case described >>>> above? >>>>>>>>> >>>>>>>> >>>>>>>> The patch set 'max_rx_pkt_len' in a way to make MTU 1500 for all >>>>>>>> PMDs, otherwise testpmd set hard-coded 'RTE_ETHER_MAX_LEN' >>>> value, >>>>>>>> which makes MTU between 1492-1500 depending on PMD. >>>>>>>> >>>>>>>> It is application responsibility to provide correct 'max_rx_pkt_len'. >>>>>>>> I guess the original intention was to set MTU as 1500 but was not >>>>>>>> correct for all PMDs and this patch is fixing it. >>>>>>>> >>>>>>>> The same problem in the ethdev, (assuming 'RTE_ETHER_MAX_LEN' >>>> will >>>>>>>> give MTU 1500), the other patch in the set is to fix it later. >>>>>>> >>>>>>> OK but the testpmd patch is just hiding the issue, isn't it? >>>>>>> >>>>>> >>>>>> I don't think so, issue was application (testpmd) setting the >>>> 'max_rx_pkt_len' >>>>>> wrong. >>>>>> >>>>>> What is hidden? >>>>> >>>>> I was looking for adding a helper in ethdev API. >>>>> But I think I can agree with your way of thinking. >>>>> >>>> >>>> The patch breaks running testpmd on Virtio-Net because the driver >>>> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to >>>> UINT16_MAX as it was filled in by ethdev. As the result: >>>> >>>> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728 Fail to >>>> configure port 0 >>> >>> Similar issue occurred for other net PMD drivers which use default max_mtu (UINT16_MAX). 
>>> More strict checking condition will be added within new patch sooner. >> >> :( >> >> For drivers not providing 'max_mtu' information explicitly, the default >> 'UINT16_MAX' is set in ethdev layer. >> This prevents calculating PMD specific 'overhead' and the logic in the patch is >> broken. >> >> Indeed this makes inconsistency in the driver too, for example for virtio, it >> claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" and 'max_mtu' as >> UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is >> 'VIRTIO_MAX_RX_PKTLEN'. >> >> When PMDs fixed, the logic in this patch can work but not sure if post -rc2 is >> good time to start fixing the PMDs. > > Do you suggest revert is the best choice here? > One option is to revert, but then the issue this patch is trying to fix still remains. The other option is to extend the patch as Steve sent [1]; the checks there are more like a workaround in the application, so not nice to have, but with the deprecation notice (the other patch in this patchset) extended to fix the PMDs too in the next release, I would be OK to have these checks. What do you think? [1] https://patches.dpdk.org/patch/83717/ ^ permalink raw reply [flat|nested] 94+ messages in thread
* [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured SteveX Yang 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang @ 2020-11-02 8:52 ` SteveX Yang 2020-11-02 11:50 ` Ferruh Yigit 2020-11-02 13:18 ` Andrew Rybchenko 1 sibling, 2 replies; 94+ messages in thread From: SteveX Yang @ 2020-11-02 8:52 UTC (permalink / raw) To: dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, SteveX Yang Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type condition of jumbo frame. Involved scopes: - rte_ethdev; - app, e.g.: test-pmd, test-eventdev; - examples, e.g.: ipsec-secgw, l3fwd, vhost; - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, ixgbe; Signed-off-by: SteveX Yang <stevex.yang@intel.com> --- doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 2e082499b..fae139f01 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -138,6 +138,18 @@ Deprecation Notices will be limited to maximum 256 queues. Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set according to + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame uses the + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) set, the + frame type of rx packet will be different if used different overhead, it will + cause the consistency issue. Hence, using fixed value ``RTE_ETHER_MTU`` can + avoid this issue. 
+ Following scopes will be changed: + - ``rte_ethdev`` + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; + - net PMDs which support VLAN tag(s) within overhead, e.g.: ``i40e``; + * cryptodev: support for using IV with all sizes is added, J0 still can be used but only when IV length in following structs ``rte_crypto_auth_xform``, ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal -- 2.17.1 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition SteveX Yang @ 2020-11-02 11:50 ` Ferruh Yigit 2020-11-02 13:18 ` Andrew Rybchenko 1 sibling, 0 replies; 94+ messages in thread From: Ferruh Yigit @ 2020-11-02 11:50 UTC (permalink / raw) To: SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman, Andrew Rybchenko, Thomas Monjalon On 11/2/2020 8:52 AM, SteveX Yang wrote: > Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type > condition of jumbo frame. Involved scopes: > - rte_ethdev; > - app, e.g.: test-pmd, test-eventdev; > - examples, e.g.: ipsec-secgw, l3fwd, vhost; > - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, ixgbe; > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ > 1 file changed, 12 insertions(+) > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst > index 2e082499b..fae139f01 100644 > --- a/doc/guides/rel_notes/deprecation.rst > +++ b/doc/guides/rel_notes/deprecation.rst > @@ -138,6 +138,18 @@ Deprecation Notices > will be limited to maximum 256 queues. > Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. > > +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set according to > + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame uses the > + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) set, the > + frame type of rx packet will be different if used different overhead, it will > + cause the consistency issue. Hence, using fixed value ``RTE_ETHER_MTU`` can > + avoid this issue. 
> + Following scopes will be changed: > + - ``rte_ethdev`` > + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; > + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; > + - net PMDs which support VLAN tag(s) within overhead, e.g.: ``i40e``; > + > * cryptodev: support for using IV with all sizes is added, J0 still can > be used but only when IV length in following structs ``rte_crypto_auth_xform``, > ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal > Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com> The mentioned code in the ethdev is: http://lxr.dpdk.org/dpdk/v20.08/source/lib/librte_ethdev/rte_ethdev.c#L1345 cc'ed other ethdev maintainers. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition SteveX Yang 2020-11-02 11:50 ` Ferruh Yigit @ 2020-11-02 13:18 ` Andrew Rybchenko 2020-11-02 13:58 ` Ferruh Yigit 1 sibling, 1 reply; 94+ messages in thread From: Andrew Rybchenko @ 2020-11-02 13:18 UTC (permalink / raw) To: SteveX Yang, dev Cc: ferruh.yigit, konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman On 11/2/20 11:52 AM, SteveX Yang wrote: > Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type > condition of jumbo frame. Involved scopes: > - rte_ethdev; > - app, e.g.: test-pmd, test-eventdev; > - examples, e.g.: ipsec-secgw, l3fwd, vhost; > - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, ixgbe; > > Signed-off-by: SteveX Yang <stevex.yang@intel.com> > --- > doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ > 1 file changed, 12 insertions(+) > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst > index 2e082499b..fae139f01 100644 > --- a/doc/guides/rel_notes/deprecation.rst > +++ b/doc/guides/rel_notes/deprecation.rst > @@ -138,6 +138,18 @@ Deprecation Notices > will be limited to maximum 256 queues. > Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. > > +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set according to > + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame uses the > + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) set, the > + frame type of rx packet will be different if used different overhead, it will > + cause the consistency issue. Hence, using fixed value ``RTE_ETHER_MTU`` can > + avoid this issue. 
> + Following scopes will be changed: > + - ``rte_ethdev`` > + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; > + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; > + - net PMDs which support VLAN tag(s) within overhead, e.g.: ``i40e``; > + > * cryptodev: support for using IV with all sizes is added, J0 still can > be used but only when IV length in following structs ``rte_crypto_auth_xform``, > ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal > If so, what's the point to have the offload? May be just deprecate and later remove it? ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-02 13:18 ` Andrew Rybchenko @ 2020-11-02 13:58 ` Ferruh Yigit 2020-11-02 16:05 ` Ananyev, Konstantin 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-02 13:58 UTC (permalink / raw) To: Andrew Rybchenko, SteveX Yang, dev Cc: konstantin.ananyev, beilei.xing, wenzhuo.lu, bernard.iremonger, qiming.yang, mdr, nhorman On 11/2/2020 1:18 PM, Andrew Rybchenko wrote: > On 11/2/20 11:52 AM, SteveX Yang wrote: >> Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type >> condition of jumbo frame. Involved scopes: >> - rte_ethdev; >> - app, e.g.: test-pmd, test-eventdev; >> - examples, e.g.: ipsec-secgw, l3fwd, vhost; >> - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, ixgbe; >> >> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >> --- >> doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ >> 1 file changed, 12 insertions(+) >> >> diff --git a/doc/guides/rel_notes/deprecation.rst >> b/doc/guides/rel_notes/deprecation.rst >> index 2e082499b..fae139f01 100644 >> --- a/doc/guides/rel_notes/deprecation.rst >> +++ b/doc/guides/rel_notes/deprecation.rst >> @@ -138,6 +138,18 @@ Deprecation Notices >> will be limited to maximum 256 queues. >> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. >> +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set according to >> + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame uses the >> + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) set, the >> + frame type of rx packet will be different if used different overhead, it will >> + cause the consistency issue. Hence, using fixed value ``RTE_ETHER_MTU`` can >> + avoid this issue. 
>> + Following scopes will be changed: >> + - ``rte_ethdev`` >> + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; >> + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; >> + - net PMDs which support VLAN tag(s) within overhead, e.g.: ``i40e``; >> + >> * cryptodev: support for using IV with all sizes is added, J0 still can >> be used but only when IV length in following structs >> ``rte_crypto_auth_xform``, >> ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal >> > > If so, what's the point to have the offload? May be just deprecate and > later remove it? > Above just changes the condition of what is called jumbo frame, nothing more. ethdev assumes max frame size (without jumbo frame support) can be 'RTE_ETHER_MAX_LEN' (1518) But a PMD can support double VLAN, and it can have RTE_ETHER_MAX_LEN + 8 bytes frame size and still may not support jumbo frame. In that case ethdev overwrites the frame size as RTE_ETHER_MAX_LEN, and this will reduce the supported MTU to 1492 (instead of expected 1500). Above deprecation notice is to fix this. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-02 13:58 ` Ferruh Yigit @ 2020-11-02 16:05 ` Ananyev, Konstantin [not found] ` <DM6PR11MB43625C5CF594BEDC9CE479F7F9110@DM6PR11MB4362.namprd11.prod.outlook.com> 0 siblings, 1 reply; 94+ messages in thread From: Ananyev, Konstantin @ 2020-11-02 16:05 UTC (permalink / raw) To: Yigit, Ferruh, Andrew Rybchenko, Yang, SteveX, dev Cc: Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman > On 11/2/2020 1:18 PM, Andrew Rybchenko wrote: > > On 11/2/20 11:52 AM, SteveX Yang wrote: > >> Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type > >> condition of jumbo frame. Involved scopes: > >> - rte_ethdev; > >> - app, e.g.: test-pmd, test-eventdev; > >> - examples, e.g.: ipsec-secgw, l3fwd, vhost; > >> - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, ixgbe; > >> > >> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >> --- > >> doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ > >> 1 file changed, 12 insertions(+) > >> > >> diff --git a/doc/guides/rel_notes/deprecation.rst > >> b/doc/guides/rel_notes/deprecation.rst > >> index 2e082499b..fae139f01 100644 > >> --- a/doc/guides/rel_notes/deprecation.rst > >> +++ b/doc/guides/rel_notes/deprecation.rst > >> @@ -138,6 +138,18 @@ Deprecation Notices > >> will be limited to maximum 256 queues. > >> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. > >> +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set according to > >> + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame uses the > >> + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) set, the > >> + frame type of rx packet will be different if used different overhead, it will > >> + cause the consistency issue. Hence, using fixed value ``RTE_ETHER_MTU`` can > >> + avoid this issue. 
> >> + Following scopes will be changed: > >> + - ``rte_ethdev`` > >> + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; > >> + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; > >> + - net PMDs which support VLAN tag(s) within overhead, e.g.: ``i40e``; > >> + > >> * cryptodev: support for using IV with all sizes is added, J0 still can > >> be used but only when IV length in following structs > >> ``rte_crypto_auth_xform``, > >> ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal > >> > > > > If so, what's the point to have the offload? May be just deprecate and > > later remove it? > > Same thought actually, can we remove DEV_RX_OFFLOAD_JUMBO_FRAME flag completely? PMD can decide based on provided MTU size does segmentation, etc. is needed. > > Above just changes the condition of what is called jumbo frame, nothing more. > > ethdev assumes max frame size (without jumbo frame support) can be > 'RTE_ETHER_MAX_LEN' (1518) > > But a PMD can support double VLAN, and it can have RTE_ETHER_MAX_LEN + 8 bytes > frame size and still may not support jumbo frame. > > In that case ethdev overwrites the frame size as RTE_ETHER_MAX_LEN, and this > will reduce the supported MTU to 1492 (instead of expected 1500). > Above deprecation notice is to fix this. ^ permalink raw reply [flat|nested] 94+ messages in thread
[parent not found: <DM6PR11MB43625C5CF594BEDC9CE479F7F9110@DM6PR11MB4362.namprd11.prod.outlook.com>]
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition [not found] ` <DM6PR11MB43625C5CF594BEDC9CE479F7F9110@DM6PR11MB4362.namprd11.prod.outlook.com> @ 2020-11-24 17:46 ` Ferruh Yigit 2020-11-27 12:19 ` Andrew Rybchenko 0 siblings, 1 reply; 94+ messages in thread From: Ferruh Yigit @ 2020-11-24 17:46 UTC (permalink / raw) To: Yang, SteveX, Ananyev, Konstantin, Andrew Rybchenko, dev, Thomas Monjalon, Zhang, Qi Z, Ajit Khaparde, jerinj, Viacheslav Ovsiienko, Matan Azrad, Bruce Richardson Cc: Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman On 11/3/2020 9:07 AM, Yang, SteveX wrote: > > >> -----Original Message----- >> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >> Sent: Tuesday, November 3, 2020 12:05 AM >> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Andrew Rybchenko >> <andrew.rybchenko@oktetlabs.ru>; Yang, SteveX <stevex.yang@intel.com>; >> dev@dpdk.org >> Cc: Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo >> <wenzhuo.lu@intel.com>; Iremonger, Bernard >> <bernard.iremonger@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >> mdr@ashroe.eu; nhorman@tuxdriver.com >> Subject: RE: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo >> frame flag condition >> >> >> >>> On 11/2/2020 1:18 PM, Andrew Rybchenko wrote: >>>> On 11/2/20 11:52 AM, SteveX Yang wrote: >>>>> Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as >> type >>>>> condition of jumbo frame. 
Involved scopes: >>>>> - rte_ethdev; >>>>> - app, e.g.: test-pmd, test-eventdev; >>>>> - examples, e.g.: ipsec-secgw, l3fwd, vhost; >>>>> - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, >>>>> ixgbe; >>>>> >>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>> --- >>>>> doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ >>>>> 1 file changed, 12 insertions(+) >>>>> >>>>> diff --git a/doc/guides/rel_notes/deprecation.rst >>>>> b/doc/guides/rel_notes/deprecation.rst >>>>> index 2e082499b..fae139f01 100644 >>>>> --- a/doc/guides/rel_notes/deprecation.rst >>>>> +++ b/doc/guides/rel_notes/deprecation.rst >>>>> @@ -138,6 +138,18 @@ Deprecation Notices >>>>> will be limited to maximum 256 queues. >>>>> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be >> removed. >>>>> +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be >> set >>>>> +according to >>>>> + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame >>>>> +uses the >>>>> + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU >> (1500) >>>>> +set, the >>>>> + frame type of rx packet will be different if used different >>>>> +overhead, it will >>>>> + cause the consistency issue. Hence, using fixed value >>>>> +``RTE_ETHER_MTU`` can >>>>> + avoid this issue. >>>>> + Following scopes will be changed: >>>>> + - ``rte_ethdev`` >>>>> + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; >>>>> + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; >>>>> + - net PMDs which support VLAN tag(s) within overhead, e.g.: >>>>> +``i40e``; >>>>> + >>>>> * cryptodev: support for using IV with all sizes is added, J0 >>>>> still can >>>>> be used but only when IV length in following structs >>>>> ``rte_crypto_auth_xform``, >>>>> ``rte_crypto_aead_xform`` is set to zero. When IV length is >>>>> greater or equal >>>>> >>>> >>>> If so, what's the point to have the offload? May be just deprecate >>>> and later remove it? 
>>>> >> >> Same thought actually, can we remove DEV_RX_OFFLOAD_JUMBO_FRAME >> flag completely? >> PMD can decide based on provided MTU size does segmentation, etc. is >> needed. >> > > Yes, it seems can be removed when base on MTU size. > I've checked related code of using DEV_RX_OFFLOAD_JUMBO_FRAME. > Most of them use for checking boundary of max packet length and set 'dev->data->mtu'. > Steve already sent the RFC for above fix: https://patches.dpdk.org/patch/84486/ We can consider removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' but anyway it is for 21.11. This deprecation notice is required to fix the ethdev in next release, as in the above RFC. I cc'ed a few more relevant people, can you please review this deprecation notice. Thanks, ferruh >>> >>> Above just changes the condition of what is called jumbo frame, nothing >> more. >>> >>> ethdev assumes max frame size (without jumbo frame support) can be >>> 'RTE_ETHER_MAX_LEN' (1518) >>> >>> But a PMD can support double VLAN, and it can have >> RTE_ETHER_MAX_LEN + >>> 8 bytes frame size and still may not support jumbo frame. >>> >>> In that case ethdev overwrites the frame size as RTE_ETHER_MAX_LEN, >>> and this will reduce the supported MTU to 1492 (instead of expected 1500). >>> Above deprecation notice is to fix this. ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-24 17:46 ` Ferruh Yigit @ 2020-11-27 12:19 ` Andrew Rybchenko 2020-11-27 17:08 ` Bruce Richardson 0 siblings, 1 reply; 94+ messages in thread From: Andrew Rybchenko @ 2020-11-27 12:19 UTC (permalink / raw) To: Ferruh Yigit, Yang, SteveX, Ananyev, Konstantin, dev, Thomas Monjalon, Zhang, Qi Z, Ajit Khaparde, jerinj, Viacheslav Ovsiienko, Matan Azrad, Bruce Richardson Cc: Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman On 11/24/20 8:46 PM, Ferruh Yigit wrote: > On 11/3/2020 9:07 AM, Yang, SteveX wrote: >>> -----Original Message----- >>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>> Sent: Tuesday, November 3, 2020 12:05 AM >>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Andrew Rybchenko >>> <andrew.rybchenko@oktetlabs.ru>; Yang, SteveX <stevex.yang@intel.com>; >>> dev@dpdk.org >>> Cc: Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo >>> <wenzhuo.lu@intel.com>; Iremonger, Bernard >>> <bernard.iremonger@intel.com>; Yang, Qiming <qiming.yang@intel.com>; >>> mdr@ashroe.eu; nhorman@tuxdriver.com >>> Subject: RE: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo >>> frame flag condition >>> >>>> On 11/2/2020 1:18 PM, Andrew Rybchenko wrote: >>>>> On 11/2/20 11:52 AM, SteveX Yang wrote: >>>>>> Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type >>>>>> condition of jumbo frame. 
Involved scopes: >>>>>> - rte_ethdev; >>>>>> - app, e.g.: test-pmd, test-eventdev; >>>>>> - examples, e.g.: ipsec-secgw, l3fwd, vhost; >>>>>> - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, >>>>>> ixgbe; >>>>>> >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> >>>>>> --- >>>>>> doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ >>>>>> 1 file changed, 12 insertions(+) >>>>>> >>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst >>>>>> b/doc/guides/rel_notes/deprecation.rst >>>>>> index 2e082499b..fae139f01 100644 >>>>>> --- a/doc/guides/rel_notes/deprecation.rst >>>>>> +++ b/doc/guides/rel_notes/deprecation.rst >>>>>> @@ -138,6 +138,18 @@ Deprecation Notices >>>>>> will be limited to maximum 256 queues. >>>>>> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. >>>>>> +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set >>>>>> +according to >>>>>> + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame >>>>>> +uses the >>>>>> + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) >>>>>> +set, the >>>>>> + frame type of rx packet will be different if used different >>>>>> +overhead, it will >>>>>> + cause the consistency issue. Hence, using fixed value >>>>>> +``RTE_ETHER_MTU`` can >>>>>> + avoid this issue. >>>>>> + Following scopes will be changed: >>>>>> + - ``rte_ethdev`` >>>>>> + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; >>>>>> + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; >>>>>> + - net PMDs which support VLAN tag(s) within overhead, e.g.: >>>>>> +``i40e``; >>>>>> + >>>>>> * cryptodev: support for using IV with all sizes is added, J0 >>>>>> still can >>>>>> be used but only when IV length in following structs >>>>>> ``rte_crypto_auth_xform``, >>>>>> ``rte_crypto_aead_xform`` is set to zero. When IV length is >>>>>> greater or equal >>>>>> >>>>> >>>>> If so, what's the point to have the offload? May be just deprecate >>>>> and later remove it? 
>>>>> >>> Same thought actually, can we remove DEV_RX_OFFLOAD_JUMBO_FRAME >>> flag completely? >>> PMD can decide based on provided MTU size does segmentation, etc. is >>> needed. >>> >> >> Yes, it seems can be removed when base on MTU size. >> I've checked related code of using DEV_RX_OFFLOAD_JUMBO_FRAME. >> Most of them use for checking boundary of max packet length and set >> 'dev->data->mtu'. >> > > Steve already sent the RFC for above fix: > https://patches.dpdk.org/patch/84486/ > > We can consider removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' but anyway it is > for 21.11. > > This deprecation notice is required to fix the ethdev in next release, > as in the above RFC. > > I cc'ed a few more relevant people, can you please review this > deprecation notice. > > Thanks, > ferruh > > >>>> >>>> Above just changes the condition of what is called jumbo frame, nothing more. >>>> >>>> ethdev assumes max frame size (without jumbo frame support) can be >>>> 'RTE_ETHER_MAX_LEN' (1518) >>>> >>>> But a PMD can support double VLAN, and it can have RTE_ETHER_MAX_LEN + >>>> 8 bytes frame size and still may not support jumbo frame. >>>> >>>> In that case ethdev overwrites the frame size as RTE_ETHER_MAX_LEN, >>>> and this will reduce the supported MTU to 1492 (instead of expected >>>> 1500). >>>> Above deprecation notice is to fix this. My problem with the deprecation notice is that I don't actually understand what will be done to address it. The idea explained by Ferruh in detail makes sense. But I don't understand how the deprecation notice description corresponds to it. I read "Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set.." as an enforcement of the offload flag based on other parameters. I think it is incorrect. Or I still don't understand something... Looking at [1] adds more confusion since I don't understand why we care about dev_conf->rxmode.max_rx_pkt_len when JUMBO_FRAME offload is disabled. 
As far as I know, an enabled JUMBO_FRAME offload means the driver should look at max_rx_pkt_len and apply it; otherwise, it should just ignore it. [1] http://lxr.dpdk.org/dpdk/v20.08/source/lib/librte_ethdev/rte_ethdev.c#L1345 ^ permalink raw reply [flat|nested] 94+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition 2020-11-27 12:19 ` Andrew Rybchenko @ 2020-11-27 17:08 ` Bruce Richardson 0 siblings, 0 replies; 94+ messages in thread From: Bruce Richardson @ 2020-11-27 17:08 UTC (permalink / raw) To: Andrew Rybchenko Cc: Ferruh Yigit, Yang, SteveX, Ananyev, Konstantin, dev, Thomas Monjalon, Zhang, Qi Z, Ajit Khaparde, jerinj, Viacheslav Ovsiienko, Matan Azrad, Xing, Beilei, Lu, Wenzhuo, Iremonger, Bernard, Yang, Qiming, mdr, nhorman On Fri, Nov 27, 2020 at 03:19:43PM +0300, Andrew Rybchenko wrote: > On 11/24/20 8:46 PM, Ferruh Yigit wrote: > > On 11/3/2020 9:07 AM, Yang, SteveX wrote: > >>> -----Original Message----- > >>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > >>> Sent: Tuesday, November 3, 2020 12:05 AM > >>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Andrew Rybchenko > >>> <andrew.rybchenko@oktetlabs.ru>; Yang, SteveX <stevex.yang@intel.com>; > >>> dev@dpdk.org > >>> Cc: Xing, Beilei <beilei.xing@intel.com>; Lu, Wenzhuo > >>> <wenzhuo.lu@intel.com>; Iremonger, Bernard > >>> <bernard.iremonger@intel.com>; Yang, Qiming <qiming.yang@intel.com>; > >>> mdr@ashroe.eu; nhorman@tuxdriver.com > >>> Subject: RE: [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo > >>> frame flag condition > >>> > >>>> On 11/2/2020 1:18 PM, Andrew Rybchenko wrote: > >>>>> On 11/2/20 11:52 AM, SteveX Yang wrote: > >>>>>> Annouce to replace 'RTE_ETHER_MAX_LEN' with 'RTE_ETHER_MTU' as type > >>>>>> condition of jumbo frame. 
Involved scopes: > >>>>>> - rte_ethdev; > >>>>>> - app, e.g.: test-pmd, test-eventdev; > >>>>>> - examples, e.g.: ipsec-secgw, l3fwd, vhost; > >>>>>> - net PMDs which support VLAN tag(s) within overhead, e.g.: i40e, > >>>>>> ixgbe; > >>>>>> > >>>>>> Signed-off-by: SteveX Yang <stevex.yang@intel.com> > >>>>>> --- > >>>>>> doc/guides/rel_notes/deprecation.rst | 12 ++++++++++++ > >>>>>> 1 file changed, 12 insertions(+) > >>>>>> > >>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst > >>>>>> b/doc/guides/rel_notes/deprecation.rst > >>>>>> index 2e082499b..fae139f01 100644 > >>>>>> --- a/doc/guides/rel_notes/deprecation.rst > >>>>>> +++ b/doc/guides/rel_notes/deprecation.rst > >>>>>> @@ -138,6 +138,18 @@ Deprecation Notices > >>>>>> will be limited to maximum 256 queues. > >>>>>> Also compile time flag ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` will be removed. > >>>>>> +* ethdev: Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set > >>>>>> +according to > >>>>>> + ``RTE_ETHER_MTU`` in next release. Currently, the jumbo frame > >>>>>> +uses the > >>>>>> + ``RTE_ETHER_MAX_LEN`` as boundary condition. When the MTU (1500) > >>>>>> +set, the > >>>>>> + frame type of rx packet will be different if used different > >>>>>> +overhead, it will > >>>>>> + cause the consistency issue. Hence, using fixed value > >>>>>> +``RTE_ETHER_MTU`` can > >>>>>> + avoid this issue. > >>>>>> + Following scopes will be changed: > >>>>>> + - ``rte_ethdev`` > >>>>>> + - ``app``, e.g.: ``test-pmd``, ``test-eventdev``; > >>>>>> + - ``examples``, e.g.: ``ipsec-secgw``, ``l3fwd``, ``vhost``; > >>>>>> + - net PMDs which support VLAN tag(s) within overhead, e.g.: > >>>>>> +``i40e``; > >>>>>> + > >>>>>> * cryptodev: support for using IV with all sizes is added, J0 > >>>>>> still can > >>>>>> be used but only when IV length in following structs > >>>>>> ``rte_crypto_auth_xform``, > >>>>>> ``rte_crypto_aead_xform`` is set to zero. 
When IV length is > >>>>>> greater or equal > >>>>>> > >>>>> > >>>>> If so, what's the point to have the offload? May be just deprecate > >>>>> and later remove it? > >>>>> > >>> > >>> Same thought actually, can we remove DEV_RX_OFFLOAD_JUMBO_FRAME > >>> flag completely? > >>> PMD can decide based on provided MTU size does segmentation, etc. is > >>> needed. > >>> > >> > >> Yes, it seems can be removed when base on MTU size. > >> I've checked related code of using DEV_RX_OFFLOAD_JUMBO_FRAME. > >> Most of them use for checking boundary of max packet length and set > >> 'dev->data->mtu'. > >> > > > > Steve already sent the RFC for above fix: > > https://patches.dpdk.org/patch/84486/ > > > > We can consider removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' but anyway it is > > for 21.11. > > > > This deprecation notice is required to fix the ethdev in next release, > > as in the above RFC. > > > > I cc'ed a few more relevant people, can you please review this > > deprecation notice. > > > > Thanks, > > ferruh > > > > > >>>> > >>>> Above just changes the condition of what is called jumbo frame, nothing more. > >>>> > >>>> ethdev assumes max frame size (without jumbo frame support) can be > >>>> 'RTE_ETHER_MAX_LEN' (1518) > >>>> > >>>> But a PMD can support double VLAN, and it can have RTE_ETHER_MAX_LEN + > >>>> 8 bytes frame size and still may not support jumbo frame. > >>>> > >>>> In that case ethdev overwrites the frame size as RTE_ETHER_MAX_LEN, > >>>> and this will reduce the supported MTU to 1492 (instead of expected > >>>> 1500). > >>>> Above deprecation notice is to fix this. > > My problem with the deprecation notice is that I don't actually > understand what will be done to address it. > > The idea explained by Ferruh in details makes sense. But I > don't understand how the deprecation notice description > corresponding to it. I read > "Offload flag ``DEV_RX_OFFLOAD_JUMBO_FRAME`` will be set.." > as an enforcement of the offload flag based on other > parameters. 
I think it is incorrect. Or I still don't > understand something... > > Looking at [1] adds more confusion since I don't understand why > we care about dev_conf->rxmode.max_rx_pkt_len when JUMBO_FRAME > offload is disabled. As far as I know JUMBO_FRAME offload > enable means that driver should take a look at it and apply. > Otherwise, just ignore it. > I agree with the comment here - my understanding is the same that if the JUMBO_FRAME offload flag is not set, then the max_rx_pkt_len should be ignored (which for me implies that it should be set to 0 or similar sentinel value in ethdev to ensure drivers don't accidentally use it). In terms of the deprecation notice, I also think it's fairly confusing, and after talking with Ferruh, I'm not convinced we need one. It seems that the planned changes based on this are just bug fixes, where packets that should not have been dropped were dropped. Perhaps someone could comment on the specific behaviour change that would affect apps (where it's not just plain buggy behaviour!) However, it does appear that this area is in need of clean-up generally. The suggestion to drop the jumbo frame flag, packet_len/mtu value from the ethdev config, and just use the existing API calls, sounds interesting. If that is not the approach taken, I'd like to see the existing approach kept, so that a zero-initialized config is acceptable for packet size setting, i.e. no jumbo frame flag and zero max-length == default ethernet MTU. Just my 2c. /Bruce ^ permalink raw reply [flat|nested] 94+ messages in thread
end of thread, other threads:[~2020-11-27 17:08 UTC | newest] Thread overview: 94+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2020-09-16 5:52 [dpdk-dev] [PATCH v1 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 2/5] net/igc: " SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 3/5] net/ice: " SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 4/5] net/iavf: " SteveX Yang 2020-09-16 5:52 ` [dpdk-dev] [PATCH v1 5/5] net/i40e: " SteveX Yang 2020-09-16 14:41 ` Ananyev, Konstantin [not found] ` <DM6PR11MB4362E5FF332551D12AA20017F93E0@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-09-17 12:18 ` Ananyev, Konstantin 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 2/5] net/igc: " SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 3/5] net/ice: " SteveX Yang 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 4/5] net/i40e: " SteveX Yang 2020-09-22 10:47 ` Ananyev, Konstantin 2020-09-22 1:23 ` [dpdk-dev] [PATCH v2 5/5] net/iavf: " SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 0/5] fix default max mtu size when device configured SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 2/5] net/igc: " SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 3/5] net/ice: " SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 4/5] net/i40e: " SteveX Yang 2020-09-23 4:09 ` [dpdk-dev] [PATCH v3 5/5] net/iavf: " SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 0/5] fix default max mtu size when 
device configured SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 2/5] net/igc: " SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 3/5] net/ice: " SteveX Yang 2020-09-29 11:59 ` Zhang, Qi Z 2020-09-29 23:01 ` Ananyev, Konstantin 2020-09-30 0:34 ` Zhang, Qi Z [not found] ` <DM6PR11MB4362515283D00E27A793E6B0F9330@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-09-30 2:32 ` Zhang, Qi Z 2020-10-14 15:38 ` Ferruh Yigit [not found] ` <DM6PR11MB43628BBF9DCE7CC4D7C05AD8F91E0@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-10-19 10:49 ` Ananyev, Konstantin 2020-10-19 13:07 ` Ferruh Yigit 2020-10-19 14:07 ` Ananyev, Konstantin 2020-10-19 14:28 ` Ananyev, Konstantin 2020-10-19 18:01 ` Ferruh Yigit 2020-10-20 9:07 ` Ananyev, Konstantin 2020-10-20 12:29 ` Ferruh Yigit 2020-10-21 9:47 ` Ananyev, Konstantin 2020-10-21 10:36 ` Ferruh Yigit 2020-10-21 10:44 ` Ananyev, Konstantin 2020-10-21 10:53 ` Ferruh Yigit 2020-10-19 18:05 ` Ferruh Yigit [not found] ` <DM6PR11MB4362F936BFC715BF6BABBAD0F91F0@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-10-20 8:13 ` Ferruh Yigit 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 4/5] net/i40e: " SteveX Yang 2020-09-28 6:55 ` [dpdk-dev] [PATCH v4 5/5] net/iavf: " SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device configured SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 1/5] net/e1000: fix max mtu size packets with vlan tag cannot be received by default SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 2/5] net/igc: " SteveX Yang 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 3/5] net/ice: " SteveX Yang 2020-10-14 11:35 ` Zhang, Qi Z 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 4/5] net/i40e: " SteveX Yang 2020-10-14 10:30 ` Ananyev, Konstantin 2020-10-14 9:19 ` [dpdk-dev] [PATCH v5 5/5] net/iavf: " SteveX Yang 2020-10-14 11:43 ` [dpdk-dev] [PATCH v5 0/5] fix default max mtu size when device 
configured Zhang, Qi Z 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 0/2] " SteveX Yang 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-10-22 16:22 ` Ferruh Yigit 2020-10-22 8:48 ` [dpdk-dev] [PATCH v6 2/2] librte_ethdev: fix MTU size exceeds max rx packet length SteveX Yang 2020-10-22 16:31 ` Ferruh Yigit 2020-10-22 16:52 ` Ananyev, Konstantin 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 0/1] fix default max mtu size when device configured SteveX Yang 2020-10-28 3:03 ` [dpdk-dev] [PATCH v7 1/1] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-10-29 8:41 ` Ferruh Yigit 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 0/2] fix default max mtu size when device configured SteveX Yang 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets SteveX Yang 2020-11-02 11:48 ` Ferruh Yigit 2020-11-03 13:29 ` Ferruh Yigit 2020-11-04 16:51 ` Thomas Monjalon 2020-11-04 17:07 ` Ferruh Yigit 2020-11-04 17:55 ` Thomas Monjalon 2020-11-04 20:19 ` Ferruh Yigit 2020-11-04 20:39 ` Thomas Monjalon 2020-11-05 8:54 ` Andrew Rybchenko [not found] ` <DM6PR11MB43622CC5DF485DD034037CD3F9EE0@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-11-05 10:37 ` Ferruh Yigit 2020-11-05 10:44 ` Thomas Monjalon 2020-11-05 10:48 ` Thomas Monjalon 2020-11-05 10:50 ` Ferruh Yigit 2020-11-05 13:52 ` Olivier Matz 2020-11-05 15:11 ` Lance Richardson 2020-11-05 15:56 ` Ferruh Yigit 2020-11-05 16:23 ` Lance Richardson 2020-11-05 17:44 ` [dpdk-dev] [PATCH 1/1] app/testpmd: revert max Rx packet length adjustment Thomas Monjalon 2020-11-05 18:02 ` Lance Richardson 2020-11-05 18:11 ` Ferruh Yigit 2020-11-05 18:18 ` Thomas Monjalon 2020-11-05 10:49 ` [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets Ferruh Yigit 2020-11-02 8:52 ` [dpdk-dev] [PATCH v8 2/2] doc: annouce deprecation of jumbo frame flag condition SteveX Yang 2020-11-02 11:50 ` Ferruh Yigit 2020-11-02 13:18 ` Andrew 
Rybchenko 2020-11-02 13:58 ` Ferruh Yigit 2020-11-02 16:05 ` Ananyev, Konstantin [not found] ` <DM6PR11MB43625C5CF594BEDC9CE479F7F9110@DM6PR11MB4362.namprd11.prod.outlook.com> 2020-11-24 17:46 ` Ferruh Yigit 2020-11-27 12:19 ` Andrew Rybchenko 2020-11-27 17:08 ` Bruce Richardson