From mboxrd@z Thu Jan 1 00:00:00 1970
To: "Yang, SteveX", Andrew Rybchenko, Thomas Monjalon
Cc: dev@dpdk.org, "Ananyev, Konstantin", "Xing, Beilei", "Lu, Wenzhuo",
 "Iremonger, Bernard", "Yang, Qiming", mdr@ashroe.eu, nhorman@tuxdriver.com,
 david.marchand@redhat.com
References: <20201028030334.30300-1-stevex.yang@intel.com>
 <2034736.YrmxQ9UtPI@thomas> <8358021.KZzejQbnKC@thomas>
 <164a2edf-56d6-d34c-7575-d2a8789c8bd5@oktetlabs.ru>
From: Ferruh Yigit
Message-ID: <0c5f86c4-49e9-0cfe-fb98-5646712fbeb6@intel.com>
Date: Thu, 5 Nov 2020 10:37:38 +0000
Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet length for VLAN packets

On 11/5/2020 9:33 AM, Yang, SteveX wrote:
>
>> -----Original Message-----
>> From: Andrew Rybchenko
>> Sent: Thursday, November 5, 2020 4:54 PM
>> To: Thomas Monjalon; Yang, SteveX; Yigit, Ferruh
>> Cc: dev@dpdk.org; Ananyev, Konstantin; Xing, Beilei; Lu, Wenzhuo;
>> Iremonger, Bernard; Yang, Qiming; mdr@ashroe.eu; nhorman@tuxdriver.com;
>> david.marchand@redhat.com
>> Subject: Re: [dpdk-dev] [PATCH v8 1/2] app/testpmd: fix max rx packet
>> length for VLAN packets
>>
>> On 11/4/20 11:39 PM, Thomas Monjalon wrote:
>>> 04/11/2020 21:19, Ferruh Yigit:
>>>> On 11/4/2020 5:55 PM, Thomas Monjalon wrote:
>>>>> 04/11/2020 18:07, Ferruh Yigit:
>>>>>> On 11/4/2020 4:51 PM, Thomas Monjalon wrote:
>>>>>>> 03/11/2020 14:29, Ferruh Yigit:
>>>>>>>> On 11/2/2020 11:48 AM, Ferruh Yigit wrote:
>>>>>>>>> On 11/2/2020 8:52 AM, SteveX Yang wrote:
>>>>>>>>>> When the max rx packet length is smaller than the sum of mtu
>>>>>>>>>> size and ether overhead size, it should be enlarged, otherwise
>>>>>>>>>> the VLAN packets will be dropped.
>>>>>>>>>>
>>>>>>>>>> Fixes: 35b2d13fd6fd ("net: add rte prefix to ether defines")
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: SteveX Yang
>>>>>>>>>
>>>>>>>>> Reviewed-by: Ferruh Yigit
>>>>>>>>
>>>>>>>> Applied to dpdk-next-net/main, thanks.
>>>>>>>>
>>>>>>>> only 1/2 applied since discussion is going on for 2/2.
>>>>>>>
>>>>>>> I'm not sure this testpmd change is good.
>>>>>>>
>>>>>>> Reminder: testpmd is for testing the PMDs.
>>>>>>> Don't we want to see VLAN packets dropped in the case described
>>>>>>> above?
>>>>>>>
>>>>>>
>>>>>> The patch sets 'max_rx_pkt_len' in a way that makes the MTU 1500 for
>>>>>> all PMDs; otherwise testpmd sets the hard-coded 'RTE_ETHER_MAX_LEN'
>>>>>> value, which makes the MTU between 1492 and 1500 depending on the PMD.
>>>>>>
>>>>>> It is the application's responsibility to provide a correct
>>>>>> 'max_rx_pkt_len'. I guess the original intention was to set the MTU
>>>>>> to 1500, but it was not correct for all PMDs, and this patch fixes it.
>>>>>>
>>>>>> The same problem exists in ethdev (assuming 'RTE_ETHER_MAX_LEN' will
>>>>>> give an MTU of 1500); the other patch in the set fixes it there later.
>>>>>
>>>>> OK but the testpmd patch is just hiding the issue, isn't it?
>>>>>
>>>>
>>>> I don't think so, the issue was the application (testpmd) setting
>>>> 'max_rx_pkt_len' wrong.
>>>>
>>>> What is hidden?
>>>
>>> I was looking for adding a helper in the ethdev API.
>>> But I think I can agree with your way of thinking.
>>>
>>
>> The patch breaks running testpmd on Virtio-Net because the driver
>> populates dev_info.max_rx_pktlen but keeps dev_info.max_mtu equal to
>> UINT16_MAX as it was filled in by ethdev. As a result:
>>
>> Ethdev port_id=0 max_rx_pkt_len 11229 > max valid value 9728
>> Fail to configure port 0
>
> A similar issue occurs for other net PMD drivers which use the default
> max_mtu (UINT16_MAX).
> A stricter checking condition will be added in a new patch soon.
>

:(

For drivers not providing the 'max_mtu' information explicitly, the default
'UINT16_MAX' is set in the ethdev layer. This prevents calculating the
PMD-specific 'overhead', and the logic in the patch is broken.

Indeed this creates an inconsistency in the driver too; for example, virtio
claims 'max_rx_pktlen' as "VIRTIO_MAX_RX_PKTLEN (9728)" but 'max_mtu' as
UINT16_MAX. From 'virtio_mtu_set()' we can see the real limit is
'VIRTIO_MAX_RX_PKTLEN'.

Once the PMDs are fixed, the logic in this patch can work, but I am not sure
post-rc2 is a good time to start fixing the PMDs.