From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Dekel Peled, Matan Azrad, john.mcnamara@intel.com,
 marko.kovacevic@intel.com, nhorman@tuxdriver.com,
 ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com,
 anatoly.burakov@intel.com, xuanziyang2@huawei.com,
 cloud.wangxiaoyun@huawei.com, zhouguoyang@huawei.com,
 wenzhuo.lu@intel.com, konstantin.ananyev@intel.com, Shahaf Shuler,
 Slava Ovsiienko, rmody@marvell.com, shshaikh@marvell.com,
 maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com,
 yongwang@vmware.com, Thomas Monjalon, arybchenko@solarflare.com,
 jingjing.wu@intel.com, bernard.iremonger@intel.com
Cc: dev@dpdk.org
References: <4c64b7941e1e9416ae7946cb44d50a01888d70c4.1573129825.git.dekelp@mellanox.com>
 <0523c7d7-bc97-7e30-c024-e578f9548797@intel.com>
 <0a1708e5-70ba-16f8-29b0-bef8d4f20f80@intel.com>
 <60dc4ef1-7e9a-5073-c534-e3b7a42a9abf@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
Date: Fri, 8 Nov 2019 16:53:11 +0000

On 11/8/2019 4:11 PM, Dekel Peled wrote:
> Thanks, PSB.
>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Friday, November 8, 2019 2:52 PM
>> To: Matan Azrad; Dekel Peled; john.mcnamara@intel.com;
>> marko.kovacevic@intel.com; nhorman@tuxdriver.com;
>> ajit.khaparde@broadcom.com; somnath.kotur@broadcom.com;
>> anatoly.burakov@intel.com; xuanziyang2@huawei.com;
>> cloud.wangxiaoyun@huawei.com; zhouguoyang@huawei.com;
>> wenzhuo.lu@intel.com; konstantin.ananyev@intel.com; Shahaf Shuler;
>> Slava Ovsiienko; rmody@marvell.com; shshaikh@marvell.com;
>> maxime.coquelin@redhat.com; tiwei.bie@intel.com;
>> zhihong.wang@intel.com; yongwang@vmware.com; Thomas Monjalon;
>> arybchenko@solarflare.com; jingjing.wu@intel.com;
>> bernard.iremonger@intel.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max
>> LRO packet size
>>
>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>> 			RTE_ETHER_MAX_LEN;
>>>>>>>>> 	}
>>>>>>>>>
>>>>>>>>> +	/*
>>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>>>>>>>> +	 * size is supported by the configured device.
>>>>>>>>> +	 */
>>>>>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>>>> +			port_id, dev_conf->rxmode.max_lro_pkt_size,
>>>>>>>>> +			dev_info.max_lro_pkt_size);
>>>>>>>>> +		if (ret != 0)
>>>>>>>>> +			goto rollback;
>>>>>>>>> +	}
>>>>>>>>> +
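
(For reference, a minimal sketch of the check_lro_pkt_size() helper the
hunk above calls, as it would sit inside rte_ethdev.c where
RTE_ETHDEV_LOG() is available. It is inferred only from the behavior
described in this thread: a request above the reported capability fails,
so a 0 capability rejects any non-zero request. The exact ethdev
implementation may differ.)

static int
check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
		   uint32_t dev_cap_size)
{
	/* A 0 capability rejects any non-zero request. */
	if (config_size > dev_cap_size) {
		RTE_ETHDEV_LOG(ERR,
			"Ethdev port %u max_lro_pkt_size %u > capability %u\n",
			port_id, config_size, dev_cap_size);
		return -EINVAL;
	}
	return 0;
}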
>>>>>>>>
>>>>>>>> This check forces applications that enable LRO to provide a
>>>>>>>> 'max_lro_pkt_size' config value.
>>>>>>>
>>>>>>> Yes. (We can break the API; we noticed it.)
>>>>>>
>>>>>> I am not talking about API/ABI breakage; that part is OK.
>>>>>> With this check, if the application requests the LRO offload but
>>>>>> doesn't provide a 'max_lro_pkt_size' value, device configuration
>>>>>> will fail.
>>>>>>
>>>>> Yes
>>>>>> Can there be a case where the application is fine with whatever the
>>>>>> PMD can support as the max?
>>>>> Yes, there can be - you know, we can do everything we want, but it
>>>>> is better to be consistent:
>>>>> Since the max Rx packet length field is mandatory for the JUMBO
>>>>> offload, the max LRO packet length should be mandatory for the LRO
>>>>> offload.
>>>>>
>>>>> So your question is actually why the max sizes of both non-LRO and
>>>>> LRO packets are mandatory...
>>>>>
>>>>> I think these are important values for network application
>>>>> management.
>>>>> They are also useful for mbuf size management.
>>>>>
>>>>>>>
>>>>>>>> - Why is it mandatory now? How was it working before, if it is a
>>>>>>>> mandatory value?
>>>>>>>
>>>>>>> It is the same as 'max_rx_pkt_len', which is mandatory for the
>>>>>>> jumbo frame offload.
>>>>>>> So now, when the user configures the LRO offload, he must set the
>>>>>>> max LRO packet length.
>>>>>>> We don't want to confuse the user here relative to the
>>>>>>> 'max_rx_pkt_len' configuration and behavior; they should follow
>>>>>>> the same logic.
>>>>>>>
>>>>>>> This parameter defines the LRO behavior well.
>>>>>>> Before this, each PMD took its own interpretation of what the
>>>>>>> maximum size of LRO-aggregated packets should be.
>>>>>>> Now, the user must state his intention, and ethdev can limit it
>>>>>>> according to the device capability.
>>>>>>> This way, the PMD can also organize/optimize its data path better.
>>>>>>> Also, the application can create different mempools for LRO
>>>>>>> queues, to allow receiving bigger packets for LRO traffic.
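
(To illustrate that last point, an application-side sketch; the pool
sizes and names below are illustrative assumptions, not from the patch.)

#include <string.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

static int
configure_lro_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf;
	struct rte_mempool *lro_pool;

	memset(&conf, 0, sizeof(conf));
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	/* Request LRO; with this patch 'max_lro_pkt_size' is mandatory,
	 * capped here to the capability the PMD reports. */
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	conf.rxmode.max_lro_pkt_size =
		RTE_MIN((uint32_t)(9 * 1024), dev_info.max_lro_pkt_size);

	/* A dedicated pool with a larger data room for the LRO queues,
	 * separate from the pool used for non-LRO queues; it would be
	 * passed to rte_eth_rx_queue_setup() for those queues (not
	 * shown). */
	lro_pool = rte_pktmbuf_pool_create("lro_pool", 4096, 256, 0,
		conf.rxmode.max_lro_pkt_size + RTE_PKTMBUF_HEADROOM,
		rte_socket_id());
	if (lro_pool == NULL)
		return -1;

	/* Fails if the LRO size is 0 or above the device capability. */
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}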
>>>>>>>
>>>>>>>> - What happens if a PMD doesn't provide 'max_lro_pkt_size', so it
>>>>>>>> is '0'?
>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>> This patch also updates all the PMDs that support LRO to report a
>>>>>>> non-0 value.
>>>>>>
>>>>>> Of course I can see the updates, Matan; my point is "what happens
>>>>>> if a PMD doesn't provide 'max_lro_pkt_size'":
>>>>>> 1) There is no check for it, right? So it is acceptable?
>>>>>
>>>>> There is a check.
>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>
>>>>>> 2) Are we making this field mandatory for PMDs to provide? It is
>>>>>> easy to make new fields mandatory for PMDs, but is this really
>>>>>> necessary?
>>>>>
>>>>> Yes, for consistency.
>>>>>
>>>>>>>
>>>>>>> The same as max rx pkt len, no?
>>>>>>>
>>>>>>>> - What do you think about setting the 'max_lro_pkt_size' config
>>>>>>>> value to what the PMD provided, if the application doesn't
>>>>>>>> provide it?
>>>>>>> Same answers as above.
>>>>>>>
>>>>>>
>>>>>> If the application doesn't care about the value, as has been the
>>>>>> case till now, and has not provided an explicit 'max_lro_pkt_size',
>>>>>> why not have the ethdev level use the value provided by the PMD
>>>>>> instead of failing?
>>>>>
>>>>> Again, the same question can be asked about max rx pkt len.
>>>>>
>>>>> It looks like the packet size is a very important value which should
>>>>> be set by the application.
>>>>>
>>>>> Previous applications had no option to configure it, so they haven't
>>>>> configured it (probably covering it somehow); I think it was our
>>>>> miss not to supply this info.
>>>>>
>>>>> Let's do it the same way as we do max rx pkt len (this patch's main
>>>>> idea).
>>>>> Later, we can change both to another meaning.
>>>>>
>>>>
>>>> I think "because 'max_rx_pkt_len' does it" is not a good reason to
>>>> introduce a new mandatory config option for applications.
>>>
>>> It is mandatory only if the LRO offload is configured.
>>>
>>>> Will it work if:
>>>> - when the application doesn't provide this value, we use the PMD
>>>> max?
>>>
>>> It may cause a problem if the mbuf size is not enough for the PMD
>>> maximum.
>>
>> OK, this is what I was missing; for this case I was thinking
>> 'max_rx_pkt_len' would be used, but you already explained that the
>> application may want to use different mempools for LRO queues.
>>
>> For this case, shouldn't PMDs take 'rxmode.max_lro_pkt_size' into
>> account and program the device accordingly (in the LRO-enabled case, of
>> course)?
>> This part seems to be missing and should be highlighted to the other
>> PMD maintainers.
>>
>
> All relevant PMDs were modified, and the maintainers are copied on this
> patch series.
>

What was modified is the PMDs announcing a 'dev_info->max_lro_pkt_size'
value, which is good. But the PMDs are not using the user-provided
'rxmode.max_lro_pkt_size' value; I assume they are still using
'max_rx_pkt_len' to configure the device.

+1 to cc'ing the maintainers, but not everyone is able to follow all
patches, and I am not sure every maintainer read the patch and recognized
they should update their driver. I think it is better to highlight these
things in the cover letter / emails etc.

I hope it is more clear now.

Not for this patch, but generally: as a process, I previously proposed
keeping a todo list under the documentation for these kinds of PMD
changes, so that each PMD maintainer could go there to figure out what
changes are required because of others' work, but that didn't go in.
The other option is that whoever updates the library updates all PMDs
fully, but depending on the feature it can be very hard to update other
PMDs.

Overall, these gaps are causing inconsistencies between PMDs, and we need
a proper solution.
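
(To make the missing driver-side step concrete, a hypothetical PMD
fragment; the 'xxx_*' names are invented placeholders, not from any real
driver.)

#include <rte_ethdev_driver.h>

/* Placeholder for the device-specific register programming. */
static int xxx_hw_set_max_rx_size(struct rte_eth_dev *dev, uint32_t size);

static int
xxx_dev_configure(struct rte_eth_dev *dev)
{
	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
	uint32_t max_rx_size = rxmode->max_rx_pkt_len;

	/*
	 * The step this thread says is missing in the drivers: when LRO
	 * is enabled, program the aggregation limit from the
	 * user-provided 'max_lro_pkt_size' instead of falling back to
	 * 'max_rx_pkt_len'.
	 */
	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
		max_rx_size = rxmode->max_lro_pkt_size;

	return xxx_hw_set_max_rx_size(dev, max_rx_size);
}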