From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Matan Azrad, Dekel Peled, john.mcnamara@intel.com, marko.kovacevic@intel.com, nhorman@tuxdriver.com, ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com, anatoly.burakov@intel.com, xuanziyang2@huawei.com, cloud.wangxiaoyun@huawei.com, zhouguoyang@huawei.com, wenzhuo.lu@intel.com, konstantin.ananyev@intel.com, Shahaf Shuler, Slava Ovsiienko, rmody@marvell.com, shshaikh@marvell.com, maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com, yongwang@vmware.com, Thomas Monjalon, arybchenko@solarflare.com, jingjing.wu@intel.com, bernard.iremonger@intel.com
Cc: dev@dpdk.org
References: <4c64b7941e1e9416ae7946cb44d50a01888d70c4.1573129825.git.dekelp@mellanox.com> <0523c7d7-bc97-7e30-c024-e578f9548797@intel.com> <0a1708e5-70ba-16f8-29b0-bef8d4f20f80@intel.com> <60dc4ef1-7e9a-5073-c534-e3b7a42a9abf@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
Message-ID: <5596d453-4343-e702-9cb1-d70ab06dcfff@intel.com>
Date: Mon, 11 Nov 2019 12:21:33 +0000
List-Id: DPDK patches and discussions

On 11/11/2019 11:33 AM, Matan Azrad wrote:
>
>
> From: Ferruh Yigit
>> On 11/9/2019 6:20 PM, Matan Azrad wrote:
>>> Hi
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>>>
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> From: Ferruh Yigit
>>>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>>> 							RTE_ETHER_MAX_LEN;
>>>>>>>>>>> 	}
>>>>>>>>>>>
>>>>>>>>>>> +	/*
>>>>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>>>>>>>>>> +	 * size is supported by the configured device.
>>>>>>>>>>> +	 */
>>>>>>>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>>>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
>>>>>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>>>>>> +		if (ret != 0)
>>>>>>>>>>> +			goto rollback;
>>>>>>>>>>> +	}
>>>>>>>>>>> +
>>>>>>>>>>
>>>>>>>>>> This check forces applications that enable LRO to provide a
>>>>>>>>>> 'max_lro_pkt_size' config value.
>>>>>>>>>
>>>>>>>>> Yes. (We can break an API, we noticed it.)
>>>>>>>>
>>>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>>>> With this check, if the application requested the LRO offload but did not
>>>>>>>> provide a 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>>>
>>>>>>> Yes
>>>>>>>> Can there be a case where the application is good with whatever the PMD can
>>>>>>>> support as max?
>>>>>>> Yes, there can be - you know, we can do everything we want, but it is
>>>>>>> better to be consistent:
>>>>>>> Since the max rx pkt len field is mandatory for the JUMBO offload, max
>>>>>>> lro pkt len should be mandatory for the LRO offload.
>>>>>>>
>>>>>>> So your question is actually why both the non-LRO and the LRO packet max
>>>>>>> sizes are mandatory...
>>>>>>>
>>>>>>> I think these are important values for net application management.
>>>>>>> Also good for mbuf size management.
>>>>>>>
>>>>>>>>>
>>>>>>>>>> - Why is it mandatory now? How was it working before if it is a
>>>>>>>>>> mandatory value?
>>>>>>>>>
>>>>>>>>> It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame
>>>>>>>>> offload.
>>>>>>>>> So now, when the user configures an LRO offload he must set max lro pkt
>>>>>>>>> len.
>>>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>>>>> configurations and behaviors, they should follow the same logic.
>>>>>>>>>
>>>>>>>>> This parameter defines the LRO behavior well.
>>>>>>>>> Before this, each PMD took its own interpretation of what the maximum
>>>>>>>>> size for LRO aggregated packets should be.
>>>>>>>>> Now, the user must say what his intention is, and the ethdev can limit it
>>>>>>>>> according to the device capability.
>>>>>>>>> This way the PMD can also organize/optimize its data-path more.
>>>>>>>>> Also, the application can create different mempools for LRO queues to
>>>>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>>>
>>>>>>>>>> - What happens if the PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>>>> This patch also updates all the PMDs that support LRO to a non-0 value.
>>>>>>>>
>>>>>>>> Of course I can see the updates Matan, my point is "What happens
>>>>>>>> if the PMD doesn't provide 'max_lro_pkt_size'":
>>>>>>>> 1) There is no check for it, right, so it is acceptable?
>>>>>>>
>>>>>>> There is a check.
>>>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>>>
>>>>>>>> 2) Are we making this field mandatory for PMDs to provide? It is
>>>>>>>> easy to make new fields mandatory for PMDs, but is this really necessary?
>>>>>>>
>>>>>>> Yes, for consistency.
>>>>>>>
>>>>>>>>>
>>>>>>>>> the same as max rx pkt len, no?
>>>>>>>>>
>>>>>>>>>> - What do you think about setting the 'max_lro_pkt_size' config value to
>>>>>>>>>> what the PMD provided if the application doesn't provide it?
>>>>>>>>> Same answers as above.
>>>>>>>>>
>>>>>>>>
>>>>>>>> If the application doesn't care about the value, as has been the case till now,
>>>>>>>> and has not provided an explicit 'max_lro_pkt_size', why not have the ethdev
>>>>>>>> level use the value provided by the PMD instead of failing?
>>>>>>>
>>>>>>> Again, the same question can be asked about max rx pkt len.
>>>>>>>
>>>>>>> It looks like the packet size is a very important value which should be set by
>>>>>>> the application.
>>>>>>>
>>>>>>> Previous applications had no option to configure it, so they haven't
>>>>>>> configured it (probably covered it somehow); I think it is our miss not to
>>>>>>> supply this info.
>>>>>>>
>>>>>>> Let's do it the same way as we do max rx pkt len (as this patch's main idea).
>>>>>>> Later, we can change both to another meaning.
>>>>>>>
>>>>>>
>>>>>> I think that 'max_rx_pkt_len' doing it is not a good reason to introduce a
>>>>>> new mandatory config option for the application.
>>>>>
>>>>> It is mandatory only if the LRO offload is configured.
>>>>>
>>>>>> Will it work, if:
>>>>>> - If the application doesn't provide this value, use the PMD max
>>>>>
>>>>> It may cause a problem if the mbuf size is not enough for the PMD maximum.
>>>>
>>>> OK, this is what I was missing; for this case I was thinking
>>>> max_rx_pkt_len would be used, but you already explained that the
>>>> application may want to use different mempools for LRO queues.
>>>>
>>> So, do you agree with the idea?
>>>
>>>> For this case shouldn't PMDs take 'rxmode.max_lro_pkt_size' into
>>>> account and program the device accordingly (of course in the LRO enabled
>>>> case)?
>>>> This part seems missing and should be highlighted to other PMD maintainers.
>>>
>>>
>>> Yes, you are right.
>>> PMDs must limit the LRO aggregated packet size according to the new field,
>>> and it is probably very hard for the patch introducer to understand how to do
>>> it for each PMD.
>>>
>>> I think each new configuration requires other maintainers/developers to
>>> adjust their own PMD code to the new configuration, and it should be done in
>>> a limited time.
>>
>> Agree.
>> But experience showed that this synchronization is not as easy as it sounds:
>> whoever is changing the interface/library says other PMDs should reflect the
>> change, but most of the time the other PMD maintainers are not aware of it, or
>> if they are, they have other priorities for the release. So the changes should
>> be made in a way that gives PMDs more time to adapt, and during this time the
>> library change shouldn't break other PMDs.
>>
>
> Yes.
>
>>> My suggestion here:
>>> 1. Reserve the info field and the configuration field for rc2 (if it is
>>> critical not to break ABI for rc3).
>>> 2. Merge the ethdev patch at the start of rc3.
>>> 3. Request each relevant PMD to adjust its PMD to the new configuration
>>> by the end of rc3.
>>> Note: this should be a small change and only for ~5 PMDs:
>>> 	a. Introduce the info field according to the device ability.
>>> 	b. For each LRO queue: use the LRO max size configuration instead
>>> 	of the current max rx pkt len configuration (looks like a small condition).
>>>
>>> What do you think?
>>
>> There is already a v6 which only updates the dev_info fields to have the
>> 'max_lro_pktlen' field; the PMD updates there also look safe, so I think we
>> can go with it for rc2.
>>
>
> It doesn't make sense to expose the info field without the configuration.
>
>
>> For the configuration part, I suggest deferring it to the next release, which
>> gives more time for discussion and enough time for other PMDs to implement it.
>>
>>
>> And regarding the configuration, right now devices are already configured to
>> limit the packet size to 'max_rx_pkt_len'; it can be an optimization to
>> increase it to 'max_lro_pkt_len' for the queues where LRO is supported. Why not
>> make this configuration more explicit with a specific API, as Konstantin
>> suggested [1]? This way it only affects the applications that are interested
>> and the PMDs that want to support this.
>> The current implementation is under 'rte_eth_dev_configure()', which is used
>> by all DPDK applications, so the impact of changing it is much larger; it also
>> makes it mandatory for applications to provide this config option when LRO is
>> enabled. An explicit API gives the same result without making the config
>> option mandatory.
>>
>> [1]
>> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
>
> Please see my answers to Konstantin regarding this topic.
>
>
>
> One more option:
> In order not to break PMDs because of this feature:
> 0 in the capability field means the PMD doesn't support an LRO-specific
> limitation, so if the application configuration is not the same as
> max_rx_pkt_len the validation will fail.
>
> I don't see that this is a mandatory field if LRO is enabled, am I missing something?

And the current implementation does so by failing configure(); the effect on the
applications is my first concern.
My second concern is when the application supplies the proper values but the PMD
does not do anything with them, without letting the application know.
That is why I think an explicit API makes this clear, and it is only required by
applications that want to use it.

Something similar can be done with the following; this also doesn't require both
application and PMD changes, wdyt?

ethdev, configure():

	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
		if (dev_conf->rxmode.max_lro_pktlen) {
			if (dev_info.max_lro_pktlen)
				validate(rxmode.max_lro_pktlen, dev_info.max_lro_pktlen)
			else if (dev_info.max_rx_pktlen)
				validate(rxmode.max_lro_pktlen, dev_info.max_rx_pktlen)
		}
	}

in PMD:

	if (LRO) {
		queue.max_pktlen = rxmode.max_lro_pktlen ?
				rxmode.max_lro_pktlen : rxmode.max_rx_pktlen;
	}
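
For reference, a more complete C sketch of the same fallback check written as
standalone helpers; the function names and parameters here are illustrative
only, not the actual ethdev code:

	#include <stdint.h>
	#include <errno.h>

	/*
	 * Validate the requested LRO aggregated packet size against the device
	 * limits. Fall back to the plain Rx limit when the PMD does not
	 * advertise an LRO-specific one (capability == 0).
	 */
	static int
	check_lro_pkt_size_sketch(uint32_t conf_lro_pktlen,   /* rxmode.max_lro_pktlen */
				  uint32_t dev_max_lro_pktlen, /* dev_info.max_lro_pktlen */
				  uint32_t dev_max_rx_pktlen)  /* dev_info.max_rx_pktlen */
	{
		if (conf_lro_pktlen == 0)
			return 0; /* nothing requested, nothing to check */

		/* PMD advertises an LRO-specific limit, enforce it. */
		if (dev_max_lro_pktlen != 0)
			return conf_lro_pktlen <= dev_max_lro_pktlen ? 0 : -EINVAL;

		/* Otherwise fall back to the generic Rx packet limit. */
		if (dev_max_rx_pktlen != 0)
			return conf_lro_pktlen <= dev_max_rx_pktlen ? 0 : -EINVAL;

		return 0;
	}

	/* PMD side: pick the per-queue limit, defaulting to max_rx_pkt_len. */
	static uint32_t
	lro_queue_max_pktlen_sketch(int lro_enabled, uint32_t conf_lro_pktlen,
				    uint32_t conf_max_rx_pktlen)
	{
		if (lro_enabled && conf_lro_pktlen != 0)
			return conf_lro_pktlen;
		return conf_max_rx_pktlen;
	}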
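
Going back to the explicit API suggestion [1], the application side would reduce
to one optional call after configure(). A hypothetical usage sketch, keeping in
mind 'rte_eth_dev_set_max_lro()' is only a proposal at this point and does not
exist in ethdev:

	#include <stdint.h>
	#include <stdio.h>

	/* Proposed API from [1]; declared here only to sketch the usage. */
	int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);

	static void
	tune_lro_sketch(uint16_t port_id)
	{
		uint32_t max_lro = 64 * 1024; /* desired aggregated size, bytes */

		/* Only applications interested in tuning LRO call this;
		 * everyone else keeps the PMD default. */
		if (rte_eth_dev_set_max_lro(port_id, max_lro) != 0)
			printf("port %u: max LRO size not accepted\n", port_id);
	}

This keeps rte_eth_dev_configure() unchanged for applications that don't care
about the LRO aggregated size.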