From: Ray Kinsella <mdr@ashroe.eu>
To: Bruce Richardson <bruce.richardson@intel.com>, Honnappa Nagarahalli
Cc: dev@dpdk.org, Stephen Hemminger, "Ananyev, Konstantin", thomas@monjalon.net, nd
References: <20190417083637.GB1890@bricha3-MOBL.ger.corp.intel.com> <20190418102811.GB1817@bricha3-MOBL.ger.corp.intel.com>
Message-ID: <2ec4da50-d874-865a-6bcc-916ac676be39@ashroe.eu>
Date: Tue, 23 Apr 2019 15:12:22 +0100
In-Reply-To: <20190418102811.GB1817@bricha3-MOBL.ger.corp.intel.com>
Subject: Re: [dpdk-dev] ABI and inline functions
List-Id: DPDK patches and discussions <dev@dpdk.org>

On 18/04/2019 11:28, Bruce Richardson wrote:
> On Thu, Apr 18, 2019 at 04:34:53AM +0000, Honnappa Nagarahalli wrote:
>>> On Wed, Apr 17, 2019 at 05:12:43AM +0000, Honnappa Nagarahalli wrote:
>>>> Hello,
>>>> There was a conversation [1] in the context of the RCU library. I
>>>> thought it needs participation from a broader audience. Summary of the
>>>> context below (please look at [1] for full details):
>>>
>>> Thanks for kicking off this discussion.
>>>
>>>> 1) How do we provide ABI compatibility when the code base contains
>>>> inline functions? Unless we freeze development in these inline
>>>> functions and the corresponding structures, it might not be possible
>>>> to claim ABI compatibility. The application has to be recompiled.
>>> I agree that in some cases the application "might" need to be
>>> recompiled, but on the other hand I also think that there are many
>>> cases where ABI function versioning can still be used to mitigate
>>> things. For example, if we think of a couple of scenarios:
>>>
>>> 1. If everything is inline and variables are allocated by the app, e.g.
>>> a spinlock on the stack, then there is no issue, as everything is
>>> application contained.
>> If there is a bug fix which requires the structure to change, the
>> application would need to recompile. I guess you are talking about a
>> scenario where nothing changed in the inline functions/variables.
>>
> If the application wants the bugfix, then yes. However, if the app is
> unaffected by the bug, then it should have the option of updating DPDK
> without a recompile.

I would also imagine that a bugfix requiring a structure change should be
an extremely rare case ... perhaps for an alignment issue?

>>> 2. If the situation is as in #1, but the structures in question are
>>> passed to non-inline DPDK functions. In this case, any changes to the
>>> structures require the functions taking those structures to be
>>> versioned for old and new structures.
>> I think this can get complicated on the inline function side. The
>> application and the DPDK library will end up having different inline
>> functions. The changed inline function needs to be aware of two
>> structure formats, or the inline function needs to be duplicated (one
>> for each version of the structure). I guess these are the workarounds we
>> have to do.
>>
> No, there is never any need for two versions of the inline functions;
> only the newest version is needed. This is because a newly compiled
> application only ever uses the newest version of the non-inline
> functions. The older versions are only used at runtime, for compatibility
> with pre-compiled apps built with the older inlines.

>>> 3. If instead we have the case, like in rte_ring, where the data
>>> structures are allocated using functions but are used via inlines in
>>> the app. In this case the creation functions (and any other functions
>>> using the structures) need to be versioned so that the application gets
>>> the expected structure back from the creation.
>> The new structure members have to be added such that the previous layout
>> is not affected. Either add them at the end or use the gaps that are
>> left because of cache line alignment.
>>
> Not necessarily. There is nothing that requires the older and newer
> versions of a function to use the same structure. We can rename the
> original structure when versioning the old function, and then create a
> new structure with the original name for later recompiled code using the
> newest ABIs.

>>> It might be useful to think about what other scenarios we have and
>>> which ones are likely to be problematic, especially those that can't be
>>> solved by having multiple function versions.
>>>
>>>> 2) Every function that is not in the direct datapath (fastpath?)
>>>> should not be inline. Exceptions are things like rx/tx burst, ring
>>>> enqueue/dequeue, and packet alloc/free - Stephen
>>>
>>> Agree with this. Anything not data path should not be inline.
>> Yes, very clear on this.
>>
>>> The next question then is, for data path items, how to determine
>>> whether they need to be inline or not. In general, my rule of thumb is:
>>> * anything dealing with bursts of packets should not be inline
>>> * anything that works with a single packet at a time should be inline
>>>
>>> The one exception to this rule is cases where we have to consider
>>> "empty polling" as a likely use case. For example,
>>> rte_ring_dequeue_burst works with bursts of packets, but there is a
>>> valid application use case where a thread could be polling a set of
>>> rings of which only a small number are actually busy. Right now,
>>> polling an empty ring only costs a few cycles, meaning that it's
>>> negligible to have a few polls of empty rings before you get to a busy
>>> one. Having that function not inline will cause that cost to jump
>>> significantly, so it could cause problems. (This leads to the
>>> interesting scenario where ring enqueue is safe to un-inline, while
>>> dequeue is not.)
>> A good way to think about it would be the ratio of the amount of work
>> done in the function to the cost of the function call.
>>
> Yes, I would tend to agree in general. The other thing is the frequency
> of calls, and - as already stated - whether it takes a burst or not.
> Because even if it's a trivial function that takes only 10 cycles and we
> want to uninline it, the cost may double; but if it takes a burst of
> packets and is only used once or twice per burst, the cost per packet
> should still only be a fraction of a cycle.

>>>> 3) Plus synchronization routines: spin/rwlock/barrier, etc. I think
>>>> rcu should be one of such exceptions - it is just another
>>>> synchronization mechanism after all (just a bit more sophisticated). -
>>>> Konstantin
>>>
>>> In general I believe the synchronisation primitives should be inline.
>>> However, it does come down to cost too - if a function takes 300
>>> cycles, do we really care if it takes 305 or 310 instead to make it not
>>> inline? Hopefully most synchronisation primitives are faster than this,
>>> so this situation should not occur.
>>>
>>>> 2) and 3) can be good guidelines as to which functions/APIs can be
>>>> inline. But I believe this guideline exists today too. Are there any
>>>> thoughts to change this?
>>>>
>>>> Coming to 1), I think DPDK cannot provide ABI compatibility unless all
>>>> the inline functions are converted to normal functions and symbol
>>>> versioning is done for those (not bothering about performance).
>>>
>>> I disagree. I think even in the case of #1, we should be able to manage
>>> some changes without breaking ABI.
>> I completely agree with you on trying to keep the ABI break surface
>> small and doing due diligence when one is required. However, I think
>> claiming 100% ABI compatibility all the time (or as frequently as other
>> projects claim) might be tough. IMO, we need to look beyond this to
>> solve the end user problem. Maybe packaging multiple LTS releases with
>> distros when DPDK could not avoid breaking ABI compatibility?
>>
> Having multiple LTS's per distro would be nice, but it's putting a lot
> more work on the distro folks, I think.

I completely disagree with this approach. Anytime I have seen this done,
most frequently with interpreted languages (think multiple versions of
Java and Python concurrently on a system), it has always been a usability
nightmare.

>>>> In this context, does it make sense to say that we will maintain API
>>>> compatibility rather than saying ABI compatibility? This will also
>>>> send the right message to the end users.
>>>
>>> I would value ABI compatibility much higher than API compatibility. If
>>> someone is recompiling the application anyway, making a couple of small
>>> changes (large rework is obviously a different issue) to the code
>>> should not be a massive issue, I hope. On the other hand, ABI
>>> compatibility is needed to allow seamless updates from one version to
>>> another, and it's that ABI compatibility that allows distros to pick up
>>> our latest and greatest versions.
>>>
>> I think it is also important to set the right expectations for DPDK
>> users, i.e. DPDK will police itself better to provide ABI compatibility,
>> but occasionally it might not be possible.
>>
> The trouble here is that a DPDK release, as a product, is either backward
> compatible or not. Either a user can take it as a drop-in replacement for
> the previous version or not. Users do not want to have to go through a
> checklist for each app to see if the app uses only "compatible" APIs or
> not. Same for the distro folks; they are not going to take some libs from
> a release but not others because the ABI is broken for some but not
> others.

Agreed, a DPDK release is either backwards compatible or it isn't, and to
Bruce's point, if it isn't, it matters less whether the cause was a big API
change or a small one - the failure of the stability guarantee is the
significant piece. The reality is that most other system libraries provide
strong guarantees ... to date we have provided very little. If we start
saying "yes, but except when", the exception case very quickly becomes the
general case, and then we are back to the beginning again.

> Regards,
> /Bruce