Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
From: Ferruh Yigit
To: Dumitru Ceara, "Ananyev, Konstantin", dev@dpdk.org
Cc: "Richardson, Bruce"
Date: Mon, 28 Sep 2020 16:02:55 +0100
Message-ID: <1eac5024-4f5e-64a8-7f72-2fd1b67a9d6b@intel.com>
In-Reply-To: <503bd08c-6797-c70d-ae24-b16411edf175@redhat.com>

On 9/28/2020 2:58 PM, Dumitru Ceara wrote:
> On 9/28/20 3:26 PM, Ferruh Yigit wrote:
>> On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit
>>>> Sent: Monday, September 28, 2020 1:43 PM
>>>> To: Ananyev, Konstantin; Dumitru Ceara; dev@dpdk.org
>>>> Cc: Richardson, Bruce
>>>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>>>>
>>>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>>>> Even though ring interfaces don't support any other TX/RX
>>>>>>>>> offloads, they do support sending multi segment packets and this
>>>>>>>>> should be advertised in order to not break applications that use
>>>>>>>>> ring interfaces.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Does ring PMD support sending multi segmented packets?
>>>>>>>>
>>>>>>>
>>>>>>> Yes, sending multi segmented packets works fine with ring PMD.
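
As an aside, "sending multi segmented packets" over a ring port amounts to
something like the sketch below on the application side; the ring PMD's
tx_burst() only enqueues the head mbuf pointer, so the chain travels through
the ring intact. Illustrative only: the helper name, mempool, and segment
sizes are made up.

    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    /* Build a two-segment packet and push it into a ring port. */
    static int
    send_two_seg_pkt(uint16_t port_id, struct rte_mempool *mp)
    {
            struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
            struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);

            if (head == NULL || tail == NULL)
                    goto err;

            /* Put some payload into each segment. */
            if (rte_pktmbuf_append(head, 64) == NULL ||
                rte_pktmbuf_append(tail, 64) == NULL)
                    goto err;

            /* Chain the second mbuf behind the first: nb_segs becomes 2. */
            if (rte_pktmbuf_chain(head, tail) != 0)
                    goto err;
            tail = NULL;            /* now owned by the chain headed by 'head' */

            if (rte_eth_tx_burst(port_id, 0, &head, 1) != 1)
                    goto err;       /* frees the whole chain via 'head' */
            return 0;

    err:
            rte_pktmbuf_free(head); /* rte_pktmbuf_free(NULL) is a no-op */
            rte_pktmbuf_free(tail);
            return -1;
    }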
>>>>>>>
>>>>>>
>>>>>> Define "works fine" :)
>>>>>>
>>>>>> All PMDs can put the first mbuf of the chained mbuf into the ring, in
>>>>>> that case what is the difference between the ones that support
>>>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>>>>>>
>>>>>> If the traffic is only from ring PMD to ring PMD, you won't recognize
>>>>>> the difference between segmented and not-segmented mbufs, and it will
>>>>>> look like segmented packets work fine.
>>>>>> But if other PMDs are involved in the forwarding, or if the packets
>>>>>> need to be processed, will it still work fine?
>>>>>>
>>>>>>>> As far as I can see the ring PMD doesn't know about the mbuf segments.
>>>>>>>>
>>>>>>>
>>>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>>>> supports sending multi segmented packets. From what I see it's
>>>>>>> actually the case for most of the PMDs, in the sense that most don't
>>>>>>> even check the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application
>>>>>>> sends multi segment packets they are just accepted.
>>>>>>>
>>>>>>
>>>>>> As far as I can see, if segmented packets are sent, the ring PMD will
>>>>>> put the first mbuf into the ring without doing anything specific with
>>>>>> the next segments.
>>>>>>
>>>>>> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect it to
>>>>>> detect the segmented packets and put each chained mbuf into a separate
>>>>>> slot in the ring.
>>>>>
>>>>> Hmm, I wonder why you think this is necessary?
>>>>> From my perspective the current behaviour is sufficient for TX-ing
>>>>> multi-seg packets over the ring.
>>>>>
>>>>
>>>> I was thinking based on what some PMDs are already doing, but you are
>>>> right, the ring may not need to do it.
>>>>
>>>> Also consider the case where one application is sending multi segmented
>>>> packets to the ring, and another application is pulling packets from the
>>>> ring and sending them to a PMD that does NOT support multi-seg TX. I
>>>> thought the ring PMD claiming multi-seg Tx support should serialize the
>>>> packets to support this case, but instead the ring claiming the
>>>> 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the
>>>> responsibility to the application.
>>>>
>>>> So in this case the ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS'
>>>> & 'DEV_RX_OFFLOAD_SCATTER', what do you think?
>>>
>>> Seems so...
>>> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here,
>>> if DEV_RX_OFFLOAD_SCATTER was not specified?
>>>
>>
>> I think it is better to have a new version of the patch that claims both
>> capabilities together.
>>
>
> OK, I can do that and send a v2 to claim both caps together.
>
> Just so that it's clear to me though, these capabilities will only be
> advertised and the current behavior of the ring PMD at tx/rx will remain
> unchanged, right?
>

Yes, the PMD behavior won't change; only the PMD's hint to applications
about what it supports will change.
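
Not to prescribe what the v2 should look like, but the kind of change being
discussed is roughly a one-time addition to the ring PMD's dev_info callback,
with the rx/tx burst paths left untouched. A sketch only, assuming the
current shape of eth_dev_info() in drivers/net/ring/rte_eth_ring.c:

    static int
    eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
    {
            struct pmd_internals *internals = dev->data->dev_private;

            dev_info->max_mac_addrs = 1;
            dev_info->max_rx_pktlen = (uint32_t)-1;
            dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
            dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
            dev_info->min_rx_bufsize = 0;

            /* Only a capability hint: the PMD already passes chained mbufs
             * through the ring untouched, so advertise that TX accepts
             * multi-seg packets and that RX may hand them back out. */
            dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
            dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;

            return 0;
    }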
Application should behave according >>>>>> capability flags, >>>>>> not per specific PMD. >>>>>> >>>>>> Is there any specific usecase you are trying to cover? >>> >> >