From: Ferruh Yigit
To: "Ananyev, Konstantin", Dumitru Ceara, dev@dpdk.org
Cc: "Richardson, Bruce"
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
Date: Mon, 28 Sep 2020 14:26:50 +0100
Message-ID: <61c1063c-e814-6a78-0c75-3cf96099ea34@intel.com>

On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Monday, September 28, 2020 1:43 PM
>> To: Ananyev, Konstantin; Dumitru Ceara; dev@dpdk.org
>> Cc: Richardson, Bruce
>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>>
>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>> Even though ring interfaces don't support any other TX/RX offloads,
>>>>>>> they do support sending multi-segment packets, and this should be
>>>>>>> advertised in order not to break applications that use ring
>>>>>>> interfaces.
>>>>>>>
>>>>>>
>>>>>> Does the ring PMD support sending multi-segment packets?
>>>>>>
>>>>>
>>>>> Yes, sending multi-segment packets works fine with the ring PMD.
>>>>>
>>>>
>>>> Define "works fine" :)
>>>>
>>>> All PMDs can put the first mbuf of a chained mbuf into the ring; in
>>>> that case, what is the difference between the ones that support
>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>>>>
>>>> If the traffic is only from ring PMD to ring PMD, you won't notice the
>>>> difference between segmented and non-segmented mbufs, and it will look
>>>> like segmented packets work fine.
>>>> But if other PMDs are involved in the forwarding, or if the packets
>>>> need to be processed, will it still work fine?
>>>>
>>>>>> As far as I can see, the ring PMD doesn't know about the mbuf segments.
>>>>>>
>>>>>
>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>> supports sending multi-segment packets. From what I see this is
>>>>> actually the case for most PMDs, in the sense that most don't even
>>>>> check the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends
>>>>> multi-segment packets they are just accepted.
>>>>>
>>>>
>>>> As far as I can see, if segmented packets are sent, the ring PMD will
>>>> put the first mbuf into the ring without doing anything specific with
>>>> the next segments.
>>>>
>>>> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect the PMD to
>>>> detect segmented packets and put each mbuf of the chain into a separate
>>>> slot in the ring.
>>>
>>> Hmm, I wonder why you think this is necessary?
>>> From my perspective the current behaviour is sufficient for TX-ing
>>> multi-seg packets over the ring.
>>>
>>
>> I was thinking based on what some PMDs are already doing, but you are
>> right, the ring PMD may not need to do it.
>>
>> Also consider the case where one application is sending multi-segment
>> packets to the ring, and another application is pulling packets from the
>> ring and sending them to a PMD that does NOT support multi-seg Tx. I
>> thought the ring PMD, by claiming multi-seg Tx support, should serialize
>> the packets to cover this case, but instead the ring PMD claiming the
>> 'DEV_RX_OFFLOAD_SCATTER' capability can work, by pushing the
>> responsibility to the application.
>>
>> So in this case the ring PMD should advertise both
>> 'DEV_TX_OFFLOAD_MULTI_SEGS' & 'DEV_RX_OFFLOAD_SCATTER'; what do you think?
>
> Seems so...
> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here
> if DEV_RX_OFFLOAD_SCATTER was not specified?
>

I think it is better to have a new version of the patch that claims both
capabilities together.

>
>>
>>>>
>>>>>
>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>> support forces applications that use the ring PMD to have a special
>>>>> case for handling ring interfaces. If the ring PMD advertised
>>>>> DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be oblivious to the type
>>>>> of the underlying interface.
>>>>>
>>>>
>>>> This is not about handling a special case for the ring PMD; this is why
>>>> we have the offload capability flags. Applications should behave
>>>> according to the capability flags, not per specific PMD.
>>>>
>>>> Is there any specific use case you are trying to cover?
>
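
For reference, below is a minimal sketch of the two points discussed above.
It is not the actual drivers/net/ring code, and the sketch_*/port_* helper
names are made up for illustration: a ring-PMD-style Tx burst only enqueues
the head mbuf pointer and never walks mbuf->next, so a chained mbuf crosses
the ring as a single object; the proposed new version of the patch would
then advertise DEV_TX_OFFLOAD_MULTI_SEGS and DEV_RX_OFFLOAD_SCATTER
together, so the side pulling mbufs out of the ring knows it may see chains.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/*
 * Ring-PMD-style Tx burst: mbuf pointers are enqueued as-is and mbuf->next
 * is never inspected, so a multi-segment chain passes through unchanged.
 */
static uint16_t
sketch_ring_tx(struct rte_ring *r, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	return rte_ring_enqueue_burst(r, (void **)bufs, nb_bufs, NULL);
}

/*
 * Sketch of the dev_info change discussed above: claim both capabilities
 * together. (The real callback also fills queue counts, link speed, etc.)
 */
static void
sketch_fill_ring_dev_info(struct rte_eth_dev_info *dev_info)
{
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
}

/*
 * Application side: decide from the advertised capability flags, not from
 * the driver name, whether chained mbufs may be sent to a port.
 */
static int
port_accepts_multi_seg(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}

With both flags advertised, an application can base its behaviour purely on
the reported capabilities, as in port_accepts_multi_seg() above, instead of
special-casing ring ports.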