From: Ferruh Yigit
To: Bruce Richardson
Cc: Dumitru Ceara, dev@dpdk.org, Konstantin Ananyev
Date: Mon, 28 Sep 2020 13:45:04 +0100
Message-ID: <3c26e08c-fca9-597f-a16b-9e5870dacded@intel.com>
In-Reply-To: <20200928110145.GB951@bricha3-MOBL.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.

On 9/28/2020 12:01 PM, Bruce Richardson wrote:
> On Mon, Sep 28, 2020 at 11:25:34AM +0100, Ferruh Yigit wrote:
>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>> Even though ring interfaces don't support any other TX/RX offloads,
>>>>> they do support sending multi-segment packets, and this should be
>>>>> advertised in order not to break applications that use ring
>>>>> interfaces.
>>>>>
>>>>
>>>> Does the ring PMD support sending multi-segment packets?
>>>>
>>>
>>> Yes, sending multi-segment packets works fine with the ring PMD.
>>>
>>
>> Define "works fine" :)
>>
>> All PMDs can put the first mbuf of a chained mbuf into the ring; in
>> that case, what is the difference between the ones that support
>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>>
>> If the traffic is only from ring PMD to ring PMD, you won't notice the
>> difference between segmented and non-segmented mbufs, and it will look
>> like segmented packets work fine.
>> But if other PMDs are involved in the forwarding, or if the packets
>> need to be processed, will it still work fine?
>>
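
To make the behaviour under discussion concrete, here is a minimal sketch of
a ring-backed TX burst function. It is modeled on the idea behind the ring
PMD rather than copied from the driver source, and the type and function
names are illustrative only: the burst function enqueues just the head mbuf
pointer of each packet, so a chained mbuf keeps its segments linked through
mbuf->next and comes out of the ring as the same chain that went in.

#include <rte_mbuf.h>
#include <rte_ring.h>

struct sketch_ring_queue {
	struct rte_ring *rng;   /* shared ring carrying mbuf pointers */
};

static uint16_t
sketch_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct sketch_ring_queue *r = q;

	/* Enqueue the head pointers as-is; segments are never inspected. */
	return (uint16_t)rte_ring_enqueue_burst(r->rng, (void **)bufs,
			nb_bufs, NULL);
}
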
> 
> What other PMDs do or don't do should be irrelevant here, I think. The
> fact that multi-segment packets make it through the ring PMD in valid
> form should be sufficient to mark it as supported.
> 
>>>> As far as I can see, the ring PMD doesn't know about the mbuf
>>>> segments.
>>>>
>>>
>>> Right, the PMD doesn't care about the mbuf segments, but it implicitly
>>> supports sending multi-segment packets. From what I see, it's actually
>>> the case for most of the PMDs, in the sense that most don't even check
>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends
>>> multi-segment packets they are just accepted.
>>>
>>
>> As far as I can see, if segmented packets are sent, the ring PMD will
>> put the first mbuf into the ring without doing anything specific to the
>> next segments.
>>
>> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I expect it should detect
>> the segmented packets and put each chained mbuf into a separate entry
>> in the ring.
>>
> 
> Why, what would be the advantage of that? Right now, if you send a valid
> packet chain into the ring PMD, you get a valid packet chain out again
> on the other side, so I don't see what needs to change about that
> behaviour.
> 

Got it. Konstantin also had a similar comment; I have replied there.

>>>
>>> However, the fact that the ring PMD doesn't advertise this implicit
>>> support forces applications that use the ring PMD to have a special
>>> case for handling ring interfaces. If the ring PMD advertised
>>> DEV_TX_OFFLOAD_MULTI_SEGS, this would allow upper layers to be
>>> oblivious to the type of the underlying interface.
>>>
>>
>> This is not about handling a special case for the ring PMD; this is why
>> we have the offload capability flags. Applications should behave
>> according to the capability flags, not per specific PMD.
>>
>> Is there any specific use case you are trying to cover?
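
For reference, the change described by the patch subject amounts to the ring
PMD advertising the capability in its dev_info callback. The snippet below is
a hand-written sketch in DPDK 20.x style (the DEV_TX_OFFLOAD_MULTI_SEGS flag
name and an int-returning dev_info callback), not the literal patch hunk, and
the function name is invented for illustration:

#include <rte_ethdev_driver.h>

static int
sketch_ring_dev_info_get(struct rte_eth_dev *dev __rte_unused,
		struct rte_eth_dev_info *dev_info)
{
	/* ... existing limits (queue counts, max_rx_pktlen, ...) ... */

	/* Advertise the implicit multi-segment TX support. */
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	return 0;
}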
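
On the application side, "behave according to the capability flags" would
look roughly like the sketch below, using the standard ethdev calls; the
helper name is invented and error handling is trimmed to the essentials:

#include <rte_ethdev.h>

static int
port_supports_multi_seg_tx(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}

An application would typically run this check once per port after probe, and
either request the offload in txmode.offloads when configuring the device or
fall back to linearizing chained mbufs before TX when the flag is absent.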