From mboxrd@z Thu Jan 1 00:00:00 1970
To: "Ananyev, Konstantin" , Dumitru Ceara , "dev@dpdk.org"
Cc: "Richardson, Bruce"
References: <1600425415-31834-1-git-send-email-dceara@redhat.com>
 <40128dc3-e379-6916-67fa-69b4394cac0a@intel.com>
 <4210e299-3af5-63b2-717c-7495ba38822b@redhat.com>
 <5df07200-8a27-98a9-4121-76c44dd652fd@intel.com>
From: Ferruh Yigit
Message-ID: <1d1c2d4a-ecee-db54-9790-961c143363df@intel.com>
Date: Mon, 28 Sep 2020 13:42:44 +0100
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
List-Id: DPDK patches and discussions

On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>> Even though ring interfaces don't support any other TX/RX offloads,
>>>>> they do support sending multi-segment packets, and this should be
>>>>> advertised in order not to break applications that use ring
>>>>> interfaces.
>>>>>
>>>>
>>>> Does the ring PMD support sending multi-segment packets?
>>>>
>>>
>>> Yes, sending multi-segment packets works fine with the ring PMD.
>>>
>>
>> Define "works fine" :)
>>
>> All PMDs can put the first mbuf of a chained mbuf into the ring; in that
>> case, what is the difference between the ones that support
>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>>
>> If the traffic is only from ring PMD to ring PMD, you won't notice the
>> difference between segmented and non-segmented mbufs, and it will look
>> like segmented packets work fine.
>> But if other PMDs are involved in the forwarding, or if the packets need
>> to be processed, will it still work fine?
>>
>>>> As far as I can see, the ring PMD doesn't know about the mbuf segments.
>>>>
>>>
>>> Right, the PMD doesn't care about the mbuf segments, but it implicitly
>>> supports sending multi-segment packets. From what I see, that's actually
>>> the case for most of the PMDs, in the sense that most don't even check
>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends
>>> multi-segment packets they are just accepted.
>>
>>
>> As far as I can see, if segmented packets are sent, the ring PMD will put
>> the first mbuf into the ring without doing anything specific with the
>> next segments.
>>
>> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect the PMD to
>> detect segmented packets and put each chained mbuf into a separate slot
>> in the ring.
>
> Hmm, I wonder why you think this is necessary?
> From my perspective the current behaviour is sufficient for TX-ing
> multi-seg packets over the ring.
>

I was thinking based on what some PMDs already do, but you are right, the
ring may not need to do it.

Also, consider the case where one application is sending multi-segment
packets to the ring, and another application is pulling packets from the
ring and sending them to a PMD that does NOT support multi-seg TX.
I thought a ring PMD claiming multi-seg Tx support should serialize the
packets to cover this case, but instead the ring claiming the
'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the responsibility
to the application.

So in this case the ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
'DEV_RX_OFFLOAD_SCATTER', what do you think?

>>
>>>
>>> However, the fact that the ring PMD doesn't advertise this implicit
>>> support forces applications that use the ring PMD to have a special
>>> case for handling ring interfaces. If the ring PMD would advertise
>>> DEV_TX_OFFLOAD_MULTI_SEGS, this would allow upper layers to be
>>> oblivious to the type of underlying interface.
>>>
>>
>> This is not about handling a special case for the ring PMD; this is why
>> we have the offload capability flags. Applications should behave
>> according to the capability flags, not per specific PMD.
>>
>> Is there any specific use case you are trying to cover?
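
For reference, the ring PMD's TX path is little more than a pointer
enqueue, which is why chained mbufs appear to go through regardless of the
flag. A simplified sketch of what drivers/net/ring does (stats handling and
error paths trimmed, so treat it as an approximation rather than the exact
driver code):

#include <rte_mbuf.h>
#include <rte_ring.h>

struct ring_queue {
	struct rte_ring *rng;       /* the rte_ring backing this queue */
};

/* Only the head mbuf pointer of each (possibly chained) packet is
 * enqueued; the chain itself is never walked, so whether segments
 * "work" depends entirely on what the consumer does with them. */
static uint16_t
eth_ring_tx_sketch(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct ring_queue *r = q;

	return (uint16_t)rte_ring_enqueue_burst(r->rng, (void **)bufs,
						nb_bufs, NULL);
}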
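
If we agree on the above, the PMD side of the change would be a couple of
lines in the ring PMD's dev_info callback, roughly like the sketch below
(the callback name and the surrounding fields are illustrative, not the
exact patch):

#include <rte_ethdev.h>

static int
eth_dev_info_sketch(struct rte_eth_dev *dev __rte_unused,
		    struct rte_eth_dev_info *dev_info)
{
	/* ... existing fields: queue counts, max_rx_pktlen, etc. ... */

	/* Chained mbufs are accepted as-is on TX, and multi-segment
	 * packets may be handed to the application on RX. */
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;

	return 0;
}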
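
And on the application side, behaving according to the capability flags
rather than per specific PMD could look like the hypothetical helper below,
using rte_pktmbuf_linearize() as the fallback when multi-seg TX is not
advertised (in a real application the capabilities would be queried once at
init rather than per packet):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: make 'm' acceptable for rte_eth_tx_burst() on
 * 'port_id'. Returns 0 on success, -1 if the packet cannot be fixed up. */
static int
prepare_pkt_for_port(uint16_t port_id, struct rte_mbuf *m)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	if (m->nb_segs > 1 &&
	    !(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)) {
		/* Port does not advertise multi-seg TX: collapse the chain
		 * into a single segment (fails if it doesn't fit). */
		return rte_pktmbuf_linearize(m);
	}

	return 0;
}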