Date: Mon, 28 Sep 2020 12:01:45 +0100
From: Bruce Richardson
To: Ferruh Yigit
Cc: Dumitru Ceara, dev@dpdk.org
Message-ID: <20200928110145.GB951@bricha3-MOBL.ger.corp.intel.com>
References: <1600425415-31834-1-git-send-email-dceara@redhat.com> <40128dc3-e379-6916-67fa-69b4394cac0a@intel.com> <4210e299-3af5-63b2-717c-7495ba38822b@redhat.com> <5df07200-8a27-98a9-4121-76c44dd652fd@intel.com>
In-Reply-To: <5df07200-8a27-98a9-4121-76c44dd652fd@intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.

On Mon, Sep 28, 2020 at 11:25:34AM +0100, Ferruh Yigit wrote:
> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> > On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> > > On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> > > > Even though ring interfaces don't support any other TX/RX offloads,
> > > > they do support sending multi-segment packets, and this should be
> > > > advertised in order not to break applications that use ring
> > > > interfaces.
> > > >
> > >
> > > Does the ring PMD support sending multi-segment packets?
> > >
> >
> > Yes, sending multi-segment packets works fine with the ring PMD.
>
> Define "works fine" :)
>
> All PMDs can put the first mbuf of a chained mbuf into the ring; in that
> case, what is the difference between the ones that support
> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>
> If the traffic only goes from ring PMD to ring PMD, you won't notice the
> difference between segmented and non-segmented mbufs, and it will look
> like segmented packets work fine.
> But if other PMDs are involved in the forwarding, or if the packets need
> to be processed, will it still work fine?
>
What other PMDs do or don't do should be irrelevant here, I think. The fact
that multi-segment mbufs make it through the ring PMD in valid form should
be sufficient to mark it as supported.
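
To illustrate the point: the ring PMD's TX path only enqueues the mbuf
pointers, so whatever chain goes in comes back out unchanged on the other
side. A simplified sketch of that behaviour (not the literal code in
drivers/net/ring/rte_eth_ring.c, which also keeps statistics) would be
something like:

#include <rte_mbuf.h>
#include <rte_ring.h>

/* Simplified sketch of a ring-PMD-style TX burst: only the mbuf pointers
 * are enqueued, and the segment chain hanging off each mbuf is never
 * touched, so a valid chain on TX is still a valid chain when it is
 * dequeued on the RX side. */
static uint16_t
sketch_ring_tx(struct rte_ring *r, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	return (uint16_t)rte_ring_enqueue_burst(r, (void **)bufs,
			nb_bufs, NULL);
}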
> > > As far as I can see, the ring PMD doesn't know about the mbuf segments.
> > > >
> > >
> > Right, the PMD doesn't care about the mbuf segments, but it implicitly
> > supports sending multi-segment packets. From what I see, that is actually
> > the case for most of the PMDs, in the sense that most don't even check
> > the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends
> > multi-segment packets they are just accepted.
> >
> As far as I can see, if segmented packets are sent, the ring PMD will put
> the first mbuf into the ring without doing anything specific with the next
> segments.
>
> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect it to detect
> the segmented packets and put each chained mbuf into a separate slot in
> the ring.
>
Why, what would be the advantage of that? Right now, if you send a valid
packet chain into the ring PMD, you get a valid packet chain out again on
the other side, so I don't see what needs to change about that behaviour.

> >
> > However, the fact that the ring PMD doesn't advertise this implicit
> > support forces applications that use the ring PMD to have a special case
> > for handling ring interfaces. If the ring PMD advertised
> > DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be oblivious to the type
> > of the underlying interface.
> >
>
> This is not about handling a special case for the ring PMD; this is why
> we have the offload capability flags. An application should behave
> according to the capability flags, not per specific PMD.
>
> Is there any specific use case you are trying to cover?
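
Purely as an illustration of the capability-driven behaviour described
above, the application-side check would boil down to something like the
sketch below; the helper name is invented here, and what the application
does on ports without the flag (e.g. linearizing the chain first) is up to
the application.

#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical helper: only hand chained (multi-segment) mbufs to ports
 * that advertise DEV_TX_OFFLOAD_MULTI_SEGS in their TX offload
 * capabilities.  Uses only the public ethdev API. */
static bool
port_accepts_multi_seg(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}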