From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 98F33A04C3;
	Mon, 28 Sep 2020 12:25:46 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 19FF61D6F4;
	Mon, 28 Sep 2020 12:25:44 +0200 (CEST)
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by dpdk.org (Postfix) with ESMTP id 2C2171D6F3
 for <dev@dpdk.org>; Mon, 28 Sep 2020 12:25:43 +0200 (CEST)
IronPort-SDR: PLmXMb55GL6JP5Dbco9NkDKnXCb97KIeq4oZ4Y6pWktx4YiG3D6F99UX7DtrzecJBFZIU6gR7L
 qve1J3V1bXbA==
X-IronPort-AV: E=McAfee;i="6000,8403,9757"; a="226119498"
X-IronPort-AV: E=Sophos;i="5.77,313,1596524400"; d="scan'208";a="226119498"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 28 Sep 2020 03:25:39 -0700
IronPort-SDR: X/lMHOiuZeb16VeryFIYFHSLDlgtN4AZ+C7w5HZNhZaXkDWkYCZQEzJykwbFWA5fyNYd8ng5x6
 zDkA7S9QTIgw==
X-IronPort-AV: E=Sophos;i="5.77,313,1596524400"; d="scan'208";a="456771119"
Received: from fyigit-mobl1.ger.corp.intel.com (HELO [10.213.193.117])
 ([10.213.193.117])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 28 Sep 2020 03:25:38 -0700
To: Dumitru Ceara <dceara@redhat.com>, dev@dpdk.org
Cc: bruce.richardson@intel.com
References: <1600425415-31834-1-git-send-email-dceara@redhat.com>
 <40128dc3-e379-6916-67fa-69b4394cac0a@intel.com>
 <4210e299-3af5-63b2-717c-7495ba38822b@redhat.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <5df07200-8a27-98a9-4121-76c44dd652fd@intel.com>
Date: Mon, 28 Sep 2020 11:25:34 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.2.2
MIME-Version: 1.0
In-Reply-To: <4210e299-3af5-63b2-717c-7495ba38822b@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>> Even though ring interfaces don't support any other TX/RX offloads they
>>> do support sending multi segment packets and this should be advertised
>>> in order to not break applications that use ring interfaces.
>>>
>>
>> Does ring PMD support sending multi segmented packets?
>>
> 
> Yes, sending multi segmented packets works fine with ring PMD.
> 

Define "works fine" :)

All PMDs can put the first mbuf of a chained mbuf into the ring; in that case, 
what is the difference between the ones that support 'DEV_TX_OFFLOAD_MULTI_SEGS' 
and the ones that don't?

If the traffic is only from ring PMD to ring PMD, you won't notice the 
difference between segmented and non-segmented mbufs, and it will look like 
segmented packets work fine.
But if other PMDs are involved in the forwarding, or if the packets need to be 
processed, will it still work fine?

>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>
> 
> Right, the PMD doesn't care about the mbuf segments but it implicitly
> supports sending multi segmented packets. From what I see it's actually
> the case for most of the PMDs, in the sense that most don't even check
> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
> segment packets they are just accepted.
>

As far as I can see, if segmented packets are sent, the ring PMD puts the first 
mbuf into the ring without doing anything specific with the next segments.
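
For reference, the TX burst of the ring PMD is roughly the following 
(paraphrased and simplified from 'drivers/net/ring/rte_eth_ring.c', stats 
handling dropped):

  static uint16_t
  eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
  {
          struct ring_queue *r = q;

          /* Only the head mbuf pointer of each packet is enqueued;
           * the 'next' segments are never looked at. */
          return rte_ring_enqueue_burst(r->rng, (void **)bufs, nb_bufs, NULL);
  }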

If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect the PMD to detect 
the segmented packets and put each chained mbuf into a separate slot in the ring.
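Something along these lines, just a rough sketch of what I mean (not existing 
or tested code):

  for (uint16_t i = 0; i < nb_bufs; i++) {
          struct rte_mbuf *seg;

          /* Hypothetical: walk the chain and give each segment its own
           * slot in the ring, instead of enqueuing only the head mbuf. */
          for (seg = bufs[i]; seg != NULL; seg = seg->next)
                  if (rte_ring_enqueue(r->rng, seg) != 0)
                          break; /* ring full, error handling omitted */
  }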

> 
> However, the fact that the ring PMD doesn't advertise this implicit
> support forces applications that use ring PMD to have a special case for
> handling ring interfaces. If the ring PMD would advertise
> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
> to the type of underlying interface.
> 

This is not about handling a special case for the ring PMD; this is why we have 
the offload capability flags. Applications should behave according to the 
capability flags, not per specific PMD.
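
The application can already check the capability and decide what to do, e.g. 
(simplified sketch using the ethdev API):

  struct rte_eth_dev_info dev_info;

  if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
          rte_exit(EXIT_FAILURE, "Cannot get device info\n");

  if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) == 0) {
          /* PMD does not advertise multi-seg TX: linearize or drop
           * chained mbufs before calling rte_eth_tx_burst(). */
  }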

Is there any specific use case you are trying to cover?