From: Dumitru Ceara <dceara@redhat.com>
To: Ferruh Yigit, "Ananyev, Konstantin", dev@dpdk.org
Cc: "Richardson, Bruce"
Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
Date: Mon, 28 Sep 2020 15:58:01 +0200
Message-ID: <503bd08c-6797-c70d-ae24-b16411edf175@redhat.com>
In-Reply-To: <61c1063c-e814-6a78-0c75-3cf96099ea34@intel.com>

On 9/28/20 3:26 PM, Ferruh Yigit wrote:
> On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit
>>> Sent: Monday, September 28, 2020 1:43 PM
>>> To: Ananyev, Konstantin; Dumitru Ceara; dev@dpdk.org
>>> Cc: Richardson, Bruce
>>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>>>
>>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>>> Even though ring interfaces don't support any other TX/RX offloads,
>>>>>>>> they do support sending multi-segment packets, and this should be
>>>>>>>> advertised in order not to break applications that use ring
>>>>>>>> interfaces.
>>>>>>>
>>>>>>> Does the ring PMD support sending multi-segment packets?
>>>>>>>
>>>>>> Yes, sending multi-segment packets works fine with the ring PMD.
>>>>>>
>>>>> Define "works fine" :)
>>>>>
>>>>> Any PMD can put the first mbuf of a chained mbuf onto the ring; in
>>>>> that case, what is the difference between the ones that support
>>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?
>>>>>
>>>>> If the traffic is only from ring PMD to ring PMD, you won't notice
>>>>> the difference between segmented and non-segmented mbufs, and it will
>>>>> look like segmented packets work fine.
>>>>> But if other PMDs are involved in the forwarding, or if the packets
>>>>> need to be processed, will it still work fine?
>>>>>
>>>>>>> As far as I can see, the ring PMD doesn't know about the mbuf
>>>>>>> segments.
>>>>>>
>>>>>> Right, the PMD doesn't care about the mbuf segments, but it
>>>>>> implicitly supports sending multi-segment packets. From what I see,
>>>>>> that's actually the case for most PMDs, in the sense that most don't
>>>>>> even check the DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the
>>>>>> application sends multi-segment packets they are just accepted.
>>>>>
>>>>> As far as I can see, if segmented packets are sent, the ring PMD will
>>>>> put the first mbuf onto the ring without doing anything specific with
>>>>> the next segments.
>>>>>
>>>>> If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect the PMD
>>>>> to detect segmented packets and put each chained mbuf into a separate
>>>>> field in the ring.
>>>>
>>>> Hmm, I wonder why you think this is necessary?
>>>> From my perspective the current behaviour is sufficient for TX-ing
>>>> multi-seg packets over the ring.
>>>>
>>> I was thinking of what some PMDs already do, but right, the ring may
>>> not need to do it.
>>>
>>> Also consider the case where one application is sending multi-segment
>>> packets to the ring, and another application is pulling packets from
>>> the ring and sending them to a PMD that does NOT support multi-seg TX.
>>> I thought the ring PMD claiming multi-seg Tx support should serialize
>>> the packets to cover this case, but instead the ring claiming the
>>> 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the
>>> responsibility to the application.
>>>
>>> So in this case the ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS'
>>> & 'DEV_RX_OFFLOAD_SCATTER'; what do you think?
>>
>> Seems so...
>> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here
>> if DEV_RX_OFFLOAD_SCATTER was not specified?
>>
>
> I think it's better to have a new version of the patch to claim both
> capabilities together.
>

OK, I can do that and send a v2 to claim both caps together.

Just so that it's clear to me, though: these capabilities will only be
advertised, and the current behavior of the ring PMD at tx/rx will remain
unchanged, right?

Thanks,
Dumitru

>>
>>>
>>>>>
>>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>>> support forces applications that use the ring PMD to have a special
>>>>>> case for handling ring interfaces. If the ring PMD advertised
>>>>>> DEV_TX_OFFLOAD_MULTI_SEGS, that would allow upper layers to be
>>>>>> oblivious to the type of the underlying interface.
>>>>>
>>>>> This is not handling a special case for the ring PMD; this is why we
>>>>> have the offload capability flags. Applications should behave
>>>>> according to the capability flags, not per specific PMD.
>>>>>
>>>>> Is there any specific use case you are trying to cover?
>>
>
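
For reference, a rough sketch of what claiming both capabilities in the
ring PMD's dev_infos_get callback could look like. This is only an
illustration against the public rte_eth_dev_info fields, not the actual
v2 patch; the rest of the driver callback is elided.

/* Sketch only, not the actual v2 patch: claim both capabilities in the
 * ring PMD's dev_infos_get callback.  Field and flag names are from the
 * public rte_ethdev API; the remaining driver code is elided. */
#include <rte_common.h>
#include <rte_ethdev.h>

static int
eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
	RTE_SET_USED(dev);

	/* ... existing queue/MAC limits setup ... */

	/* Only the advertised capabilities change; the ring rx/tx burst
	 * functions keep passing mbuf chains through untouched. */
	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;

	return 0;
}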
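
On the application side, "behave according to capability flags, not per
specific PMD" could translate into something like the hypothetical helper
below (the names and the forwarding setup are assumptions, not code from
this thread): a chained mbuf headed for a port that does not claim
DEV_TX_OFFLOAD_MULTI_SEGS can be flattened with rte_pktmbuf_linearize()
before rte_eth_tx_burst().

/* Hypothetical helper, assuming an application that forwards mbufs from
 * a ring port to some other TX port; none of this is code from the
 * thread. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Return non-zero if chained mbufs must be flattened before TX on this
 * port, based on the advertised capability rather than the PMD name. */
static int
tx_needs_linearize(uint16_t tx_port_id)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(tx_port_id, &info) != 0)
		return 1; /* be conservative if the query fails */

	return (info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) == 0;
}

/* Flatten a chained mbuf into one segment when needed (requires enough
 * tailroom in the first segment); returns 0 on success. */
static int
flatten_if_needed(struct rte_mbuf *m, int needs_linearize)
{
	if (needs_linearize && m->nb_segs > 1)
		return rte_pktmbuf_linearize(m);
	return 0;
}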