From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id CDC83A0353;
	Thu,  6 Aug 2020 18:26:11 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id AD65F1C036;
	Thu,  6 Aug 2020 18:26:11 +0200 (CEST)
Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com
 [209.85.214.196]) by dpdk.org (Postfix) with ESMTP id 9A3231C036
 for <dev@dpdk.org>; Thu,  6 Aug 2020 18:26:09 +0200 (CEST)
Received: by mail-pl1-f196.google.com with SMTP id o1so27896788plk.1
 for <dev@dpdk.org>; Thu, 06 Aug 2020 09:26:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=networkplumber-org.20150623.gappssmtp.com; s=20150623;
 h=date:from:to:cc:subject:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=/qYVGHAEj7I20Cp9TS0SkACdIO3zNvzQZf4cTksGOBM=;
 b=Sr0ReIQTYAuf+kvWaW9fHOmBL24mho+Sc66RPu3aa9WoDImTAY6mLYr5ZAj/rol6Nm
 8+AA68LZgM7wYs+kFAG6TKGgIwtmhmxHBOYCfKA+IPHBxC4lgjZaI+S03ZxwYLxjUARV
 Qghvnvh2xbZLKECKprhtflv68gasxWx8Uh8ok1DVv1wMQrgFQsmwmE1e2lMKGAIw7HRW
 QRGd9+uEylQRTpNbNu4a6ky0s8iktr0+NnqEzlo7qAaHOpi8CrMpjvBRBGSYhlqaGyly
 DFtID9mSLTV10LDGVzwebNou8skwqlGuvRCklOuNiqBktuKQNifejFBFebSqCtZDCtgk
 zmPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=/qYVGHAEj7I20Cp9TS0SkACdIO3zNvzQZf4cTksGOBM=;
 b=U1SdW9kT8X0CML/R+8ktBJH5P//KvE8fkLmxp15daGZljxrBQBP7SFHCF04r1a9A3l
 E+7QwwmnTdOinE/BVnNtkAWxZABfYuW5O6yrimXkjFwM8rbtS5jYuGBVN0PMHh9UOYxX
 oF/mbCz+8omusfWhKGD05Dvf4+v9KWw8BTCMG2U6nl5O/2vMD91ljHbbnilO6R9W8ude
 4oFn3liS5aH2iRSJpWSQNTMiKAsbSA6SCkCswaQiWe1gQ9T1ahozntG53OfECafN4HxA
 GrO9f2o1fYnJmjgDD4ibTLrnHljiWvPO7PiZ1pHgwqNF21h/fOreh0kVXbcp+YS6Drxz
 vFMw==
X-Gm-Message-State: AOAM5335anIxHRFpJC4lQhQKshvKv46I+euNhMiseOrwNN5YKyOCrQ0t
 o+0nLCyTErGeHICWVM1wlWqgLA==
X-Google-Smtp-Source: ABdhPJzWD65HrYiIXIyq+vpB6Hsh50p6kvDTUZ3/1fctNtFL2vL1ZG0h/SkpmH4qaDB1Z8jdncrRzg==
X-Received: by 2002:a17:90a:d78f:: with SMTP id
 z15mr9291377pju.9.1596731168820; 
 Thu, 06 Aug 2020 09:26:08 -0700 (PDT)
Received: from hermes.lan (204-195-22-127.wavecable.com. [204.195.22.127])
 by smtp.gmail.com with ESMTPSA id q13sm8145854pjj.36.2020.08.06.09.26.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 Aug 2020 09:26:08 -0700 (PDT)
Date: Thu, 6 Aug 2020 09:25:59 -0700
From: Stephen Hemminger <stephen@networkplumber.org>
To: Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Jerin Jacob <jerinjacobk@gmail.com>, Slava Ovsiienko
 <viacheslavo@mellanox.com>, dpdk-dev <dev@dpdk.org>, Matan Azrad
 <matan@mellanox.com>, Raslan Darawsheh <rasland@mellanox.com>, Thomas
 Monjalon <thomas@monjalon.net>, Andrew Rybchenko
 <arybchenko@solarflare.com>, Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>, Olivier Matz
 <olivier.matz@6wind.com>, David Marchand <david.marchand@redhat.com>
Message-ID: <20200806092559.614ae91f@hermes.lan>
In-Reply-To: <bd2bcee0-8205-fcd1-0de0-1350b7c07b60@intel.com>
References: <1596452291-25535-1-git-send-email-viacheslavo@mellanox.com>
 <CALBAE1MNt=+UL42vm5Wz5dafPL8FdgBLM7UBmjVmSzJ+Ai98_A@mail.gmail.com>
 <AM4PR05MB32659A45A1E20408361D18FFD24D0@AM4PR05MB3265.eurprd05.prod.outlook.com>
 <CALBAE1NZZLbtKZQ8qM0LBoJAq74i6Xx1rLAZ5dcOSSoQ9keFTg@mail.gmail.com>
 <bd2bcee0-8205-fcd1-0de0-1350b7c07b60@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf
	structure
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On Thu, 6 Aug 2020 16:58:22 +0100
Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 8/4/2020 2:32 PM, Jerin Jacob wrote:
> > On Mon, Aug 3, 2020 at 6:36 PM Slava Ovsiienko <viacheslavo@mellanox.com> wrote:  
> >>
> >> Hi, Jerin,
> >>
> >> Thanks for the comment,  please, see below.
> >>  
> >>> -----Original Message-----
> >>> From: Jerin Jacob <jerinjacobk@gmail.com>
> >>> Sent: Monday, August 3, 2020 14:57
> >>> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> >>> Cc: dpdk-dev <dev@dpdk.org>; Matan Azrad <matan@mellanox.com>;
> >>> Raslan Darawsheh <rasland@mellanox.com>; Thomas Monjalon
> >>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Stephen
> >>> Hemminger <stephen@networkplumber.org>; Andrew Rybchenko
> >>> <arybchenko@solarflare.com>; Ajit Khaparde
> >>> <ajit.khaparde@broadcom.com>; Maxime Coquelin
> >>> <maxime.coquelin@redhat.com>; Olivier Matz <olivier.matz@6wind.com>;
> >>> David Marchand <david.marchand@redhat.com>
> >>> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
> >>>
> >>> On Mon, Aug 3, 2020 at 4:28 PM Viacheslav Ovsiienko
> >>> <viacheslavo@mellanox.com> wrote:  
> >>>>
> >>>> The DPDK datapath in the transmit direction is very flexible.
> >>>> The applications can build multisegment packets and manage almost all
> >>>> data aspects - the memory pools where segments are allocated from, the
> >>>> segment lengths, the memory attributes like external, registered, etc.
> >>>>
> >>>> In the receiving direction, the datapath is much less flexible: the
> >>>> applications can only specify the memory pool to configure the
> >>>> receiving queue, and nothing more. In order to extend the receiving
> >>>> datapath capabilities, it is proposed to add new fields to the
> >>>> rte_eth_rxconf structure:
> >>>>
> >>>> struct rte_eth_rxconf {
> >>>>     ...
> >>>>     uint16_t rx_split_num; /* number of segments to split */
> >>>>     uint16_t *rx_split_len; /* array of segment lengths */
> >>>>     struct rte_mempool **mp; /* array of segment memory pools */  
> >>>
> >>> The pool has the packet length it's been configured for.
> >>> So I think rx_split_len can be removed.  
> >>
> >> Yes, that is one of the possible options - if the pointer to the array of
> >> segment lengths is NULL, queue_setup() could use the lengths from the pool's properties.
> >> But we are talking about packet split; in general, it should not depend
> >> on pool properties. What if the application provides a single pool
> >> and just wants to have the tunnel header in the first dedicated mbuf?
> >>  
> >>>
> >>> This feature is also available in Marvell HW, so it is not specific to one
> >>> vendor. Maybe we could just mention the use case and the tentative change
> >>> in rte_eth_rxconf in the deprecation notice, and the exact details can be
> >>> worked out at the time of implementation.
> >>>  
> >> So, if I understand correctly, the struct changes in the commit message
> >> should be marked as just a possible implementation?  
> > 
> > Yes.
> > 
> > We may need to have a detailed discussion on the correct abstraction for the
> > various HW available with this feature.
> > 
> > On Marvell HW, we can configure TWO pools for a given eth Rx queue:
> > one pool can be configured as a small packet pool and the other as a
> > large packet pool, and there is a threshold value to decide between the
> > small and the large pool.
> > For example:
> > - The small pool is configured with 2K
> > - The large pool is configured with 10K
> > - The threshold value is configured as 2K
> > Any packet of size <= 2K will land in the small pool, and others in the large pool.
> > The use case we are targeting is to save memory space for jumbo frames.  
> 
> Out of curiosity, do you provide two different buffer addresses in the
> descriptor and the HW automatically uses one based on the size, or does the
> driver use one of the pools based on the configuration and the largest
> possible packet size?
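
The two-pool threshold scheme described above could be modeled as follows. This is an illustrative sketch only; `pkt_pool` and `select_rx_pool` are hypothetical names, not actual DPDK or Marvell SDK identifiers:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the two-pool scheme: packets at or below the
 * threshold are taken from the small pool, larger ones from the large
 * pool. Names are illustrative, not real DPDK/Marvell API. */
struct pkt_pool {
    size_t buf_size;   /* buffer size this pool was created with */
};

static const struct pkt_pool *
select_rx_pool(const struct pkt_pool *small, const struct pkt_pool *large,
               size_t threshold, size_t pkt_len)
{
    return pkt_len <= threshold ? small : large;
}
```

With a 2K small pool, a 10K large pool, and a 2K threshold, a 64-byte packet selects the small pool and a 9000-byte jumbo frame selects the large pool.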

I am all for allowing more configuration of the buffer pool,
but I don't want that to be exposed as a hardware-specific requirement in the
API for applications. The worst case would be if your API change required:

  if (strcmp(dev->driver_name, "marvell") == 0) {
     // make another mempool for this driver
  }
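
By contrast, the split proposed earlier in the thread stays generic: the application describes the segments once in rte_eth_rxconf and no driver-specific branching is needed. A standalone sketch of how a driver might map a packet onto the configured segment lengths (field names taken from the proposal in this thread; this is not an existing DPDK API, and the remainder is assumed to land in the last segment):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch based on the proposed rx_split_len/rx_split_num fields:
 * count how many segments a packet of pkt_len would occupy, with
 * any remainder landing in the last segment. */
static unsigned
segments_for_packet(const uint16_t *rx_split_len, uint16_t rx_split_num,
                    uint32_t pkt_len)
{
    unsigned seg = 0;

    while (seg + 1 < rx_split_num && pkt_len > rx_split_len[seg]) {
        pkt_len -= rx_split_len[seg];
        seg++;
    }
    return seg + 1;
}
```

For example, with segment lengths {64, 128, 2048}, a 50-byte packet fits in the first segment, a 100-byte packet spills into the second, and a 5000-byte packet uses all three.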