From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Date: Thu, 6 Aug 2020 22:11:54 +0530
To: Stephen Hemminger
Cc: Ferruh Yigit, Slava Ovsiienko, dpdk-dev, Matan Azrad, Raslan Darawsheh, Thomas Monjalon, Andrew Rybchenko, Ajit Khaparde, Maxime Coquelin, Olivier Matz, David Marchand
In-Reply-To: <20200806092559.614ae91f@hermes.lan>
Subject: Re: [dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf structure
List-Id: DPDK patches and discussions

On Thu, Aug 6, 2020 at 9:56 PM Stephen Hemminger wrote:
>
> On Thu, 6 Aug 2020 16:58:22 +0100
> Ferruh Yigit wrote:
>
> > On 8/4/2020 2:32 PM, Jerin Jacob wrote:
> > > On Mon, Aug 3, 2020 at 6:36 PM Slava Ovsiienko wrote:
> > >>
> > >> Hi, Jerin,
> > >>
> > >> Thanks for the comment, please, see below.
> > >>
> > >>> -----Original Message-----
> > >>> From: Jerin Jacob
> > >>> Sent: Monday, August 3, 2020 14:57
> > >>> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
> > >>>
> > >>> On Mon, Aug 3, 2020 at 4:28 PM Viacheslav Ovsiienko wrote:
> > >>>>
> > >>>> The DPDK datapath in the transmit direction is very flexible.
> > >>>> Applications can build multi-segment packets and manage almost all
> > >>>> data aspects: the memory pools the segments are allocated from, the
> > >>>> segment lengths, and memory attributes like external, registered, etc.
> > >>>>
> > >>>> In the receive direction, the datapath is much less flexible:
> > >>>> applications can only specify the memory pool when configuring the
> > >>>> receive queue, and nothing more. To extend the receive datapath
> > >>>> capabilities, it is proposed to add new fields to the
> > >>>> rte_eth_rxconf structure:
> > >>>>
> > >>>> struct rte_eth_rxconf {
> > >>>>     ...
> > >>>>     uint16_t rx_split_num;   /* number of segments to split */
> > >>>>     uint16_t *rx_split_len;  /* array of segment lengths */
> > >>>>     struct rte_mempool **mp; /* array of segment memory pools */
> > >>>
> > >>> The pool has the packet length it's been configured for,
> > >>> so I think rx_split_len can be removed.
> > >>
> > >> Yes, that is one of the possible options: if the pointer to the array
> > >> of segment lengths is NULL, queue_setup() could take the lengths from
> > >> the pool's properties. But we are talking about packet split; in
> > >> general, it should not depend on pool properties. What if the
> > >> application provides a single pool and just wants to have the tunnel
> > >> header in the first dedicated mbuf?
> > >>
> > >>> This feature is also available in Marvell HW, so it is not specific
> > >>> to one vendor. Maybe we could just mention the use case in the
> > >>> deprecation notice along with the tentative change to rte_eth_rxconf,
> > >>> and the exact details can be worked out at implementation time.
> > >>
> > >> So, if I understand correctly, the struct changes in the commit
> > >> message should be marked as just a possible implementation?
> > >
> > > Yes.
> > >
> > > We may need a detailed discussion on the right abstraction for the
> > > various HW that supports this feature.
> > >
> > > On Marvell HW, we can configure TWO pools for a given eth Rx queue:
> > > one pool can be configured as a small-packet pool and the other as a
> > > large-packet pool, with a threshold value to decide between the two.
> > > For example:
> > > - the small pool is configured with 2K buffers
> > > - the large pool is configured with 10K buffers
> > > - the threshold value is configured as 2K
> > > Any packet of size <= 2K will land in the small pool, everything else
> > > in the large pool. The use case we are targeting is to save memory
> > > space for jumbo frames.
> >
> > Out of curiosity, do you provide two different buffer addresses in the
> > descriptor and HW automatically uses one based on the size, or does the
> > driver use one of the pools based on the configuration and the largest
> > possible packet size?

The latter one.

> I am all for allowing more configuration of the buffer pool.
> But I don't want that to be exposed as a hardware-specific requirement
> in the API for applications. The worst case would be if your API changes
> required:
>
> if (strcmp(dev->driver_name, "marvell") == 0) {
>         // make another mempool for this driver
> }

There are no HW-specific requirements here. If one pool is specified (as in
the existing situation), the HW will create a scatter-gather frame. It is
mostly useful for application use cases that need a single contiguous run
of data for processing (like crypto), and/or for improving Rx/Tx
performance by running in single-segment mode without losing too much
memory.