To: Slava Ovsiienko, Andrew Rybchenko, dev@dpdk.org
Cc: Matan Azrad, Raslan Darawsheh, Thomas Monjalon, jerinjacobk@gmail.com,
 stephen@networkplumber.org, ajit.khaparde@broadcom.com,
 maxime.coquelin@redhat.com, olivier.matz@6wind.com, david.marchand@redhat.com
References: <1596452291-25535-1-git-send-email-viacheslavo@mellanox.com>
 <32d6e003-d4e7-06b0-39f3-f4a3ba2b6df6@solarflare.com>
From: Ferruh Yigit <ferruh.yigit@intel.com>
Message-ID: <1e555836-cd8d-9b06-f348-f1a0e2d77dbb@intel.com>
Date: Thu, 6 Aug 2020 17:37:27 +0100
Subject: Re: [dpdk-dev] [PATCH] doc: announce changes to ethdev rxconf structure
List-Id: DPDK patches and discussions

On 8/6/2020 5:29 PM, Slava Ovsiienko wrote:
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Thursday, August 6, 2020 19:16
>> To: Andrew Rybchenko; Slava Ovsiienko; dev@dpdk.org
>> Cc: Matan Azrad; Raslan Darawsheh; Thomas Monjalon; jerinjacobk@gmail.com;
>> stephen@networkplumber.org; ajit.khaparde@broadcom.com;
>> maxime.coquelin@redhat.com; olivier.matz@6wind.com; david.marchand@redhat.com
>> Subject: Re: [PATCH] doc: announce changes to ethdev rxconf structure
>>
>> On 8/3/2020 3:31 PM, Andrew Rybchenko wrote:
>>> On 8/3/20 1:58 PM, Viacheslav Ovsiienko wrote:
>>>> The DPDK datapath in the transmit direction is very flexible.
>>>> The application can build multi-segment packets and manage almost
>>>> all data aspects - the memory pools the segments are allocated from,
>>>> the segment lengths, the memory attributes like external, registered, etc.
>>>>
>>>> In the receive direction the datapath is much less flexible: the
>>>> application can only specify the memory pool to configure the receive
>>>> queue, and nothing more. In order to extend the receive datapath
>>>> capabilities it is proposed to add the following new fields to the
>>>> rte_eth_rxconf structure:
>>>>
>>>> struct rte_eth_rxconf {
>>>>     ...
>>>>     uint16_t rx_split_num;    /* number of segments to split */
>>>>     uint16_t *rx_split_len;   /* array of segment lengths */
>>>>     struct rte_mempool **mp;  /* array of segment memory pools */
>>>>     ...
>>>> };
>>>>
>>>> A non-zero value of the rx_split_num field configures the receive
>>>> queue to split ingress packets into multiple segments, placed in mbufs
>>>> allocated from the various memory pools according to the specified
>>>> lengths. A zero value of rx_split_num keeps backward compatibility:
>>>> the queue is configured in the regular way (with single/multiple mbufs
>>>> of the same data buffer length allocated from a single memory pool).
>>>
>>> From the above description it is not 100% clear how it will coexist
>>> with:
>>>  - existing mb_pool argument of the rte_eth_rx_queue_setup()
>>
>> +1
>
> - supposed to be NULL if the array of lengths/pools is used
>
>>>  - DEV_RX_OFFLOAD_SCATTER
>>>  - DEV_RX_OFFLOAD_HEADER_SPLIT
>>> How will the application know that the feature is supported? Limitations?
>>
>> +1
>
> A new flag DEV_RX_OFFLOAD_BUFFER_SPLIT is supposed to be introduced.
> The feature requires that DEV_RX_OFFLOAD_SCATTER is set.
> If DEV_RX_OFFLOAD_HEADER_SPLIT is set, an error is returned.
>
>>> Is it always split by specified/fixed length?
>>> What happens if the header length is actually different?
>>
>> As far as I understand, the intention is to filter specific packets to a
>> queue first and do the split later, so the header length will be fixed...
>
> Not exactly. The filtering should be handled by the rte_flow engine.
> The intention is to provide a more flexible way to describe Rx buffers.
> Currently it is a single pool with fixed-size segments; there is no way
> to split a packet into multiple segments with specified lengths and into
> specified pools. What if the packet payload should be stored in physical
> memory on another device (GPU/storage)? What if caching is not desired
> for the payload (just a forwarding application)? We could provide a
> special non-cacheable pool. What if the packet should be split into
> chunks with specific gaps? For the Tx direction we have the opportunity
> to gather a packet from various pools in any desired combination, but Rx
> is much less flexible.
>
>>>
>>>> The new approach would allow splitting the ingress packets into
>>>> multiple parts pushed to memory with different attributes.
>>>> For example, the packet headers can be pushed to the embedded data
>>>> buffers within mbufs and the application data into external buffers
>>>> attached to mbufs allocated from different memory pools. The memory
>>>> attributes for the split parts may differ as well - for example the
>>>> application data may be pushed into external memory located on a
>>>> dedicated physical device, say GPU or NVMe. This would improve the
>>>> DPDK receive datapath flexibility while preserving compatibility with
>>>> the existing API.

If you don't know the packet types in advance, how can you use fixed
sizes to split a packet? Won't it end up with random parts of the packet
in each mempool?
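
To make the concern concrete, below is roughly what I understand the
proposal to mean, using the field names from this patch (the split sizes
and pool names are only for illustration, nothing below exists today):

    /* Hypothetical fixed split: Ethernet + IPv4 + TCP headers (54 bytes)
     * go to 'hdr_pool', the rest of the packet goes to 'data_pool'. */
    uint16_t seg_lens[2] = { RTE_ETHER_HDR_LEN + 20 + 20, 2048 };
    struct rte_mempool *seg_pools[2] = { hdr_pool, data_pool };

    rxconf.rx_split_num = 2;         /* proposed field */
    rxconf.rx_split_len = seg_lens;  /* proposed field */
    rxconf.mp = seg_pools;           /* proposed field */

A VLAN tagged or IPv6 packet does not match the 54-byte assumption, so
part of its headers would land in 'data_pool' - that is the "random
parts" I am worried about, unless rte_flow steers only matching packets
to this queue first.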
>>>>
>>>> Signed-off-by: Viacheslav Ovsiienko
>>>> ---
>>>>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>  1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>>> index ea4cfa7..cd700ae 100644
>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>> @@ -99,6 +99,11 @@ Deprecation Notices
>>>>    In 19.11 PMDs will still update the field even when the offload is not
>>>>    enabled.
>>>>
>>>> +* ethdev: add new fields to ``rte_eth_rxconf`` to configure the receiving
>>>> +  queues to split ingress packets into multiple segments according to the
>>>> +  specified lengths into the buffers allocated from the specified
>>>> +  memory pools. The backward compatibility to existing API is preserved.
>>>> +
>>>>  * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
>>>>    will be deprecated in 20.11 and will be removed in 21.11.
>>>>    Existing ``rte_eth_rx_descriptor_status`` and
>>>>    ``rte_eth_tx_descriptor_status``
>>>
>
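
For my own understanding, here is a rough end-to-end sketch of how I read
the proposal. DEV_RX_OFFLOAD_BUFFER_SPLIT and the three rxconf fields do
not exist yet (they are only what is proposed in this thread), the pool
sizes and names are arbitrary, and port_id/socket_id/dev_info are assumed
to come from the usual init path:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Small mbufs for headers, larger mbufs (possibly backed by external
     * memory) for the payload part of each packet. */
    struct rte_mempool *hdr_pool = rte_pktmbuf_pool_create("hdr_pool",
            8192, 256, 0, 128 + RTE_PKTMBUF_HEADROOM, socket_id);
    struct rte_mempool *pay_pool = rte_pktmbuf_pool_create("pay_pool",
            8192, 256, 0, 2048 + RTE_PKTMBUF_HEADROOM, socket_id);

    uint16_t lens[2] = { 128, 2048 };
    struct rte_mempool *pools[2] = { hdr_pool, pay_pool };

    struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
    rxconf.rx_split_num = 2;      /* proposed: number of segments      */
    rxconf.rx_split_len = lens;   /* proposed: per-segment lengths     */
    rxconf.mp = pools;            /* proposed: per-segment mempools    */
    /* Per Slava's reply: SCATTER is required, BUFFER_SPLIT is the new
     * capability flag for this feature. */
    rxconf.offloads |= DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_BUFFER_SPLIT;

    /* Per Slava's reply: the mb_pool argument of rte_eth_rx_queue_setup()
     * is NULL when the lengths/pools arrays are used. */
    ret = rte_eth_rx_queue_setup(port_id, 0, 512, socket_id, &rxconf, NULL);

If that matches the intention, it would help to spell out the mb_pool ==
NULL rule and the SCATTER dependency in the deprecation note as well.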