From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Viacheslav Ovsiienko <viacheslavo@mellanox.com>, dev@dpdk.org
Cc: matan@mellanox.com, rasland@mellanox.com, thomas@monjalon.net,
 jerinjacobk@gmail.com, stephen@networkplumber.org,
 arybchenko@solarflare.com, ajit.khaparde@broadcom.com,
 maxime.coquelin@redhat.com, olivier.matz@6wind.com,
 david.marchand@redhat.com
References: <1596452291-25535-1-git-send-email-viacheslavo@mellanox.com>
 <1596617395-29271-1-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1596617395-29271-1-git-send-email-viacheslavo@mellanox.com>
Date: Thu, 6 Aug 2020 17:31:43 +0100
Subject: Re: [dpdk-dev] [PATCH v2] doc: announce changes to ethdev rxconf
 structure

On 8/5/2020 9:49 AM, Viacheslav Ovsiienko wrote:
> The DPDK datapath in the transmit direction is very flexible.
> The applications can build multi-segment packets and manage
> almost all data aspects - the memory pools where segments
> are allocated from, the segment lengths, the memory attributes
> like external, registered, etc.
>
> In the receiving direction, the datapath is much less flexible,
> the applications can only specify the memory pool to configure
> the receiving queue and nothing more. The packet being received
> can only be pushed to the chain of the mbufs of the same data
> buffer size and allocated from the same pool. In order to extend
> the receiving datapath buffer description it is proposed to add
> the new fields into rte_eth_rxconf structure:
>
> struct rte_eth_rxconf {
>     ...
>     uint16_t rx_split_num; /* number of segments to split */
>     uint16_t *rx_split_len; /* array of segment lengths */
>     struct rte_mempool **mp; /* array of segment memory pools */
>     ...
> };

What is the way to say that the first 14 bytes will go to the first
mempool and the rest to the second one? Or do you have to define fixed
sizes for all segments? And what if that 'rest' part is larger than the
given buffer size for that mempool?

Intel NICs also have header split support, similar to what Jerin
described: header and data go to different buffers, which doesn't
require fixed sizes and needs only two mempools. I am not sure if it
should be integrated into this feature, but we can discuss that later.

Also, there are some valid concerns Andrew highlighted, like how the
application will know whether the PMD supports this feature, and more.
But since these are design/implementation concerns, I think they are
not a blocker for the deprecation notice. Overall, no objection to the
config structure change, hence:

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>

>
> The non-zero value of rx_split_num field configures the receiving
> queue to split ingress packets into multiple segments to the mbufs
> allocated from various memory pools according to the specified
> lengths. The zero value of rx_split_num field provides the
> backward compatibility and queue should be configured in a regular
> way (with single/multiple mbufs of the same data buffer length
> allocated from the single memory pool).
>
> The new approach would allow splitting the ingress packets into
> multiple parts pushed to the memory with different attributes.
> For example, the packet headers can be pushed to the embedded data
> buffers within mbufs and the application data into the external
> buffers attached to mbufs allocated from the different memory
> pools. The memory attributes for the split parts may differ
> either - for example the application data may be pushed into
> the external memory located on the dedicated physical device,
> say GPU or NVMe. This would improve the DPDK receiving datapath
> flexibility preserving compatibility with existing API.
>
> The proposed extended description of receiving buffers might be
> considered by other vendors to be involved into similar features
> support, it is the subject for the further discussion.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Acked-by: Jerin Jacob <jerinjacobk@gmail.com>
>
> ---
> v1->v2: commit message updated, proposed to consider the new
>         fields for supporting similar features by multiple
>         vendors
> ---
>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index acf87d3..b6bdb83 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -99,6 +99,11 @@ Deprecation Notices
>    In 19.11 PMDs will still update the field even when the offload is not
>    enabled.
>
> +* ethdev: add new fields to ``rte_eth_rxconf`` to configure the receiving
> +  queues to split ingress packets into multiple segments according to the
> +  specified lengths into the buffers allocated from the specified
> +  memory pools. The backward compatibility to existing API is preserved.
> +
>  * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done``
>    will be deprecated in 20.11 and will be removed in 21.11.
>    Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status``
>