From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Matan Azrad, Chengchang Tang, dev@dpdk.org
Cc: maryam.tahhan@intel.com, linuxarm@huawei.com, wenzhuo.lu@intel.com, NBU-Contact-Thomas Monjalon, arybchenko@solarflare.com
Date: Thu, 3 Sep 2020 16:00:20 +0100
Subject: Re: [dpdk-dev] [PATCH v3 1/4] ethdev: add a field for rxq info structure

On 9/2/2020 8:19 AM, Matan Azrad wrote:
> 
> Hi Chengchang
> 
> From: Chengchang Tang
>> Hi, Matan
>>
>> On 2020/9/1 23:33, Matan Azrad wrote:
>>>
>>> Hi Chengchang
>>>
>>> Please see some question below.
>>>
>>> From: Chengchang Tang
>>>> Add a field named rx_buf_size in rte_eth_rxq_info to indicate the
>>>> buffer size used in receiving packets for HW.
>>>>
>>>> In this way, upper-layer users can get this information by calling
>>>> rte_eth_rx_queue_info_get.
>>>>
>>>> Signed-off-by: Chengchang Tang
>>>> Reviewed-by: Wei Hu (Xavier)
>>>> Acked-by: Andrew Rybchenko
>>>> ---
>>>>  lib/librte_ethdev/rte_ethdev.h | 2 ++
>>>>  1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
>>>> index 70295d7..9fed5cb 100644
>>>> --- a/lib/librte_ethdev/rte_ethdev.h
>>>> +++ b/lib/librte_ethdev/rte_ethdev.h
>>>> @@ -1420,6 +1420,8 @@ struct rte_eth_rxq_info {
>>>>  	struct rte_eth_rxconf conf; /**< queue config parameters. */
>>>>  	uint8_t scattered_rx;       /**< scattered packets RX supported. */
>>>>  	uint16_t nb_desc;           /**< configured number of RXDs. */
>>>> +	/**< buffer size used for hardware when receive packets. */
>>>> +	uint16_t rx_buf_size;
>>>
>>> Is it the maximum supported Rx buffer by the HW?
>>> If yes, maybe max_rx_buf_size is better name?
>>
>> No, it is the Rx buffer size currently used by HW.
> 
> Doesn't it defined by the user? Using Rx queue mem-pool mbuf room size?
> 
> And it may be different per Rx queue....
There is no explicit configuration for the Rx buffer size. PMDs do as you said above and use the mbuf data size as the Rx buffer size, but this is not a defined rule; technically a PMD is free to select any size smaller than the mbuf data size as its Rx buffer size. This new field feeds the configured Rx buffer size back to the application (a usage sketch is appended at the end of this mail).

> 
>> IMHO, the structure rte_eth_rxq_info and associated query API are mainly
>> used to query HW configurations at runtime or after queue is
>> configured/setup. Therefore, the content of this structure should be the
>> current HW configuration.
> 
> It looks me more like capabilities...
> The one which define the current configuration is the user by the
> configuration APIs (after reading the capabilities).
> 
> I don't think we have here all the current configurations, so what is
> special in this one?
> 
> 
>>> Maybe document that 0 means - no limitation by HW?
>>
>> Yes, there is no need to fill this filed for HW that has no restrictions on it.
>> I'll add it in v4.
>>
>>> Must application read it in order to know if its datapath should handle
>>> multi-segment buffers?
>>
>> I think it's more appropriate to use scattered_rx to determine if
>> multi-segment buffers should be handled.
>>
>>>
>>> Maybe it will be good to force application to configure scatter when this
>>> field is valid and smaller than max_rx_pkt_len\max_lro.. (<= room size)...
> 
> Can you explain more what is the issue you came to solve?
> 
>>>
>>>> } __rte_cache_min_aligned;
>>>>
>>>> /**
>>
> 
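
For illustration only (not part of the patch): a minimal application-side
sketch of how the new field could be read back after queue setup, assuming
the v3 field name rx_buf_size and the existing rte_eth_rx_queue_info_get()
API. The helper check_rx_buf() and its parameters are hypothetical, and 0 is
taken as "no limit reported by the PMD", as discussed above.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: query the Rx buffer size the PMD reports for a queue
 * and flag when a single Rx buffer cannot hold the largest expected frame,
 * i.e. when the datapath (or DEV_RX_OFFLOAD_SCATTER) must handle
 * multi-segment mbufs. rx_buf_size == 0 means no limit reported.
 */
static int
check_rx_buf(uint16_t port_id, uint16_t queue_id, uint32_t max_rx_pkt_len)
{
	struct rte_eth_rxq_info qinfo;
	int ret;

	ret = rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo);
	if (ret != 0)
		return ret;

	if (qinfo.rx_buf_size != 0 && qinfo.rx_buf_size < max_rx_pkt_len)
		printf("port %u queue %u: rx_buf_size %u < max_rx_pkt_len %u, "
		       "scattered Rx needed\n",
		       port_id, queue_id, qinfo.rx_buf_size, max_rx_pkt_len);

	return 0;
}

As noted in the thread, scattered_rx remains the field that tells the
datapath whether to expect multi-segment buffers; the check above only
illustrates reading the configured buffer size back.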