From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dekel Peled <dekelp@mellanox.com>
To: john.mcnamara@intel.com, marko.kovacevic@intel.com, nhorman@tuxdriver.com,
	ajit.khaparde@broadcom.com, somnath.kotur@broadcom.com,
	anatoly.burakov@intel.com, xuanziyang2@huawei.com,
	cloud.wangxiaoyun@huawei.com, zhouguoyang@huawei.com,
	wenzhuo.lu@intel.com, konstantin.ananyev@intel.com, matan@mellanox.com,
	shahafs@mellanox.com, viacheslavo@mellanox.com, rmody@marvell.com,
	shshaikh@marvell.com, maxime.coquelin@redhat.com, tiwei.bie@intel.com,
	zhihong.wang@intel.com, yongwang@vmware.com, thomas@monjalon.net,
	ferruh.yigit@intel.com, arybchenko@solarflare.com,
	jingjing.wu@intel.com, bernard.iremonger@intel.com
Cc: dev@dpdk.org
Date: Thu, 7 Nov 2019 14:35:13 +0200
Subject: [dpdk-dev] [PATCH v4 2/3] net/mlx5: use API to set max LRO packet size

This patch implements use of the API for LRO aggregated packet max size.
Rx queue creation is updated to use the relevant configuration.
Documentation is updated accordingly.

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
---
 doc/guides/nics/mlx5.rst    | 2 ++
 drivers/net/mlx5/mlx5_rxq.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4f1093f..3b10daf 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -207,6 +207,8 @@ Limitations
 - KEEP_CRC offload cannot be supported with LRO.
 - The first mbuf length, without head-room, must be big enough to include the
   TCP header (122B).
+- Rx queue with LRO offload enabled, receiving a non-LRO packet, can forward
+  it with size limited to max LRO size, not to max RX packet length.
 
 Statistics
 ----------

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 9423e7b..c725e14 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1772,7 +1772,9 @@ struct mlx5_rxq_ctrl *
						dev->data->dev_conf.rxmode.offloads;
	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
	const int mprq_en = mlx5_check_mprq_support(dev) > 0;
-	unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	unsigned int max_rx_pkt_len = lro_on_queue ?
+			dev->data->dev_conf.rxmode.max_lro_pkt_size :
+			dev->data->dev_conf.rxmode.max_rx_pkt_len;
	unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
						RTE_PKTMBUF_HEADROOM;
	unsigned int max_lro_size = 0;
-- 
1.8.3.1
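
[Editor's sketch, not part of the patch: for context, this is how an application
could exercise the rxmode fields this patch reads (max_lro_pkt_size when LRO is
enabled, max_rx_pkt_len otherwise), assuming the usual rte_eth_dev_configure()
flow. The port id, queue counts and the 1518/9216 byte values are illustrative
assumptions, not taken from the patch.]

#include <rte_ethdev.h>

static int
configure_lro_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	/* Request TCP LRO on the port. */
	conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
	/* Limit applied to Rx queues without LRO (illustrative value). */
	conf.rxmode.max_rx_pkt_len = 1518;
	/* Cap for LRO-aggregated packets (illustrative value). */
	conf.rxmode.max_lro_pkt_size = 9216;

	/* One Rx queue and one Tx queue; error handling trimmed for brevity. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}

[With this configuration, the mlx5 Rx queue setup changed above would size its
buffers against max_lro_pkt_size on the LRO-enabled queue instead of
max_rx_pkt_len.]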