From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 69542A046B;
	Tue, 23 Jul 2019 03:04:21 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 507801BFFA;
	Tue, 23 Jul 2019 03:04:21 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
	by dpdk.org (Postfix) with ESMTP id 3EB041BFE6;
	Tue, 23 Jul 2019 03:04:20 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE2 (envelope-from yskoh@mellanox.com)
	with ESMTPS (AES256-SHA encrypted); 23 Jul 2019 04:04:17 +0300
Received: from scfae-sc-2.mti.labs.mlnx (scfae-sc-2.mti.labs.mlnx [10.101.0.96])
	by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x6N11HgS026580;
	Tue, 23 Jul 2019 04:04:16 +0300
From: Yongseok Koh
To: Yongseok Koh
Cc: Shahaf Shuler, dpdk stable
Date: Mon, 22 Jul 2019 18:01:09 -0700
Message-Id: <20190723010115.6446-102-yskoh@mellanox.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190723010115.6446-1-yskoh@mellanox.com>
References: <20190723010115.6446-1-yskoh@mellanox.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'net/mlx5: fix max number of queues for NEON Tx' has been queued to LTS release 17.11.7
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to LTS release 17.11.7

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objection by 07/27/19. So please shout if anyone
has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Thanks.

Yongseok

---
>From 6f72d0091807e85594022a80f1225602be6674af Mon Sep 17 00:00:00 2001
From: Yongseok Koh
Date: Tue, 30 Apr 2019 18:37:17 -0700
Subject: [PATCH] net/mlx5: fix max number of queues for NEON Tx

[ upstream commit 9cc42c58931a428a94e9adf7019eb97faa8fddb1 ]

BlueField SmartNIC has 0xa2d2 as PCI device ID on both ARM and x86
host. On ARM side, Tx inlining need not be used as PCI bandwidth is not
bottleneck. Vectorized Tx can still be used up to 16 queues. For other
archs (e.g., x86), keep using the default value.

Fixes: 09d8b41699bb ("net/mlx5: make vectorized Tx threshold configurable")

Signed-off-by: Yongseok Koh
Acked-by: Shahaf Shuler
---
 drivers/net/mlx5/mlx5_defs.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 1de3bdc417..695c982f13 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -91,10 +91,11 @@
 /* Default maximum number of Tx queues for vectorized Tx. */
 #if defined(RTE_ARCH_ARM64)
 #define MLX5_VPMD_MAX_TXQS 8
+#define MLX5_VPMD_MAX_TXQS_BLUEFIELD 16
 #else
 #define MLX5_VPMD_MAX_TXQS 4
+#define MLX5_VPMD_MAX_TXQS_BLUEFIELD MLX5_VPMD_MAX_TXQS
 #endif
-#define MLX5_VPMD_MAX_TXQS_BLUEFIELD 16
 
 /* Threshold of buffer replenishment for vectorized Rx. */
 #define MLX5_VPMD_RXQ_RPLNSH_THRESH(n) \
-- 
2.21.0

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2019-07-22 17:55:11.973561912 -0700
+++ 0102-net-mlx5-fix-max-number-of-queues-for-NEON-Tx.patch	2019-07-22 17:55:06.514479000 -0700
@@ -1,15 +1,16 @@
-From 9cc42c58931a428a94e9adf7019eb97faa8fddb1 Mon Sep 17 00:00:00 2001
+From 6f72d0091807e85594022a80f1225602be6674af Mon Sep 17 00:00:00 2001
 From: Yongseok Koh
 Date: Tue, 30 Apr 2019 18:37:17 -0700
 Subject: [PATCH] net/mlx5: fix max number of queues for NEON Tx
 
+[ upstream commit 9cc42c58931a428a94e9adf7019eb97faa8fddb1 ]
+
 BlueField SmartNIC has 0xa2d2 as PCI device ID on both ARM and x86
 host. On ARM side, Tx inlining need not be used as PCI bandwidth is not
 bottleneck. Vectorized Tx can still be used up to 16 queues. For other
 archs (e.g., x86), keep using the default value.
 
 Fixes: 09d8b41699bb ("net/mlx5: make vectorized Tx threshold configurable")
-Cc: stable@dpdk.org
 
 Signed-off-by: Yongseok Koh
 Acked-by: Shahaf Shuler
@@ -18,10 +19,10 @@
  1 file changed, 2 insertions(+), 1 deletion(-)
 
 diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
-index 69b6960e94..13801a5c2d 100644
+index 1de3bdc417..695c982f13 100644
 --- a/drivers/net/mlx5/mlx5_defs.h
 +++ b/drivers/net/mlx5/mlx5_defs.h
-@@ -63,10 +63,11 @@
+@@ -91,10 +91,11 @@
  /* Default maximum number of Tx queues for vectorized Tx. */
  #if defined(RTE_ARCH_ARM64)
  #define MLX5_VPMD_MAX_TXQS 8
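
As added context for the change above, here is a minimal, self-contained C sketch
of how the two macros touched by this patch could be used to pick the vectorized
Tx queue cap from the PCI device ID. It is only an illustration under stated
assumptions, not the actual mlx5 PMD code: the helper max_vec_txqs_for_device()
and the constant BLUEFIELD_PCI_DEV_ID are hypothetical names, while the 0xa2d2
device ID and the macro values are taken from the commit message and diff above.

/*
 * Illustrative sketch only -- not the actual mlx5 PMD logic.
 * Shows how MLX5_VPMD_MAX_TXQS / MLX5_VPMD_MAX_TXQS_BLUEFIELD from the
 * patch above could be used to cap vectorized Tx by PCI device ID.
 * BLUEFIELD_PCI_DEV_ID and max_vec_txqs_for_device() are hypothetical
 * names; 0xa2d2 comes from the commit message.
 */
#include <stdint.h>
#include <stdio.h>

/* Macro values as defined by the patch in drivers/net/mlx5/mlx5_defs.h. */
#if defined(RTE_ARCH_ARM64)
#define MLX5_VPMD_MAX_TXQS 8
#define MLX5_VPMD_MAX_TXQS_BLUEFIELD 16
#else
#define MLX5_VPMD_MAX_TXQS 4
#define MLX5_VPMD_MAX_TXQS_BLUEFIELD MLX5_VPMD_MAX_TXQS
#endif

#define BLUEFIELD_PCI_DEV_ID 0xa2d2 /* BlueField SmartNIC, per commit message */

/* Return the maximum number of Tx queues for which vectorized Tx is used. */
static unsigned int
max_vec_txqs_for_device(uint16_t pci_device_id)
{
	if (pci_device_id == BLUEFIELD_PCI_DEV_ID)
		return MLX5_VPMD_MAX_TXQS_BLUEFIELD;
	return MLX5_VPMD_MAX_TXQS;
}

int
main(void)
{
	/* On an ARM64 build: 16 for BlueField, 8 otherwise; 4 for both elsewhere. */
	printf("BlueField (0xa2d2): up to %u vectorized Tx queues\n",
	       max_vec_txqs_for_device(BLUEFIELD_PCI_DEV_ID));
	printf("Other device      : up to %u vectorized Tx queues\n",
	       max_vec_txqs_for_device(0x1234)); /* placeholder non-BlueField ID */
	return 0;
}

In the driver itself this kind of selection is wired into the device
configuration path rather than a standalone helper; the sketch only shows the
intent of the fix: on ARM64 a BlueField port may use vectorized Tx with up to
16 queues, other ARM64 ports up to 8, and other architectures keep the default
of 4.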