From: Xueming Li
To: Wenjun Wu
CC: Qi Zhang, dpdk stable
Subject: patch 'net/idpf: fix Rx data buffer size' has been queued to stable release 22.11.3
Date: Sun, 25 Jun 2023 14:35:24 +0800
Message-ID: <20230625063544.11183-107-xuemingl@nvidia.com>
In-Reply-To: <20230625063544.11183-1-xuemingl@nvidia.com>
References: <20230625063544.11183-1-xuemingl@nvidia.com>
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 22.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/27/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch.
If there were code changes for rebasing (ie: not only metadata diffs),
please double check that the rebase was correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=c86c1efd2f380e03010cf3f47306b2d8939bf119

Thanks.

Xueming Li

---
From c86c1efd2f380e03010cf3f47306b2d8939bf119 Mon Sep 17 00:00:00 2001
From: Wenjun Wu
Date: Fri, 14 Apr 2023 13:47:43 +0800
Subject: [PATCH] net/idpf: fix Rx data buffer size
Cc: Xueming Li

[ upstream commit 4fc6c4d96dacc0af9733a0474061328be14f9a52 ]

This patch does two fixes.
1. According to hardware spec, the data buffer size should not be
greater than 16K - 128.
2. Align data buffer size to 128.

Fixes: 9c47c29739a1 ("net/idpf: add Rx queue setup")

Signed-off-by: Wenjun Wu
Acked-by: Qi Zhang
---
 drivers/net/idpf/idpf_rxtx.c | 6 ++++--
 drivers/net/idpf/idpf_rxtx.h | 3 +++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 1cbd5be8cc..ceb34d4d32 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -374,7 +374,8 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
 	bufq->adapter = adapter;
 
 	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
-	bufq->rx_buf_len = len;
+	bufq->rx_buf_len = RTE_ALIGN_FLOOR(len, (1 << IDPF_RLAN_CTX_DBUF_S));
+	bufq->rx_buf_len = RTE_MIN(bufq->rx_buf_len, IDPF_RX_MAX_DATA_BUF_SIZE);
 
 	/* Allocate the software ring. */
 	len = nb_desc + IDPF_RX_MAX_BURST;
@@ -473,7 +474,8 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->offloads = offloads;
 
 	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-	rxq->rx_buf_len = len;
+	rxq->rx_buf_len = RTE_ALIGN_FLOOR(len, (1 << IDPF_RLAN_CTX_DBUF_S));
+	rxq->rx_buf_len = RTE_MIN(rxq->rx_buf_len, IDPF_RX_MAX_DATA_BUF_SIZE);
 
 	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
 	ring_size = RTE_ALIGN(len *
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 730dc64ebc..1c5b5b7c38 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -6,6 +6,9 @@
 #define _IDPF_RXTX_H_
 
 #include "idpf_ethdev.h"
+#define IDPF_RLAN_CTX_DBUF_S 7
+#define IDPF_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128)
+
 
 /* MTS */
 #define GLTSYN_CMD_SYNC_0_0 (PF_TIMESYNC_BASE + 0x0)
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-06-25 14:32:01.331250200 +0800
+++ 0106-net-idpf-fix-Rx-data-buffer-size.patch	2023-06-25 14:31:58.555773900 +0800
@@ -1 +1 @@
-From 4fc6c4d96dacc0af9733a0474061328be14f9a52 Mon Sep 17 00:00:00 2001
+From c86c1efd2f380e03010cf3f47306b2d8939bf119 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 4fc6c4d96dacc0af9733a0474061328be14f9a52 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -18,2 +20,2 @@
- drivers/common/idpf/idpf_common_rxtx.h | 3 +++
- drivers/net/idpf/idpf_rxtx.c | 6 ++++--
+ drivers/net/idpf/idpf_rxtx.c | 6 ++++--
+ drivers/net/idpf/idpf_rxtx.h | 3 +++
@@ -22,14 +23,0 @@
-diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
-index 11260d07f9..6cb83fc0a6 100644
---- a/drivers/common/idpf/idpf_common_rxtx.h
-+++ b/drivers/common/idpf/idpf_common_rxtx.h
-@@ -34,6 +34,9 @@
- #define IDPF_MAX_TSO_FRAME_SIZE 262143
- #define IDPF_TX_MAX_MTU_SEG 10
-
-+#define IDPF_RLAN_CTX_DBUF_S 7
-+#define IDPF_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128)
-+
- #define IDPF_TX_CKSUM_OFFLOAD_MASK ( \
- 	RTE_MBUF_F_TX_IP_CKSUM | \
- 	RTE_MBUF_F_TX_L4_MASK | \
@@ -37 +25 @@
-index 414f9a37f6..3e3d81ca6d 100644
+index 1cbd5be8cc..ceb34d4d32 100644
@@ -40 +28 @@
-@@ -155,7 +155,8 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+@@ -374,7 +374,8 @@ idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
@@ -48 +36 @@
-	/* Allocate a little more to support bulk allocate. */
+	/* Allocate the software ring. */
@@ -50,2 +38,2 @@
-@@ -275,7 +276,8 @@ idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-	rxq->offloads = idpf_rx_offload_convert(offloads);
+@@ -473,7 +474,8 @@ idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+	rxq->offloads = offloads;
@@ -58,2 +46,16 @@
-	/* Allocate a little more to support bulk allocate. */
-	len = nb_desc + IDPF_RX_MAX_BURST;
+	len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+	ring_size = RTE_ALIGN(len *
+diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
+index 730dc64ebc..1c5b5b7c38 100644
+--- a/drivers/net/idpf/idpf_rxtx.h
++++ b/drivers/net/idpf/idpf_rxtx.h
+@@ -6,6 +6,9 @@
+ #define _IDPF_RXTX_H_
+
+ #include "idpf_ethdev.h"
++#define IDPF_RLAN_CTX_DBUF_S 7
++#define IDPF_RX_MAX_DATA_BUF_SIZE (16 * 1024 - 128)
++
+
+ /* MTS */
+ #define GLTSYN_CMD_SYNC_0_0 (PF_TIMESYNC_BASE + 0x0)