From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Hernan Vargas
CC: Maxime Coquelin, dpdk stable
Subject: patch 'app/bbdev: fix interrupt tests' has been queued to stable release 23.11.2
Date: Mon, 12 Aug 2024 20:48:02 +0800
Message-ID: <20240812125035.389667-6-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240812125035.389667-1-xuemingl@nvidia.com>
References: <20240712110153.309690-23-xuemingl@nvidia.com> <20240812125035.389667-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 23.11.2

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 08/14/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c343cb088f5f741f234eb672f736f31f42df99b2

Thanks.

Xueming Li

---
>From c343cb088f5f741f234eb672f736f31f42df99b2 Mon Sep 17 00:00:00 2001
From: Hernan Vargas
Date: Mon, 24 Jun 2024 08:02:31 -0700
Subject: [PATCH] app/bbdev: fix interrupt tests
Cc: Xueming Li

[ upstream commit fdcee665c5066cd1300d08b85d159ccabf9f3657 ]

Fix possible error with regards to setting the burst size from the
enqueue thread.
Fixes: b2e2aec3239e ("app/bbdev: enhance interrupt test")

Signed-off-by: Hernan Vargas
Reviewed-by: Maxime Coquelin
---
 app/test-bbdev/test_bbdev_perf.c | 98 ++++++++++++++++----------------
 1 file changed, 49 insertions(+), 49 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index d9267bb91f..5c1755ae0d 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -3408,15 +3408,6 @@ throughput_intr_lcore_ldpc_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_dec_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3426,6 +3417,15 @@ throughput_intr_lcore_ldpc_dec(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_dec_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3500,14 +3500,6 @@ throughput_intr_lcore_dec(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(num_to_enq != enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3517,6 +3509,14 @@ throughput_intr_lcore_dec(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_dec_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(num_to_enq != enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3586,14 +3586,6 @@ throughput_intr_lcore_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3603,6 +3595,14 @@ throughput_intr_lcore_enc(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_enc_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3674,15 +3674,6 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_ldpc_enc_ops(
-					tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3692,6 +3683,15 @@ throughput_intr_lcore_ldpc_enc(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_ldpc_enc_ops(
+					tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3763,14 +3763,6 @@ throughput_intr_lcore_fft(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
-					queue_id, &ops[enqueued],
-					num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3780,6 +3772,14 @@ throughput_intr_lcore_fft(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_fft_ops(tp->dev_id,
+					queue_id, &ops[enqueued],
+					num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
@@ -3846,13 +3846,6 @@ throughput_intr_lcore_mldts(void *arg)
 		if (unlikely(num_to_process - enqueued < num_to_enq))
 			num_to_enq = num_to_process - enqueued;
 
-		enq = 0;
-		do {
-			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
-					queue_id, &ops[enqueued], num_to_enq);
-		} while (unlikely(enq != num_to_enq));
-		enqueued += enq;
-
 		/* Write to thread burst_sz current number of enqueued
 		 * descriptors. It ensures that proper number of
 		 * descriptors will be dequeued in callback
@@ -3862,6 +3855,13 @@ throughput_intr_lcore_mldts(void *arg)
 		 */
 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
 
+		enq = 0;
+		do {
+			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
+					queue_id, &ops[enqueued], num_to_enq);
+		} while (unlikely(enq != num_to_enq));
+		enqueued += enq;
+
 		/* Wait until processing of previous batch is
 		 * completed
 		 */
-- 
2.34.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2024-08-12 20:44:02.816614136 +0800
+++ 0005-app-bbdev-fix-interrupt-tests.patch	2024-08-12 20:44:01.885069253 +0800
@@ -1 +1 @@
-From fdcee665c5066cd1300d08b85d159ccabf9f3657 Mon Sep 17 00:00:00 2001
+From c343cb088f5f741f234eb672f736f31f42df99b2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit fdcee665c5066cd1300d08b85d159ccabf9f3657 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 9841464922..20cd8df19b 100644
+index d9267bb91f..5c1755ae0d 100644
@@ -22 +24 @@
-@@ -3419,15 +3419,6 @@ throughput_intr_lcore_ldpc_dec(void *arg)
+@@ -3408,15 +3408,6 @@ throughput_intr_lcore_ldpc_dec(void *arg)
@@ -38,3 +40,3 @@
-@@ -3438,6 +3429,15 @@ throughput_intr_lcore_ldpc_dec(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3426,6 +3417,15 @@ throughput_intr_lcore_ldpc_dec(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
@@ -54 +56 @@
-@@ -3514,14 +3514,6 @@ throughput_intr_lcore_dec(void *arg)
+@@ -3500,14 +3500,6 @@ throughput_intr_lcore_dec(void *arg)
@@ -69,3 +71,3 @@
-@@ -3532,6 +3524,14 @@ throughput_intr_lcore_dec(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3517,6 +3509,14 @@ throughput_intr_lcore_dec(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
@@ -84 +86 @@
-@@ -3603,14 +3603,6 @@ throughput_intr_lcore_enc(void *arg)
+@@ -3586,14 +3586,6 @@ throughput_intr_lcore_enc(void *arg)
@@ -99,3 +101,3 @@
-@@ -3621,6 +3613,14 @@ throughput_intr_lcore_enc(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3603,6 +3595,14 @@ throughput_intr_lcore_enc(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
@@ -114 +116 @@
-@@ -3694,15 +3694,6 @@ throughput_intr_lcore_ldpc_enc(void *arg)
+@@ -3674,15 +3674,6 @@ throughput_intr_lcore_ldpc_enc(void *arg)
@@ -130,3 +132,3 @@
-@@ -3713,6 +3704,15 @@ throughput_intr_lcore_ldpc_enc(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3692,6 +3683,15 @@ throughput_intr_lcore_ldpc_enc(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
@@ -146 +148 @@
-@@ -3786,14 +3786,6 @@ throughput_intr_lcore_fft(void *arg)
+@@ -3763,14 +3763,6 @@ throughput_intr_lcore_fft(void *arg)
@@ -161,3 +163,3 @@
-@@ -3804,6 +3796,14 @@ throughput_intr_lcore_fft(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3780,6 +3772,14 @@ throughput_intr_lcore_fft(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
@@ -176 +178 @@
-@@ -3872,13 +3872,6 @@ throughput_intr_lcore_mldts(void *arg)
+@@ -3846,13 +3846,6 @@ throughput_intr_lcore_mldts(void *arg)
@@ -190,3 +192,3 @@
-@@ -3889,6 +3882,13 @@ throughput_intr_lcore_mldts(void *arg)
- 		rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
- 				rte_memory_order_relaxed);
+@@ -3862,6 +3855,13 @@ throughput_intr_lcore_mldts(void *arg)
+ 		 */
+ 		__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);