From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id B2F9C43BBA;
	Thu,  7 Mar 2024 12:33:35 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 782224067E;
	Thu,  7 Mar 2024 12:33:35 +0100 (CET)
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by mails.dpdk.org (Postfix) with ESMTP id AF32340272
 for <dev@dpdk.org>; Thu,  7 Mar 2024 12:33:33 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1709811213;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=TLGvTfQosBfdWnbUtWAym4B4vAPu9ZikVl7T3k92Bxo=;
 b=Ge4CndrAifl8d1pyYl1RglckSTfVzj3nvtJ14aOc9eRntB7ZWoxAMyAfoKIuoX0DFFfQZs
 wLCM6oYHRy8wg+12mbiXS8+fzbH0ItjgxaS3aBgiOgfOhGv7Mr0b7F/ekXjGAABVNAaIiN
 Tl/xRvU37FZXwYyTVW2WHUdBvZGVEUU=
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-467-7L8Cz6xLOqmgDuMa4cEcyw-1; Thu, 07 Mar 2024 06:33:30 -0500
X-MC-Unique: 7L8Cz6xLOqmgDuMa4cEcyw-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.rdu2.redhat.com
 [10.11.54.4])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C3CB3185A784;
 Thu,  7 Mar 2024 11:33:29 +0000 (UTC)
Received: from dmarchan.redhat.com (unknown [10.45.225.66])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2888C2022EDB;
 Thu,  7 Mar 2024 11:33:29 +0000 (UTC)
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: bluca@debian.org
Subject: [PATCH] test/lcores: reduce CPU consumption
Date: Thu,  7 Mar 2024 12:33:24 +0100
Message-ID: <20240307113324.845309-1-david.marchand@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="US-ASCII"; x-default=true
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Busy looping on RTE_MAX_LCORE threads consumes too much CPU in some CI
environments running the fast-test testsuite.
Ask for a reschedule with sched_yield() at the thread synchronisation
points instead of spinning.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Note: this is a quick patch with no validation beyond running fine on
my laptop.
Luca, could you check whether it helps in your CI?
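
For reference, here is a minimal standalone sketch of the wait pattern
the patch switches to. The wait_for_release() helper is hypothetical and
only illustrates the idea: keep polling the atomic counter, but yield
the CPU on every iteration instead of pure spinning.

#include <sched.h>
#include <stdint.h>

/* Illustration only: wait until the control thread resets
 * *registered_count to 0. sched_yield() gives up the timeslice on each
 * poll so co-scheduled threads can make progress, rather than burning a
 * full core in the loop.
 */
static void
wait_for_release(uint32_t *registered_count)
{
	while (__atomic_load_n(registered_count, __ATOMIC_ACQUIRE) != 0)
		sched_yield();
}

This keeps the test loops trivial (no condition variables needed) while
avoiding many threads spinning at once on oversubscribed CI runners.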

---
 app/test/test_lcores.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/app/test/test_lcores.c b/app/test/test_lcores.c
index 22225a9fd3..7adc03d3da 100644
--- a/app/test/test_lcores.c
+++ b/app/test/test_lcores.c
@@ -2,6 +2,7 @@
  * Copyright (c) 2020 Red Hat, Inc.
  */
 
+#include <sched.h>
 #include <string.h>
 
 #include <rte_common.h>
@@ -43,7 +44,7 @@ static uint32_t thread_loop(void *arg)
 
 	/* Wait for release from the control thread. */
 	while (__atomic_load_n(t->registered_count, __ATOMIC_ACQUIRE) != 0)
-		;
+		sched_yield();
 	rte_thread_unregister();
 	lcore_id = rte_lcore_id();
 	if (lcore_id != LCORE_ID_ANY) {
@@ -85,7 +86,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
 	/* Wait all non-EAL threads to register. */
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 
 	/* We managed to create the max number of threads, let's try to create
 	 * one more. This will allow one more check.
@@ -101,7 +102,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
 		printf("non-EAL threads count: %u\n", non_eal_threads_count);
 		while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 				non_eal_threads_count)
-			;
+			sched_yield();
 	}
 
 skip_lcore_any:
@@ -267,7 +268,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 1 ||
 			l[1].init != eal_threads_count + 1) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
@@ -290,7 +291,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 2 ||
 			l[1].init != eal_threads_count + 2) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
-- 
2.44.0