From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: bluca@debian.org
Subject:
[PATCH v2] test/lcores: reduce cpu consumption
Date: Thu, 7 Mar 2024 15:16:06 +0100
Message-ID: <20240307141608.1450695-1-david.marchand@redhat.com>
In-Reply-To: <20240307113324.845309-1-david.marchand@redhat.com>
References: <20240307113324.845309-1-david.marchand@redhat.com>

Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
systems running the fast-test testsuite.

Ask for a reschedule at the threads' synchronisation points.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
---
Changes since v1:
- fix build with mingw,

---
 app/test/test_lcores.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/app/test/test_lcores.c b/app/test/test_lcores.c
index 22225a9fd3..08c4e8dfba 100644
--- a/app/test/test_lcores.c
+++ b/app/test/test_lcores.c
@@ -2,7 +2,9 @@
  * Copyright (c) 2020 Red Hat, Inc.
  */
 
+#include <sched.h>
 #include <string.h>
+#include <unistd.h>
 
 #include <rte_common.h>
 #include <rte_lcore.h>
@@ -11,6 +13,14 @@
 
 #include "test.h"
 
+#ifndef _POSIX_PRIORITY_SCHEDULING
+/* sched_yield(2):
+ * POSIX systems on which sched_yield() is available define
+ * _POSIX_PRIORITY_SCHEDULING in <unistd.h>.
+ */
+#define sched_yield()
+#endif
+
 struct thread_context {
 	enum { Thread_INIT, Thread_ERROR, Thread_DONE } state;
 	bool lcore_id_any;
@@ -43,7 +53,7 @@ static uint32_t thread_loop(void *arg)
 
 	/* Wait for release from the control thread.
	 */
 	while (__atomic_load_n(t->registered_count, __ATOMIC_ACQUIRE) != 0)
-		;
+		sched_yield();
 	rte_thread_unregister();
 	lcore_id = rte_lcore_id();
 	if (lcore_id != LCORE_ID_ANY) {
@@ -85,7 +95,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
 	/* Wait all non-EAL threads to register. */
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 
 	/* We managed to create the max number of threads, let's try to create
 	 * one more. This will allow one more check.
@@ -101,7 +111,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
 		printf("non-EAL threads count: %u\n", non_eal_threads_count);
 		while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 				non_eal_threads_count)
-			;
+			sched_yield();
 	}
 
 skip_lcore_any:
@@ -267,7 +277,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 1 ||
 			l[1].init != eal_threads_count + 1) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
@@ -290,7 +300,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
 	non_eal_threads_count++;
 	while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
 			non_eal_threads_count)
-		;
+		sched_yield();
 	if (l[0].init != eal_threads_count + 2 ||
 			l[1].init != eal_threads_count + 2) {
 		printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
-- 
2.44.0