From mboxrd@z Thu Jan 1 00:00:00 1970
From: Harry van Haaren
To: dev@dpdk.org
Cc: david.marchand@redhat.com, dpdklab@iol.unh.edu, ci@dpdk.org,
 Honnappa.Nagarahalli@arm.com, mb@smartsharesystems.com,
 mattias.ronnblom@ericsson.com, thomas@monjalon.net,
 Harry van Haaren
Subject: [PATCH v2] test/service: fix spurious failures by extending timeout
Date: Thu, 6 Oct 2022 08:28:13 +0000
Message-Id: <20221006082813.579255-1-harry.van.haaren@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221006081729.578475-1-harry.van.haaren@intel.com>
References: <20221006081729.578475-1-harry.van.haaren@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This commit extends the timeout for service_may_be_active() from 100ms
to 1000ms. Local testing on an idle system and on a loaded system
(compiling DPDK with all cores) shows the wait always completes after
just 1 ms.

The wait time for a service lcore to finish is also extended from 100ms
to 1000ms.

The same timeout-waiting code was duplicated in two tests; it is now
refactored into a standalone function to avoid the duplication.

Reported-by: David Marchand
Suggested-by: Mattias Ronnblom
Signed-off-by: Harry van Haaren

---

Apologies for the quick respin noise; only the first diff section is
new, with no changes to the rest of the patch.

v2:
- v1 addressed only the test case 15 issue; v2 also addresses test case
  5, which has a service-lcore wait code path.
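For readers skimming the mail before the diff, here is a minimal sketch of
the helper the refactor introduces and of the call pattern both tests end up
with. It simply mirrors the diff below; SERVICE_DELAY is the test's existing
per-iteration delay (1 ms, given the 100ms -> 1000ms figures above), and
rte_service_may_be_active()/rte_delay_ms() are the DPDK APIs already used by
the test.

	/* Sketch: wait for a stopped service to report inactive.
	 * Returns 0 once inactive, non-zero if still active after ~1000 ms. */
	static int
	service_ensure_stopped_with_timeout(uint32_t sid)
	{
		int32_t timeout_ms = 1000;
		int i;

		for (i = 0; i < timeout_ms; i++) {
			if (!rte_service_may_be_active(sid))
				break;
			rte_delay_ms(SERVICE_DELAY);
		}

		return rte_service_may_be_active(sid);
	}

	/* Callers stop the service, then assert on the helper's result: */
	TEST_ASSERT_EQUAL(0, rte_service_runstate_set(sid, 0),
			"Error: Service stop returned non-zero");
	TEST_ASSERT_EQUAL(0, service_ensure_stopped_with_timeout(sid),
			"Error: Service not stopped after timeout period.");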
---
 app/test/test_service_cores.c | 47 ++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 25 deletions(-)

diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c
index 359b6dcd8b..4b147bd64c 100644
--- a/app/test/test_service_cores.c
+++ b/app/test/test_service_cores.c
@@ -123,14 +123,14 @@ unregister_all(void)
 	return TEST_SUCCESS;
 }
 
-/* Wait until service lcore not active, or for 100x SERVICE_DELAY */
+/* Wait until service lcore not active, or for N times SERVICE_DELAY */
 static void
 wait_slcore_inactive(uint32_t slcore_id)
 {
 	int i;
 
 	for (i = 0; rte_service_lcore_may_be_active(slcore_id) == 1 &&
-			i < 100; i++)
+			i < 1000; i++)
 		rte_delay_ms(SERVICE_DELAY);
 }
 
@@ -921,12 +921,26 @@ service_lcore_start_stop(void)
 	return unregister_all();
 }
 
+static int
+service_ensure_stopped_with_timeout(uint32_t sid)
+{
+	/* give the service time to stop running */
+	int32_t timeout_ms = 1000;
+	int i;
+	for (i = 0; i < timeout_ms; i++) {
+		if (!rte_service_may_be_active(sid))
+			break;
+		rte_delay_ms(SERVICE_DELAY);
+	}
+
+	return rte_service_may_be_active(sid);
+}
+
 /* stop a service and wait for it to become inactive */
 static int
 service_may_be_active(void)
 {
 	const uint32_t sid = 0;
-	int i;
 
 	/* expected failure cases */
 	TEST_ASSERT_EQUAL(-EINVAL, rte_service_may_be_active(10000),
@@ -946,19 +960,11 @@ service_may_be_active(void)
 	TEST_ASSERT_EQUAL(1, service_lcore_running_check(),
 			"Service core expected to poll service but it didn't");
 
-	/* stop the service */
+	/* stop the service, and wait for not-active with timeout */
 	TEST_ASSERT_EQUAL(0, rte_service_runstate_set(sid, 0),
 			"Error: Service stop returned non-zero");
-
-	/* give the service 100ms to stop running */
-	for (i = 0; i < 100; i++) {
-		if (!rte_service_may_be_active(sid))
-			break;
-		rte_delay_ms(SERVICE_DELAY);
-	}
-
-	TEST_ASSERT_EQUAL(0, rte_service_may_be_active(sid),
-			"Error: Service not stopped after 100ms");
+	TEST_ASSERT_EQUAL(0, service_ensure_stopped_with_timeout(sid),
+			"Error: Service not stopped after timeout period.");
 
 	return unregister_all();
 }
@@ -972,7 +978,6 @@ service_active_two_cores(void)
 		return TEST_SKIPPED;
 
 	const uint32_t sid = 0;
-	int i;
 
 	uint32_t lcore = rte_get_next_lcore(/* start core */ -1,
 					    /* skip main */ 1,
@@ -1002,16 +1007,8 @@ service_active_two_cores(void)
 	/* stop the service */
 	TEST_ASSERT_EQUAL(0, rte_service_runstate_set(sid, 0),
 			"Error: Service stop returned non-zero");
-
-	/* give the service 100ms to stop running */
-	for (i = 0; i < 100; i++) {
-		if (!rte_service_may_be_active(sid))
-			break;
-		rte_delay_ms(SERVICE_DELAY);
-	}
-
-	TEST_ASSERT_EQUAL(0, rte_service_may_be_active(sid),
-			"Error: Service not stopped after 100ms");
+	TEST_ASSERT_EQUAL(0, service_ensure_stopped_with_timeout(sid),
+			"Error: Service not stopped after timeout period.");
 
 	return unregister_all();
 }
-- 
2.34.1