From mboxrd@z Thu Jan 1 00:00:00 1970
From: Naga Harish K S V
To: erik.g.carrillo@intel.com
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH v3 3/4] timer: fix function to stop all timers
Date: Thu, 11 Aug 2022 10:37:17 -0500
Message-Id: <20220811153717.3992516-1-s.v.naga.harish.k@intel.com>
In-Reply-To: <20220810070958.3111119-1-s.v.naga.harish.k@intel.com>
References: <20220810070958.3111119-1-s.v.naga.harish.k@intel.com>
List-Id: DPDK patches and discussions

rte_timer_stop_all() can deadlock because it tries to acquire the same
spinlock twice, in a nested manner. When the lcore stopping a timer is
not the lcore that owns it, timer_del() acquires the owner's timer list
lock even if local_is_locked is true. Since rte_timer_stop_all() has
already taken that lock, the thread hangs. This patch removes the
nested lock acquisition: rte_timer_stop_all() no longer takes the
per-lcore list lock around the walk, and timer_del() acquires it as
needed.
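
For context, a minimal standalone sketch of the self-deadlock pattern
described above (not taken from rte_timer.c; the helper names are
illustrative only). rte_spinlock_t is not recursive, so a second
acquisition by the thread that already holds the lock spins forever:

/* Illustrative sketch only: nested acquisition of a non-recursive
 * rte_spinlock_t by the same thread never returns.
 */
#include <rte_spinlock.h>

static rte_spinlock_t list_lock = RTE_SPINLOCK_INITIALIZER;

/* stand-in for timer_del(): takes the owning lcore's list lock when the
 * caller runs on a different lcore than the timer's owner */
static void
del_timer_from_list(void)
{
        rte_spinlock_lock(&list_lock);   /* second acquisition: spins forever */
        /* ... unlink the timer ... */
        rte_spinlock_unlock(&list_lock);
}

/* stand-in for the pre-fix rte_timer_stop_all() loop body */
static void
stop_all_old_behaviour(void)
{
        rte_spinlock_lock(&list_lock);   /* first acquisition */
        del_timer_from_list();           /* deadlock: lock already held */
        rte_spinlock_unlock(&list_lock); /* never reached */
}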
Fixes: 821c51267bcd63a ("timer: add function to stop all timers in a list")
Cc: stable@dpdk.org

Signed-off-by: Naga Harish K S V
---
 lib/timer/rte_timer.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/lib/timer/rte_timer.c b/lib/timer/rte_timer.c
index 9994813d0d..85d67573eb 100644
--- a/lib/timer/rte_timer.c
+++ b/lib/timer/rte_timer.c
@@ -580,7 +580,7 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
 }
 
 static int
-__rte_timer_stop(struct rte_timer *tim, int local_is_locked,
+__rte_timer_stop(struct rte_timer *tim,
                  struct rte_timer_data *timer_data)
 {
         union rte_timer_status prev_status, status;
@@ -602,7 +602,7 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
 
         /* remove it from list */
         if (prev_status.state == RTE_TIMER_PENDING) {
-                timer_del(tim, prev_status, local_is_locked, priv_timer);
+                timer_del(tim, prev_status, 0, priv_timer);
                 __TIMER_STAT_ADD(priv_timer, pending, -1);
         }
 
@@ -631,7 +631,7 @@ rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
 
         TIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, timer_data, -EINVAL);
 
-        return __rte_timer_stop(tim, 0, timer_data);
+        return __rte_timer_stop(tim, timer_data);
 }
 
 /* loop until rte_timer_stop() succeed */
@@ -987,21 +987,16 @@ rte_timer_stop_all(uint32_t timer_data_id, unsigned int *walk_lcores,
                 walk_lcore = walk_lcores[i];
                 priv_timer = &timer_data->priv_timer[walk_lcore];
 
-                rte_spinlock_lock(&priv_timer->list_lock);
-
                 for (tim = priv_timer->pending_head.sl_next[0];
                      tim != NULL;
                      tim = next_tim) {
                         next_tim = tim->sl_next[0];
 
-                        /* Call timer_stop with lock held */
-                        __rte_timer_stop(tim, 1, timer_data);
+                        __rte_timer_stop(tim, timer_data);
 
                         if (f)
                                 f(tim, f_arg);
                 }
-
-                rte_spinlock_unlock(&priv_timer->list_lock);
         }
 
         return 0;
-- 
2.25.1
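
As a usage note, below is a hedged sketch of draining an alternate timer
list with rte_timer_stop_all() after this fix. Only the timer_data_id and
walk_lcores parameters are visible in the hunk header above; the
lcore-count and callback parameters, and the callback's
void (struct rte_timer *, void *) prototype, are assumptions based on my
reading of rte_timer.h and should be checked against the installed header:

/* Hedged usage sketch: walk every worker lcore's pending list, stop each
 * timer, and count the stopped timers from the callback. Assumes
 * rte_timer_stop_all(timer_data_id, walk_lcores, nb_walk_lcores, f, f_arg);
 * verify the prototype in rte_timer.h.
 */
#include <rte_lcore.h>
#include <rte_timer.h>

/* matches the f(tim, f_arg) call in the loop shown in the diff above */
static void
count_stopped(struct rte_timer *tim, void *arg)
{
        (void)tim;
        (*(unsigned int *)arg)++;
}

int
stop_all_example(uint32_t timer_data_id)
{
        unsigned int walk_lcores[RTE_MAX_LCORE];
        unsigned int nb_lcores = 0, stopped = 0, lcore_id;

        /* collect the worker lcores whose pending lists should be walked */
        RTE_LCORE_FOREACH_WORKER(lcore_id)
                walk_lcores[nb_lcores++] = lcore_id;

        return rte_timer_stop_all(timer_data_id, walk_lcores, nb_lcores,
                                  count_stopped, &stopped);
}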