From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 2B4E8A00BE;
	Tue,  7 Jul 2020 13:14:38 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id EB3B91DDEC;
	Tue,  7 Jul 2020 13:14:37 +0200 (CEST)
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
	by dpdk.org (Postfix) with ESMTP id 56FC51DDEA;
	Tue,  7 Jul 2020 13:14:36 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C7FFB1FB;
	Tue,  7 Jul 2020 04:14:35 -0700 (PDT)
Received: from phil-VirtualBox.shanghai.arm.com (phil-VirtualBox.shanghai.arm.com [10.169.109.153])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 9607B3F71E;
	Tue,  7 Jul 2020 04:14:31 -0700 (PDT)
From: Phil Yang <phil.yang@arm.com>
To: thomas@monjalon.net, erik.g.carrillo@intel.com, dev@dpdk.org
Cc: jerinj@marvell.com, Honnappa.Nagarahalli@arm.com, drc@linux.vnet.ibm.com,
	Ruifeng.Wang@arm.com, Dharmik.Thakkar@arm.com, nd@arm.com,
	david.marchand@redhat.com, mdr@ashroe.eu, nhorman@tuxdriver.com,
	dodji@redhat.com, stable@dpdk.org
Date: Tue,  7 Jul 2020 19:13:20 +0800
Message-Id: <1594120403-17643-1-git-send-email-phil.yang@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1593667604-12029-1-git-send-email-phil.yang@arm.com>
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v3 1/4] eventdev: fix race condition on timer list counter
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The n_poll_lcores counter and the poll_lcores array are shared between
lcores, but updates to them are not protected by the spinlock on each
lcore's timer list. The read-modify-write of the counter is not atomic,
so there is a potential race condition between lcores. Use C11 atomics
with RELAXED ordering to prevent the conflict.

Fixes: cc7b73ea9e3b ("eventdev: add new software timer adapter")
Cc: erik.g.carrillo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Phil Yang
Reviewed-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Acked-by: Erik Gabriel Carrillo
---
v2: Align the code. (Erik)
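
Not part of the patch: below is a minimal standalone sketch of the
slot-reservation idiom the diff switches to, for anyone who wants to see
the race in isolation. It uses the same GCC/Clang __atomic builtins as the
patch; the array and counter names mirror the adapter's fields, but
MAX_POLL_LCORES, the register_lcore_*() helpers and the pthread harness are
invented here purely for illustration.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_POLL_LCORES 128

static uint16_t poll_lcores[MAX_POLL_LCORES];	/* shared between threads */
static uint16_t n_poll_lcores;			/* shared counter */

/* Racy variant (the old pattern): two threads can read the same counter
 * value, write the same slot, and one registration is silently lost.
 */
static void register_lcore_racy(uint16_t lcore_id)
{
	poll_lcores[n_poll_lcores] = lcore_id;
	++n_poll_lcores;
}

/* Fixed variant (the idiom used in the patch): the fetch-add hands each
 * caller a unique index, so no slot can be claimed twice. RELAXED ordering
 * is enough here because only the uniqueness of the index matters, not
 * ordering against other memory accesses.
 */
static void register_lcore_atomic(uint16_t lcore_id)
{
	uint16_t idx = __atomic_fetch_add(&n_poll_lcores, 1, __ATOMIC_RELAXED);

	__atomic_store_n(&poll_lcores[idx], lcore_id, __ATOMIC_RELAXED);
}

static void *worker(void *arg)
{
	register_lcore_atomic((uint16_t)(uintptr_t)arg);
	return NULL;
}

int main(void)
{
	pthread_t threads[4];
	uintptr_t i;

	for (i = 0; i < 4; i++)
		pthread_create(&threads[i], NULL, worker, (void *)i);
	for (i = 0; i < 4; i++)
		pthread_join(threads[i], NULL);

	/* After the joins, a plain read of the counter is fine. */
	printf("registered %u lcores\n", (unsigned int)n_poll_lcores);
	return 0;
}

Build with something like "gcc -O2 -pthread sketch.c"; swapping in
register_lcore_racy() and running many threads (or a thread sanitizer)
shows the lost registrations the commit message describes.
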
 lib/librte_eventdev/rte_event_timer_adapter.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
index 2321803..370ea40 100644
--- a/lib/librte_eventdev/rte_event_timer_adapter.c
+++ b/lib/librte_eventdev/rte_event_timer_adapter.c
@@ -583,6 +583,7 @@ swtim_callback(struct rte_timer *tim)
 	uint16_t nb_evs_invalid = 0;
 	uint64_t opaque;
 	int ret;
+	int n_lcores;
 
 	opaque = evtim->impl_opaque[1];
 	adapter = (struct rte_event_timer_adapter *)(uintptr_t)opaque;
@@ -605,8 +606,12 @@ swtim_callback(struct rte_timer *tim)
 					"with immediate expiry value");
 		}
 
-		if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v)))
-			sw->poll_lcores[sw->n_poll_lcores++] = lcore;
+		if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore].v))) {
+			n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
+						      __ATOMIC_RELAXED);
+			__atomic_store_n(&sw->poll_lcores[n_lcores], lcore,
+					 __ATOMIC_RELAXED);
+		}
 	} else {
 		EVTIM_BUF_LOG_DBG("buffered an event timer expiry event");
 
@@ -1011,6 +1016,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	uint32_t lcore_id = rte_lcore_id();
 	struct rte_timer *tim, *tims[nb_evtims];
 	uint64_t cycles;
+	int n_lcores;
 
 #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
 	/* Check that the service is running. */
@@ -1033,8 +1039,10 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
 	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v))) {
 		EVTIM_LOG_DBG("Adding lcore id = %u to list of lcores to poll",
 			      lcore_id);
-		sw->poll_lcores[sw->n_poll_lcores] = lcore_id;
-		++sw->n_poll_lcores;
+		n_lcores = __atomic_fetch_add(&sw->n_poll_lcores, 1,
+					      __ATOMIC_RELAXED);
+		__atomic_store_n(&sw->poll_lcores[n_lcores], lcore_id,
+				 __ATOMIC_RELAXED);
 	}
 
 	ret = rte_mempool_get_bulk(sw->tim_pool, (void **)tims,
-- 
2.7.4