From mboxrd@z Thu Jan 1 00:00:00 1970
References: <1593667604-12029-1-git-send-email-phil.yang@arm.com>
 <1594120403-17643-1-git-send-email-phil.yang@arm.com>
 <1594120403-17643-4-git-send-email-phil.yang@arm.com>
In-Reply-To: <1594120403-17643-4-git-send-email-phil.yang@arm.com>
From: Jerin Jacob
Date: Tue, 7 Jul 2020 19:59:30 +0530
To: Phil Yang
Cc: Thomas Monjalon, Erik Gabriel Carrillo, dpdk-dev, Jerin Jacob,
 Honnappa Nagarahalli, David Christensen, "Ruifeng Wang (Arm Technology China)",
 Dharmik Thakkar, nd, David Marchand, Ray Kinsella, Neil Horman, dodji@redhat.com
Subject: Re: [dpdk-dev] [PATCH v3 4/4] eventdev: relax smp barriers with C11 atomics
List-Id: DPDK patches and discussions

On Tue, Jul 7, 2020 at 4:45 PM Phil Yang wrote:
>
> The impl_opaque field is shared between the timer arm and cancel
> operations. Meanwhile, the state flag acts as a guard variable to
> make sure the update of impl_opaque is synchronized. The original
> code uses rte_smp barriers to achieve that. This patch uses C11
> atomics with explicit one-way memory ordering instead of the full
> barriers rte_smp_wmb()/rte_smp_rmb(), avoiding unnecessary barriers
> on aarch64.
>
> Since compilers generate the same instructions for volatile and
> non-volatile variables with C11 __atomic built-ins, the volatile
> keyword is kept in front of the state enum to avoid an ABI break.
>
> Signed-off-by: Phil Yang
> Reviewed-by: Dharmik Thakkar
> Reviewed-by: Ruifeng Wang
> Acked-by: Erik Gabriel Carrillo

Could you fix the following:

WARNING:TYPO_SPELLING: 'opague' may be misspelled - perhaps 'opaque'?
#184: FILE: lib/librte_eventdev/rte_event_timer_adapter.c:1161:
+ * specific opague data under the correct state.

total: 0 errors, 1 warnings, 124 lines checked

> ---
> v3:
> Fix ABI issue: revert to 'volatile enum rte_event_timer_state type state'.
>
> v2:
> 1. Removed implementation-specific opaque data cleanup code.
> 2. Replaced thread fence with atomic ACQUIRE/RELEASE ordering on state access.
>
>  lib/librte_eventdev/rte_event_timer_adapter.c | 55 ++++++++++++++++++---------
>  1 file changed, 37 insertions(+), 18 deletions(-)
>
> diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c b/lib/librte_eventdev/rte_event_timer_adapter.c
> index d75415c..eb2c93a 100644
> --- a/lib/librte_eventdev/rte_event_timer_adapter.c
> +++ b/lib/librte_eventdev/rte_event_timer_adapter.c
> @@ -629,7 +629,8 @@ swtim_callback(struct rte_timer *tim)
>  		sw->expired_timers[sw->n_expired_timers++] = tim;
>  		sw->stats.evtim_exp_count++;
>
> -		evtim->state = RTE_EVENT_TIMER_NOT_ARMED;
> +		__atomic_store_n(&evtim->state, RTE_EVENT_TIMER_NOT_ARMED,
> +				__ATOMIC_RELEASE);
>  	}
>
>  	if (event_buffer_batch_ready(&sw->buffer)) {
> @@ -1020,6 +1021,7 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>  	int n_lcores;
>  	/* Timer list for this lcore is not in use. */
>  	uint16_t exp_state = 0;
> +	enum rte_event_timer_state n_state;
>
>  #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  	/* Check that the service is running. */
> @@ -1060,30 +1062,36 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>  	}
>
>  	for (i = 0; i < nb_evtims; i++) {
> -		/* Don't modify the event timer state in these cases */
> -		if (evtims[i]->state == RTE_EVENT_TIMER_ARMED) {
> +		n_state = __atomic_load_n(&evtims[i]->state, __ATOMIC_ACQUIRE);
> +		if (n_state == RTE_EVENT_TIMER_ARMED) {
>  			rte_errno = EALREADY;
>  			break;
> -		} else if (!(evtims[i]->state == RTE_EVENT_TIMER_NOT_ARMED ||
> -			     evtims[i]->state == RTE_EVENT_TIMER_CANCELED)) {
> +		} else if (!(n_state == RTE_EVENT_TIMER_NOT_ARMED ||
> +			     n_state == RTE_EVENT_TIMER_CANCELED)) {
>  			rte_errno = EINVAL;
>  			break;
>  		}
>
>  		ret = check_timeout(evtims[i], adapter);
>  		if (unlikely(ret == -1)) {
> -			evtims[i]->state = RTE_EVENT_TIMER_ERROR_TOOLATE;
> +			__atomic_store_n(&evtims[i]->state,
> +					RTE_EVENT_TIMER_ERROR_TOOLATE,
> +					__ATOMIC_RELAXED);
>  			rte_errno = EINVAL;
>  			break;
>  		} else if (unlikely(ret == -2)) {
> -			evtims[i]->state = RTE_EVENT_TIMER_ERROR_TOOEARLY;
> +			__atomic_store_n(&evtims[i]->state,
> +					RTE_EVENT_TIMER_ERROR_TOOEARLY,
> +					__ATOMIC_RELAXED);
>  			rte_errno = EINVAL;
>  			break;
>  		}
>
>  		if (unlikely(check_destination_event_queue(evtims[i],
>  							   adapter) < 0)) {
> -			evtims[i]->state = RTE_EVENT_TIMER_ERROR;
> +			__atomic_store_n(&evtims[i]->state,
> +					RTE_EVENT_TIMER_ERROR,
> +					__ATOMIC_RELAXED);
>  			rte_errno = EINVAL;
>  			break;
>  		}
> @@ -1099,13 +1107,18 @@ __swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
>  				  SINGLE, lcore_id, NULL, evtims[i]);
>  		if (ret < 0) {
>  			/* tim was in RUNNING or CONFIG state */
> -			evtims[i]->state = RTE_EVENT_TIMER_ERROR;
> +			__atomic_store_n(&evtims[i]->state,
> +					RTE_EVENT_TIMER_ERROR,
> +					__ATOMIC_RELEASE);
>  			break;
>  		}
>
> -		rte_smp_wmb();
>  		EVTIM_LOG_DBG("armed an event timer");
> +		/* RELEASE ordering guarantees the adapter specific value
> +		 * changes observed before the update of state.
> +		 */
> +		__atomic_store_n(&evtims[i]->state, RTE_EVENT_TIMER_ARMED,
> +				__ATOMIC_RELEASE);
>  	}
>
>  	if (i < nb_evtims)
> @@ -1132,6 +1145,7 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,
>  	struct rte_timer *timp;
>  	uint64_t opaque;
>  	struct swtim *sw = swtim_pmd_priv(adapter);
> +	enum rte_event_timer_state n_state;
>
>  #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  	/* Check that the service is running. */
> @@ -1143,16 +1157,18 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,
>
>  	for (i = 0; i < nb_evtims; i++) {
>  		/* Don't modify the event timer state in these cases */
> -		if (evtims[i]->state == RTE_EVENT_TIMER_CANCELED) {
> +		/* ACQUIRE ordering guarantees the access of implementation
> +		 * specific opague data under the correct state.
> +		 */
> +		n_state = __atomic_load_n(&evtims[i]->state, __ATOMIC_ACQUIRE);
> +		if (n_state == RTE_EVENT_TIMER_CANCELED) {
>  			rte_errno = EALREADY;
>  			break;
> -		} else if (evtims[i]->state != RTE_EVENT_TIMER_ARMED) {
> +		} else if (n_state != RTE_EVENT_TIMER_ARMED) {
>  			rte_errno = EINVAL;
>  			break;
>  		}
>
> -		rte_smp_rmb();
> -
>  		opaque = evtims[i]->impl_opaque[0];
>  		timp = (struct rte_timer *)(uintptr_t)opaque;
>  		RTE_ASSERT(timp != NULL);
> @@ -1166,9 +1182,12 @@ swtim_cancel_burst(const struct rte_event_timer_adapter *adapter,
>
>  		rte_mempool_put(sw->tim_pool, (void **)timp);
>
> -		evtims[i]->state = RTE_EVENT_TIMER_CANCELED;
> -
> -		rte_smp_wmb();
> +		/* The RELEASE ordering here pairs with atomic ordering
> +		 * to make sure the state update data observed between
> +		 * threads.
> +		 */
> +		__atomic_store_n(&evtims[i]->state, RTE_EVENT_TIMER_CANCELED,
> +				__ATOMIC_RELEASE);
>  	}
>
>  	return i;
> --
> 2.7.4
>