From: David Marchand
Date: Tue, 21 Jul 2020 21:50:38 +0200
Subject: Re: [dpdk-dev] [PATCH v2 1/2] service: add API to retrieve service core active
To: Honnappa Nagarahalli
Cc: Harry van Haaren, dev@dpdk.org, igor.romanov@oktetlabs.ru, ferruh.yigit@intel.com, nd, aconole@redhat.com, l.wojciechow@partner.samsung.com, Phil Yang
References: <20200720120938.34660-1-harry.van.haaren@intel.com> <20200720143829.46280-1-harry.van.haaren@intel.com>

On Tue, Jul 21, 2020 at 9:43 PM Honnappa Nagarahalli wrote:
> > @@ -457,6 +458,8 @@ service_runner_func(void *arg)
> >  	const int lcore = rte_lcore_id();
> >  	struct core_state *cs = &lcore_states[lcore];
> >
> > +	__atomic_store_n(&cs->thread_active, 1, __ATOMIC_RELAXED);
> Essentially, we have to ensure that all the memory operations are
> contained within both stores (this one and the one below) to
> 'thread_active'. We should use __ATOMIC_SEQ_CST, which will avoid any
> memory operations from getting hoisted above this line. Performance
> should not be an issue, as this executes only when the service core
> is started. It would be good to add a comment explaining the memory
> order used.
>
> Also, what happens if the user calls 'rte_service_lcore_active' just
> before the above statement is executed? It might not be a valid use
> case, but it is good to document the race conditions and the correct
> sequence of API calls.
>
> > +
> >  	/* runstate act as the guard variable. Use load-acquire
> >  	 * memory order here to synchronize with store-release
> >  	 * in runstate update functions.
> > @@ -475,9 +478,20 @@ service_runner_func(void *arg)
> >  		cs->loops++;
> >  	}
> >
> > +	__atomic_store_n(&cs->thread_active, 0, __ATOMIC_RELAXED);
> __ATOMIC_RELEASE would be enough. But __ATOMIC_SEQ_CST should not
> cause any performance issues.

But then, are we missing an ACQUIRE barrier in rte_service_lcore_active?
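
To make that pairing concrete, here is a standalone sketch of what I
have in mind (the struct, function names and main() are illustrative
only; the 'thread_active' field mirrors the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the per-lcore state in the patch. */
struct core_state {
	uint32_t thread_active;
	uint64_t loops; /* stand-in for data written by the service thread */
};

static struct core_state cs;

/* Service thread side, cf. the end of service_runner_func(): the
 * store-RELEASE makes every write the thread did before exiting
 * visible to any reader that observes thread_active == 0 with an
 * acquire load.
 */
static void
service_thread_exit(void)
{
	cs.loops++; /* work done while the loop was running */
	__atomic_store_n(&cs.thread_active, 0, __ATOMIC_RELEASE);
}

/* Reader side, cf. rte_service_lcore_active(): the load-ACQUIRE pairs
 * with the store-release above. With a RELAXED load, a caller that
 * sees 0 could still miss writes the service thread did before it
 * exited.
 */
static uint32_t
lcore_active(void)
{
	return __atomic_load_n(&cs.thread_active, __ATOMIC_ACQUIRE);
}

int
main(void)
{
	cs.thread_active = 1;
	service_thread_exit();
	printf("active: %u, loops: %" PRIu64 "\n", lcore_active(), cs.loops);
	return 0;
}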
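
And on the earlier point about documenting the correct sequence of API
calls: the 'rte_service_lcore_stop' documentation could carry a snippet
along these lines (a sketch; the helper name and the rte_pause()
polling are mine, not part of the patch):

#include <rte_pause.h>
#include <rte_service.h>

/* Illustrative helper: request the stop, then wait until the service
 * thread has really left its polling loop before releasing anything
 * the services were using.
 */
static int
drain_service_lcore(uint32_t lcore_id)
{
	int ret = rte_service_lcore_stop(lcore_id);

	if (ret < 0)
		return ret;

	/* rte_service_lcore_stop() only requests the stop; the service
	 * thread may still be mid-iteration. Poll the new API until the
	 * thread reports it has returned to EAL.
	 */
	while (rte_service_lcore_active(lcore_id) == 1)
		rte_pause();

	return 0;
}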
> >  	return 0;
> >  }
> >
> > +int32_t
> > +rte_service_lcore_active(uint32_t lcore)
> > +{
> > +	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
> > +		return -EINVAL;
> > +
> > +	return __atomic_load_n(&lcore_states[lcore].thread_active,
> > +			__ATOMIC_RELAXED);
> > +}
> > +
> >  int32_t
> >  rte_service_lcore_count(void)
> >  {
> > diff --git a/lib/librte_eal/include/rte_service.h b/lib/librte_eal/include/rte_service.h
> > index e2d0a6dd3..f7134b5c0 100644
> > --- a/lib/librte_eal/include/rte_service.h
> > +++ b/lib/librte_eal/include/rte_service.h
> > @@ -261,6 +261,15 @@ int32_t rte_service_lcore_start(uint32_t lcore_id);
> >   */
> >  int32_t rte_service_lcore_stop(uint32_t lcore_id);
> >
> > +/**
> > + * Reports if a service lcore is currently running.
> > + * @retval 0 Service thread is not active, and has been returned to EAL.
> > + * @retval 1 Service thread is in the service core polling loop.
> > + * @retval -EINVAL Invalid *lcore_id* provided.
> > + */
> > +__rte_experimental
> > +int32_t rte_service_lcore_active(uint32_t lcore_id);
> Would 'rte_service_lcore_may_be_active' be better? It would be in line
> with 'rte_service_may_be_active'.
>
> I think we need additional documentation for 'rte_service_lcore_stop'
> to indicate that the caller should not assume that the service thread
> on the lcore has stopped running, and should call the above API to
> check.

+1. Additional documentation can't hurt.


-- 
David Marchand