From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Akhil Goyal <gakhil@marvell.com>, Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>, Bruce Richardson <bruce.richardson@intel.com>,
 Chenbo Xia <chenbo.xia@intel.com>, Ciara Power <ciara.power@intel.com>,
 David Christensen <drc@linux.vnet.ibm.com>, David Hunt <david.hunt@intel.com>,
 Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>, Dmitry Malloy <dmitrym@microsoft.com>,
 Elena Agostini <eagostini@nvidia.com>, Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Fan Zhang <fanzhang.oss@gmail.com>, Ferruh Yigit <ferruh.yigit@amd.com>,
 Harman Kalra <hkalra@marvell.com>, Harry van Haaren <harry.van.haaren@intel.com>,
 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>, Jerin Jacob <jerinj@marvell.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>, Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>,
 Nicolas Chautru <nicolas.chautru@intel.com>, Olivier Matz <olivier.matz@6wind.com>,
 Ori Kam <orika@nvidia.com>, Pallavi Kadam <pallavi.kadam@intel.com>,
 Pavan Nikhilesh <pbhagavatula@marvell.com>, Reshma Pattan <reshma.pattan@intel.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>, Shijith Thotton <sthotton@marvell.com>,
 Sivaprasad Tummala <sivaprasad.tummala@amd.com>, Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Thomas Monjalon <thomas@monjalon.net>, Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Vladimir Medvedkin <vladimir.medvedkin@intel.com>, Yipeng Wang <yipeng1.wang@intel.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v3 07/19] mbuf: use rte optional stdatomic API
Date: Wed, 25 Oct 2023 17:31:42 -0700
Message-Id: <1698280314-25861-8-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1698280314-25861-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
 <1698280314-25861-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of gcc builtin __atomic_xxx intrinsics with
corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/mbuf/rte_mbuf.h      | 20 ++++++++++----------
 lib/mbuf/rte_mbuf_core.h |  5 +++--
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459..b8ab477 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -361,7 +361,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_refcnt_read(const struct rte_mbuf *m)
 {
-	return __atomic_load_n(&m->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&m->refcnt, rte_memory_order_relaxed);
 }

 /**
@@ -374,15 +374,15 @@ struct rte_pktmbuf_pool_private {
 static inline void
 rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 {
-	__atomic_store_n(&m->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&m->refcnt, new_value, rte_memory_order_relaxed);
 }

 /* internal */
 static inline uint16_t
 __rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
 {
-	return __atomic_fetch_add(&m->refcnt, value,
-			__ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&m->refcnt, value,
+			rte_memory_order_acq_rel) + value;
 }

 /**
@@ -463,7 +463,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_ext_refcnt_read(const struct rte_mbuf_ext_shared_info *shinfo)
 {
-	return __atomic_load_n(&shinfo->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&shinfo->refcnt, rte_memory_order_relaxed);
 }

 /**
@@ -478,7 +478,7 @@ struct rte_pktmbuf_pool_private {
 rte_mbuf_ext_refcnt_set(struct rte_mbuf_ext_shared_info *shinfo,
 	uint16_t new_value)
 {
-	__atomic_store_n(&shinfo->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&shinfo->refcnt, new_value, rte_memory_order_relaxed);
 }

 /**
@@ -502,8 +502,8 @@ struct rte_pktmbuf_pool_private {
 		return (uint16_t)value;
 	}

-	return __atomic_fetch_add(&shinfo->refcnt, value,
-			__ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&shinfo->refcnt, value,
+			rte_memory_order_acq_rel) + value;
 }

 /** Mbuf prefetch */
@@ -1315,8 +1315,8 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	 * Direct usage of add primitive to avoid
 	 * duplication of comparing with one.
 	 */
-	if (likely(__atomic_fetch_add(&shinfo->refcnt, -1,
-			__ATOMIC_ACQ_REL) - 1))
+	if (likely(rte_atomic_fetch_add_explicit(&shinfo->refcnt, -1,
+			rte_memory_order_acq_rel) - 1))
 		return 1;

 	/* Reinitialize counter before mbuf freeing. */
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index e9bc0d1..5688683 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -19,6 +19,7 @@
 #include <stdint.h>

 #include <rte_byteorder.h>
+#include <rte_stdatomic.h>

 #ifdef __cplusplus
 extern "C" {
@@ -497,7 +498,7 @@ struct rte_mbuf {
 	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
 	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
 	 */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;

 	/**
 	 * Number of segments. Only valid for the first segment of an mbuf
@@ -674,7 +675,7 @@ struct rte_mbuf {
 struct rte_mbuf_ext_shared_info {
 	rte_mbuf_extbuf_free_callback_t free_cb; /**< Free callback function */
 	void *fcb_opaque;                        /**< Free callback argument */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;
 };

 /** Maximum number of nb_segs allowed. */
-- 
1.8.3.1
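
For readers unfamiliar with the wrappers used above, the sketch below is not
part of the patch; the struct and helper names are invented for illustration.
It shows how a reference-count field and its helpers read once written against
the optional stdatomic API from <rte_stdatomic.h>, i.e. RTE_ATOMIC(),
rte_atomic_load_explicit(), rte_atomic_store_explicit() and
rte_atomic_fetch_add_explicit(). Depending on how DPDK is configured, these
map either to C11 atomics or to the gcc __atomic builtins, which is what makes
the stdatomic API "optional".

#include <stdint.h>
#include <stdio.h>

#include <rte_stdatomic.h>

/* Hypothetical refcounted object; mirrors how struct rte_mbuf and
 * struct rte_mbuf_ext_shared_info annotate their refcnt fields. */
struct refcounted {
	RTE_ATOMIC(uint16_t) refcnt;
};

/* Relaxed load, like rte_mbuf_refcnt_read(). */
static inline uint16_t
refcnt_read(const struct refcounted *r)
{
	return rte_atomic_load_explicit(&r->refcnt, rte_memory_order_relaxed);
}

/* Relaxed store, like rte_mbuf_refcnt_set(). */
static inline void
refcnt_set(struct refcounted *r, uint16_t new_value)
{
	rte_atomic_store_explicit(&r->refcnt, new_value, rte_memory_order_relaxed);
}

/* Acquire/release read-modify-write, like __rte_mbuf_refcnt_update();
 * returns the updated count. */
static inline uint16_t
refcnt_update(struct refcounted *r, int16_t value)
{
	return rte_atomic_fetch_add_explicit(&r->refcnt, value,
			rte_memory_order_acq_rel) + value;
}

int
main(void)
{
	struct refcounted r;

	refcnt_set(&r, 1);
	printf("after take: %u\n", (unsigned int)refcnt_update(&r, 1));  /* 2 */
	printf("after drop: %u\n", (unsigned int)refcnt_update(&r, -1)); /* 1 */
	printf("current:    %u\n", (unsigned int)refcnt_read(&r));       /* 1 */
	return 0;
}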