From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Akhil Goyal <gakhil@marvell.com>, Anatoly Burakov <anatoly.burakov@intel.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>, Bruce Richardson <bruce.richardson@intel.com>,
	Chenbo Xia <chenbo.xia@intel.com>, Ciara Power <ciara.power@intel.com>,
	David Christensen <drc@linux.vnet.ibm.com>, David Hunt <david.hunt@intel.com>,
	Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>, Dmitry Malloy <dmitrym@microsoft.com>,
	Elena Agostini <eagostini@nvidia.com>, Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
	Fan Zhang <fanzhang.oss@gmail.com>, Ferruh Yigit <ferruh.yigit@amd.com>,
	Harman Kalra <hkalra@marvell.com>, Harry van Haaren <harry.van.haaren@intel.com>,
	Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>, Jerin Jacob <jerinj@marvell.com>,
	Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>, Matan Azrad <matan@nvidia.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>, Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>,
	Nicolas Chautru <nicolas.chautru@intel.com>, Olivier Matz <olivier.matz@6wind.com>,
	Ori Kam <orika@nvidia.com>, Pallavi Kadam <pallavi.kadam@intel.com>,
	Pavan Nikhilesh <pbhagavatula@marvell.com>, Reshma Pattan <reshma.pattan@intel.com>,
	Sameh Gobriel <sameh.gobriel@intel.com>, Shijith Thotton <sthotton@marvell.com>,
	Sivaprasad Tummala <sivaprasad.tummala@amd.com>, Stephen Hemminger <stephen@networkplumber.org>,
	Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
	Thomas Monjalon <thomas@monjalon.net>, Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	Vladimir Medvedkin <vladimir.medvedkin@intel.com>, Yipeng Wang <yipeng1.wang@intel.com>,
	Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH 12/21] pdump: use rte optional stdatomic API
Date: Mon, 16 Oct 2023 16:08:56 -0700
Message-Id: <1697497745-20664-13-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/pdump/rte_pdump.c | 14 +++++++-------
 lib/pdump/rte_pdump.h |  8 ++++----
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 53cca10..80b90c6 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -110,8 +110,8 @@ struct pdump_response {
 		 * then packet doesn't match the filter (will be ignored).
 		 */
 		if (cbs->filter && rcs[i] == 0) {
-			__atomic_fetch_add(&stats->filtered,
-				1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&stats->filtered,
+				1, rte_memory_order_relaxed);
 			continue;
 		}
 
@@ -127,18 +127,18 @@ struct pdump_response {
 			p = rte_pktmbuf_copy(pkts[i], mp, 0, cbs->snaplen);
 
 		if (unlikely(p == NULL))
-			__atomic_fetch_add(&stats->nombuf, 1, __ATOMIC_RELAXED);
+			rte_atomic_fetch_add_explicit(&stats->nombuf, 1, rte_memory_order_relaxed);
 		else
 			dup_bufs[d_pkts++] = p;
 	}
 
-	__atomic_fetch_add(&stats->accepted, d_pkts, __ATOMIC_RELAXED);
+	rte_atomic_fetch_add_explicit(&stats->accepted, d_pkts, rte_memory_order_relaxed);
 
 	ring_enq = rte_ring_enqueue_burst(ring, (void *)&dup_bufs[0], d_pkts, NULL);
 	if (unlikely(ring_enq < d_pkts)) {
 		unsigned int drops = d_pkts - ring_enq;
 
-		__atomic_fetch_add(&stats->ringfull, drops, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&stats->ringfull, drops, rte_memory_order_relaxed);
 		rte_pktmbuf_free_bulk(&dup_bufs[ring_enq], drops);
 	}
 }
@@ -720,10 +720,10 @@ struct pdump_response {
 	uint16_t qid;
 
 	for (qid = 0; qid < nq; qid++) {
-		const uint64_t *perq = (const uint64_t *)&stats[port][qid];
+		const RTE_ATOMIC(uint64_t) *perq = (const uint64_t __rte_atomic *)&stats[port][qid];
 
 		for (i = 0; i < sizeof(*total) / sizeof(uint64_t); i++) {
-			val = __atomic_load_n(&perq[i], __ATOMIC_RELAXED);
+			val = rte_atomic_load_explicit(&perq[i], rte_memory_order_relaxed);
 			sum[i] += val;
 		}
 	}
diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
index b1a3918..7feb2b6 100644
--- a/lib/pdump/rte_pdump.h
+++ b/lib/pdump/rte_pdump.h
@@ -233,10 +233,10 @@ enum {
  * The statistics are sum of both receive and transmit queues.
  */
 struct rte_pdump_stats {
-	uint64_t accepted; /**< Number of packets accepted by filter. */
-	uint64_t filtered; /**< Number of packets rejected by filter. */
-	uint64_t nombuf; /**< Number of mbuf allocation failures. */
-	uint64_t ringfull; /**< Number of missed packets due to ring full. */
+	RTE_ATOMIC(uint64_t) accepted; /**< Number of packets accepted by filter. */
+	RTE_ATOMIC(uint64_t) filtered; /**< Number of packets rejected by filter. */
+	RTE_ATOMIC(uint64_t) nombuf; /**< Number of mbuf allocation failures. */
+	RTE_ATOMIC(uint64_t) ringfull; /**< Number of missed packets due to ring full. */
 
 	uint64_t reserved[4]; /**< Reserved and pad to cache line */
 };
-- 
1.8.3.1
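
Editor's note (not part of the patch): the sketch below illustrates the substitution
pattern applied above, assuming the RTE_ATOMIC()/rte_atomic_*_explicit wrappers from
rte_stdatomic.h used in this series; the demo_stats, demo_count and demo_read names
are hypothetical and chosen only for the example.

/*
 * Illustrative sketch only, not from rte_pdump: a relaxed statistics
 * increment and read written with the optional stdatomic wrappers,
 * mirroring the __atomic_xxx -> rte_atomic_xxx conversions above.
 */
#include <stdint.h>

#include <rte_stdatomic.h>

struct demo_stats {
	RTE_ATOMIC(uint64_t) accepted; /* counter updated from data-path threads */
};

static inline void
demo_count(struct demo_stats *st, uint64_t n)
{
	/* previously: __atomic_fetch_add(&st->accepted, n, __ATOMIC_RELAXED); */
	rte_atomic_fetch_add_explicit(&st->accepted, n, rte_memory_order_relaxed);
}

static inline uint64_t
demo_read(const struct demo_stats *st)
{
	/* previously: __atomic_load_n(&st->accepted, __ATOMIC_RELAXED); */
	return rte_atomic_load_explicit(&st->accepted, rte_memory_order_relaxed);
}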