From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Mattias Rönnblom <mattias.ronnblom@ericsson.com>, Morten Brørup <mb@smartsharesystems.com>,
 Abdullah Sevincer <abdullah.sevincer@intel.com>, Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Alok Prasad <palok@marvell.com>, Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>, Anoob Joseph <anoobj@marvell.com>,
 Bruce Richardson <bruce.richardson@intel.com>, Byron Marohn <byron.marohn@intel.com>,
 Chenbo Xia <chenbox@nvidia.com>, Chengwen Feng <fengchengwen@huawei.com>,
 Ciara Loftus <ciara.loftus@intel.com>, Ciara Power <ciara.power@intel.com>,
 Dariusz Sosnowski <dsosnowski@nvidia.com>, David Hunt <david.hunt@intel.com>,
 Devendra Singh Rawat <dsinghrawat@marvell.com>, Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Guoyang Zhou <zhouguoyang@huawei.com>, Harman Kalra <hkalra@marvell.com>,
 Harry van Haaren <harry.van.haaren@intel.com>, Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
 Jakub Grajciar <jgrajcia@cisco.com>, Jerin Jacob <jerinj@marvell.com>,
 Jeroen de Borst <jeroendb@google.com>, Jian Wang <jianwang@trustnetic.com>,
 Jiawen Wu <jiawenwu@trustnetic.com>, Jie Hai <haijie1@huawei.com>,
 Jingjing Wu <jingjing.wu@intel.com>, Joshua Washington <joshwash@google.com>,
 Joyce Kong <joyce.kong@arm.com>, Junfeng Guo <junfeng.guo@intel.com>,
 Kevin Laatz <kevin.laatz@intel.com>, Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Liang Ma <liangma@liangbit.com>, Long Li <longli@microsoft.com>,
 Maciej Czekaj <mczekaj@marvell.com>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>, Nicolas Chautru <nicolas.chautru@intel.com>,
 Ori Kam <orika@nvidia.com>, Pavan Nikhilesh <pbhagavatula@marvell.com>,
 Peter Mccarthy <peter.mccarthy@intel.com>, Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
 Reshma Pattan <reshma.pattan@intel.com>, Rosen Xu <rosen.xu@intel.com>,
 Ruifeng Wang <ruifeng.wang@arm.com>, Rushil Gupta <rushilg@google.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>, Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
 Somnath Kotur <somnath.kotur@broadcom.com>, Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>, Tetsuya Mukawa <mtetsuyah@gmail.com>,
 Vamsi Attunuru <vattunuru@marvell.com>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>, Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
 Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>, Yipeng Wang <yipeng1.wang@intel.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>, Yuying Zhang <yuying.zhang@intel.com>,
 Ziyang Xuan <xuanziyang2@huawei.com>, Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v2 31/45] baseband/acc: use rte stdatomic API
Date: Thu, 21 Mar 2024 12:17:18 -0700
Message-Id: <1711048652-7512-32-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1711048652-7512-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1711048652-7512-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 drivers/baseband/acc/rte_acc100_pmd.c | 36 +++++++++++++--------------
 drivers/baseband/acc/rte_vrb_pmd.c    | 46 +++++++++++++++++++++++------------
 2 files changed, 48 insertions(+), 34 deletions(-)

diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 4f666e5..ee50b9c 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -3673,8 +3673,8 @@
 	desc_idx = acc_desc_idx_tail(q, *dequeued_descs);
 	desc = q->ring_addr + desc_idx;
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3728,8 +3728,8 @@
 	uint16_t current_dequeued_descs = 0, descs_in_tb;
 
 	desc = acc_desc_tail(q, *dequeued_descs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3742,8 +3742,8 @@
 	/* Check if last CB in TB is ready to dequeue (and thus
 	 * the whole TB) - checking sdone bit. If not return.
 	 */
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)last_desc,
+			rte_memory_order_relaxed);
 	if (!(atom_desc.rsp.val & ACC_SDONE))
 		return -1;
@@ -3755,8 +3755,8 @@
 	while (i < descs_in_tb) {
 		desc = acc_desc_tail(q, *dequeued_descs);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-				__ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+				rte_memory_order_relaxed);
 		rsp.val = atom_desc.rsp.val;
 		rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d\n",
 				desc, rsp.val, descs_in_tb, desc->req.numCBs);
@@ -3793,8 +3793,8 @@
 	struct rte_bbdev_dec_op *op;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3846,8 +3846,8 @@
 	struct rte_bbdev_dec_op *op;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3902,8 +3902,8 @@
 	uint8_t cbs_in_tb = 1, cb_idx = 0;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3919,8 +3919,8 @@
 	/* Check if last CB in TB is ready to dequeue (and thus
 	 * the whole TB) - checking sdone bit. If not return.
 	 */
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc,
-			__ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)last_desc,
+			rte_memory_order_relaxed);
 	if (!(atom_desc.rsp.val & ACC_SDONE))
 		return -1;
@@ -3930,8 +3930,8 @@
 	/* Read remaining CBs if exists */
 	while (cb_idx < cbs_in_tb) {
 		desc = acc_desc_tail(q, dequeued_cbs);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
-				__ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+				rte_memory_order_relaxed);
 		rsp.val = atom_desc.rsp.val;
 		rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d\n",
 				desc, rsp.val, cb_idx, cbs_in_tb);
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 88b1104..f7c54be 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -3119,7 +3119,8 @@
 	desc_idx = acc_desc_idx_tail(q, *dequeued_descs);
 	desc = q->ring_addr + desc_idx;
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	if (*dequeued_ops + desc->req.numCBs > max_requested_ops)
 		return -1;
@@ -3157,7 +3158,8 @@
 	struct rte_bbdev_enc_op *op;
 
 	desc = acc_desc_tail(q, *dequeued_descs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3192,7 +3194,8 @@
 	uint16_t current_dequeued_descs = 0, descs_in_tb;
 
 	desc = acc_desc_tail(q, *dequeued_descs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	if (*dequeued_ops + 1 > max_requested_ops)
 		return -1;
@@ -3208,7 +3211,8 @@
 	/* Check if last CB in TB is ready to dequeue (and thus
 	 * the whole TB) - checking sdone bit. If not return.
 	 */
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)last_desc,
+			rte_memory_order_relaxed);
 	if (!(atom_desc.rsp.val & ACC_SDONE))
 		return -1;
@@ -3220,7 +3224,8 @@
 	while (i < descs_in_tb) {
 		desc = acc_desc_tail(q, *dequeued_descs);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+				rte_memory_order_relaxed);
 		rsp.val = atom_desc.rsp.val;
 
 		vrb_update_dequeued_operation(desc, rsp, &op->status, aq_dequeued, true, false);
@@ -3246,7 +3251,8 @@
 	struct rte_bbdev_dec_op *op;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3290,7 +3296,8 @@
 	struct rte_bbdev_dec_op *op;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3346,7 +3353,8 @@
 	uint32_t tb_crc_check = 0;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -3362,7 +3370,8 @@
 	/* Check if last CB in TB is ready to dequeue (and thus the whole TB) - checking sdone bit.
 	 * If not return.
 	 */
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)last_desc,
+			rte_memory_order_relaxed);
 	if (!(atom_desc.rsp.val & ACC_SDONE))
 		return -1;
@@ -3372,7 +3381,8 @@
 	/* Read remaining CBs if exists. */
 	while (cb_idx < cbs_in_tb) {
 		desc = acc_desc_tail(q, dequeued_cbs);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+				rte_memory_order_relaxed);
 		rsp.val = atom_desc.rsp.val;
 		rte_bbdev_log_debug("Resp. desc %p: %x %x %x",
 				desc, rsp.val, desc->rsp.add_info_0,
@@ -3790,7 +3800,8 @@
 	struct rte_bbdev_fft_op *op;
 
 	desc = acc_desc_tail(q, dequeued_cbs);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -4116,7 +4127,8 @@
 	uint8_t descs_in_op, i;
 
 	desc = acc_desc_tail(q, dequeued_ops);
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+			rte_memory_order_relaxed);
 
 	/* Check fdone bit. */
 	if (!(atom_desc.rsp.val & ACC_FDONE))
@@ -4127,7 +4139,8 @@
 	/* Get last CB. */
 	last_desc = acc_desc_tail(q, dequeued_ops + descs_in_op - 1);
 	/* Check if last op is ready to dequeue by checking fdone bit. If not exit.
 	 */
-	atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc, __ATOMIC_RELAXED);
+	atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)last_desc,
+			rte_memory_order_relaxed);
 	if (!(atom_desc.rsp.val & ACC_FDONE))
 		return -1;
 
 #ifdef RTE_LIBRTE_BBDEV_DEBUG
@@ -4137,8 +4150,8 @@
 	for (i = 1; i < descs_in_op - 1; i++) {
 		last_desc = q->ring_addr + ((q->sw_ring_tail + dequeued_ops + i)
 				& q->sw_ring_wrap_mask);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)last_desc,
-				__ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit(
+				(uint64_t __rte_atomic *)last_desc, rte_memory_order_relaxed);
 		if (!(atom_desc.rsp.val & ACC_FDONE))
 			return -1;
 	}
@@ -4154,7 +4167,8 @@
 	for (i = 0; i < descs_in_op; i++) {
 		desc = q->ring_addr + ((q->sw_ring_tail + dequeued_ops + i)
 				& q->sw_ring_wrap_mask);
-		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
+		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
+				rte_memory_order_relaxed);
 		rsp.val = atom_desc.rsp.val;
 
 		vrb_update_dequeued_operation(desc, rsp, &op->status, aq_dequeued, true, false);
-- 
1.8.3.1
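
The conversion is mechanical: each __atomic_load_n(p, __ATOMIC_RELAXED)
becomes rte_atomic_load_explicit((uint64_t __rte_atomic *)p,
rte_memory_order_relaxed), preserving the relaxed ordering the drivers
already used to poll the device-written response word. For readers
unfamiliar with the rte stdatomic wrappers, below is a minimal standalone
sketch of the same pattern. It assumes a build where RTE_ENABLE_STDATOMIC
is defined, in which case the wrappers map to C11 <stdatomic.h> and
__rte_atomic expands to _Atomic (without it they fall back to the
__atomic_* builtins). The demo_desc type, demo_load_hdr(), and DEMO_FDONE
are hypothetical stand-ins for the drivers' descriptor union and
ACC_FDONE, not DPDK definitions.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_FDONE 0x1 /* stand-in for the ACC_FDONE "descriptor done" bit */

struct demo_desc {
	uint64_t val; /* 64-bit response word the device updates in place */
};

static uint64_t
demo_load_hdr(struct demo_desc *desc)
{
	/* Before: __atomic_load_n((uint64_t *)desc, __ATOMIC_RELAXED);
	 * After, with RTE_ENABLE_STDATOMIC, rte_atomic_load_explicit()
	 * expands to the C11 call below, and the cast target carries the
	 * _Atomic qualifier that __rte_atomic provides in the patch.
	 */
	return atomic_load_explicit((_Atomic uint64_t *)(void *)&desc->val,
			memory_order_relaxed);
}

int
main(void)
{
	struct demo_desc d = { .val = DEMO_FDONE };

	/* Poll the done bit the same way the dequeue paths above do. */
	if (demo_load_hdr(&d) & DEMO_FDONE)
		printf("descriptor done\n");
	return 0;
}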