From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Honnappa.Nagarahalli@arm.com, thomas@monjalon.net, bruce.richardson@intel.com,
	mb@smartsharesystems.com, Ruifeng.Wang@arm.com, maxime.coquelin@redhat.com,
	Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v2 12/17] net/cxgbe: use previous value atomic fetch operations
Date: Thu, 2 Mar 2023 08:18:17 -0800
Message-Id: <1677773902-5167-13-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1677773902-5167-1-git-send-email-roretzla@linux.microsoft.com>
References: <1677718068-2412-1-git-send-email-roretzla@linux.microsoft.com>
	<1677773902-5167-1-git-send-email-roretzla@linux.microsoft.com>

Use __atomic_fetch_{add,and,or,sub,xor} instead of
__atomic_{add,and,or,sub,xor}_fetch when we have no interest in the
result of the operation.

This reduces unnecessary codegen that materializes the result of the
atomic operation when that result is never used.

The change brings closer alignment with the atomics available in the
C11 standard and will reduce review effort when they are integrated.
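As background, and not part of the driver change itself:
__atomic_fetch_add() returns the value the object held before the
addition, while __atomic_add_fetch() returns the value after it. When
the caller discards the return value, the previous-value form lets the
compiler drop the extra arithmetic that would otherwise materialize the
new value on some targets. A minimal sketch with a hypothetical
reference counter (names are illustrative, not taken from the driver):

	#include <stdint.h>

	static uint32_t refcnt;	/* hypothetical counter, for illustration */

	static void take_ref(void)
	{
		/* Result is discarded, so the previous-value form suffices. */
		__atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED);
	}

	static int put_ref_is_last(void)
	{
		/* Here the new value is needed, so the *_fetch form stays. */
		return __atomic_sub_fetch(&refcnt, 1, __ATOMIC_RELAXED) == 0;
	}

The hunks below only touch call sites of the first kind, where the
return value is unused.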
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 drivers/net/cxgbe/clip_tbl.c   |  2 +-
 drivers/net/cxgbe/cxgbe_main.c | 12 ++++++------
 drivers/net/cxgbe/l2t.c        |  4 ++--
 drivers/net/cxgbe/mps_tcam.c   |  2 +-
 drivers/net/cxgbe/smt.c        |  4 ++--
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/net/cxgbe/clip_tbl.c b/drivers/net/cxgbe/clip_tbl.c
index 072fc74..ce715f2 100644
--- a/drivers/net/cxgbe/clip_tbl.c
+++ b/drivers/net/cxgbe/clip_tbl.c
@@ -129,7 +129,7 @@ static struct clip_entry *t4_clip_alloc(struct rte_eth_dev *dev,
 			ce->type = FILTER_TYPE_IPV4;
 		}
 	} else {
-		__atomic_add_fetch(&ce->refcnt, 1, __ATOMIC_RELAXED);
+		__atomic_fetch_add(&ce->refcnt, 1, __ATOMIC_RELAXED);
 	}
 	t4_os_unlock(&ce->lock);
 }
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index f8dd833..c479454 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -418,14 +418,14 @@ void cxgbe_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
 
 	if (t->tid_tab[tid]) {
 		t->tid_tab[tid] = NULL;
-		__atomic_sub_fetch(&t->conns_in_use, 1, __ATOMIC_RELAXED);
+		__atomic_fetch_sub(&t->conns_in_use, 1, __ATOMIC_RELAXED);
 		if (t->hash_base && tid >= t->hash_base) {
 			if (family == FILTER_TYPE_IPV4)
-				__atomic_sub_fetch(&t->hash_tids_in_use, 1,
+				__atomic_fetch_sub(&t->hash_tids_in_use, 1,
 						   __ATOMIC_RELAXED);
 		} else {
 			if (family == FILTER_TYPE_IPV4)
-				__atomic_sub_fetch(&t->tids_in_use, 1,
+				__atomic_fetch_sub(&t->tids_in_use, 1,
 						   __ATOMIC_RELAXED);
 		}
 	}
@@ -448,15 +448,15 @@ void cxgbe_insert_tid(struct tid_info *t, void *data, unsigned int tid,
 	t->tid_tab[tid] = data;
 	if (t->hash_base && tid >= t->hash_base) {
 		if (family == FILTER_TYPE_IPV4)
-			__atomic_add_fetch(&t->hash_tids_in_use, 1,
+			__atomic_fetch_add(&t->hash_tids_in_use, 1,
 					   __ATOMIC_RELAXED);
 	} else {
 		if (family == FILTER_TYPE_IPV4)
-			__atomic_add_fetch(&t->tids_in_use, 1,
+			__atomic_fetch_add(&t->tids_in_use, 1,
 					   __ATOMIC_RELAXED);
 	}
 
-	__atomic_add_fetch(&t->conns_in_use, 1, __ATOMIC_RELAXED);
+	__atomic_fetch_add(&t->conns_in_use, 1, __ATOMIC_RELAXED);
 }
 
 /**
diff --git a/drivers/net/cxgbe/l2t.c b/drivers/net/cxgbe/l2t.c
index 66f5789..21f4019 100644
--- a/drivers/net/cxgbe/l2t.c
+++ b/drivers/net/cxgbe/l2t.c
@@ -15,7 +15,7 @@
 void cxgbe_l2t_release(struct l2t_entry *e)
 {
 	if (__atomic_load_n(&e->refcnt, __ATOMIC_RELAXED) != 0)
-		__atomic_sub_fetch(&e->refcnt, 1, __ATOMIC_RELAXED);
+		__atomic_fetch_sub(&e->refcnt, 1, __ATOMIC_RELAXED);
 }
 
 /**
@@ -162,7 +162,7 @@ static struct l2t_entry *t4_l2t_alloc_switching(struct rte_eth_dev *dev,
 			dev_debug(adap, "Failed to write L2T entry: %d",
 				  ret);
 		} else {
-			__atomic_add_fetch(&e->refcnt, 1, __ATOMIC_RELAXED);
+			__atomic_fetch_add(&e->refcnt, 1, __ATOMIC_RELAXED);
 		}
 		t4_os_unlock(&e->lock);
 	}
diff --git a/drivers/net/cxgbe/mps_tcam.c b/drivers/net/cxgbe/mps_tcam.c
index abbf06e..017741f 100644
--- a/drivers/net/cxgbe/mps_tcam.c
+++ b/drivers/net/cxgbe/mps_tcam.c
@@ -76,7 +76,7 @@ int cxgbe_mpstcam_alloc(struct port_info *pi, const u8 *eth_addr,
 	t4_os_write_lock(&mpstcam->lock);
 	entry = cxgbe_mpstcam_lookup(adap->mpstcam, eth_addr, mask);
 	if (entry) {
-		__atomic_add_fetch(&entry->refcnt, 1, __ATOMIC_RELAXED);
+		__atomic_fetch_add(&entry->refcnt, 1, __ATOMIC_RELAXED);
 		t4_os_write_unlock(&mpstcam->lock);
 		return entry->idx;
 	}
diff --git a/drivers/net/cxgbe/smt.c b/drivers/net/cxgbe/smt.c
index 810c757..4e14a73 100644
--- a/drivers/net/cxgbe/smt.c
+++ b/drivers/net/cxgbe/smt.c
@@ -170,7 +170,7 @@ static struct smt_entry *t4_smt_alloc_switching(struct rte_eth_dev *dev,
 			e->state = SMT_STATE_SWITCHING;
 			__atomic_store_n(&e->refcnt, 1, __ATOMIC_RELAXED);
 		} else {
-			__atomic_add_fetch(&e->refcnt, 1, __ATOMIC_RELAXED);
+			__atomic_fetch_add(&e->refcnt, 1, __ATOMIC_RELAXED);
 		}
 		t4_os_unlock(&e->lock);
 	}
@@ -196,7 +196,7 @@ struct smt_entry *cxgbe_smt_alloc_switching(struct rte_eth_dev *dev, u8 *smac)
 void cxgbe_smt_release(struct smt_entry *e)
 {
 	if (__atomic_load_n(&e->refcnt, __ATOMIC_RELAXED) != 0)
-		__atomic_sub_fetch(&e->refcnt, 1, __ATOMIC_RELAXED);
+		__atomic_fetch_sub(&e->refcnt, 1, __ATOMIC_RELAXED);
 }
 
 /**
-- 
1.8.3.1