Subject: [dpdk-dev] [PATCH v3 2/5] event/octeontx2: use opposite bucket to store current chunk
Date: Fri, 22 Nov 2019 21:14:27 +0530
Message-ID: <20191122154431.17416-2-pbhagavatula@marvell.com>
In-Reply-To: <20191122154431.17416-1-pbhagavatula@marvell.com>
References: <20191122154431.17416-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1

From: Pavan Nikhilesh

Since TIM buckets are always aligned to 32B and the cache line size is
128B, reading the current_chunk pointer always incurs a cache miss.
Avoid the miss by storing the current_chunk pointer in the bucket
opposite to the current bucket.
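Editorial note (not part of the patch): a minimal standalone sketch of the
mirror-bucket indexing described above, using simplified stand-in types
rather than the driver's real structures; flag names follow the patch.

	#include <stdint.h>

	/* Simplified stand-in for the driver's bucket (illustrative only). */
	struct tim_bkt {
		uint64_t w1;            /* semaphore word, bounced by producers */
		uint64_t current_chunk; /* chunk pointer, read on every arm */
	};

	/*
	 * The mirror bucket sits half the ring away from the target bucket,
	 * so bucket B's current_chunk is stored in bucket (B + nb_bkts / 2).
	 * With 32B buckets and a 128B cache line, any ring of at least eight
	 * buckets puts the mirror on a different cache line than the target.
	 */
	static inline uint32_t
	tim_mirr_bucket(uint32_t bucket, uint32_t nb_bkts, int pow2_ring)
	{
		if (pow2_ring) /* OTX2_TIM_BKT_AND case: mask, not modulo */
			return (bucket + (nb_bkts >> 1)) & (nb_bkts - 1);
		return (bucket + (nb_bkts >> 1)) % nb_bkts; /* OTX2_TIM_BKT_MOD */
	}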
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx2/otx2_tim_worker.h | 69 ++++++++++++++---------
 1 file changed, 41 insertions(+), 28 deletions(-)

diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
index c896b5433..7b771fbca 100644
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ b/drivers/event/octeontx2/otx2_tim_worker.h
@@ -115,20 +115,29 @@ tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
 	return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
 }
 
-static __rte_always_inline struct otx2_tim_bkt *
+static __rte_always_inline void
 tim_get_target_bucket(struct otx2_tim_ring * const tim_ring,
-		      const uint32_t rel_bkt, const uint8_t flag)
+		      const uint32_t rel_bkt, struct otx2_tim_bkt **bkt,
+		      struct otx2_tim_bkt **mirr_bkt, const uint8_t flag)
 {
 	const uint64_t bkt_cyc = rte_rdtsc() - tim_ring->ring_start_cyc;
 	uint32_t bucket =
 		rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
 		rel_bkt;
+	uint32_t mirr_bucket = 0;
 
-	if (flag & OTX2_TIM_BKT_MOD)
+	if (flag & OTX2_TIM_BKT_MOD) {
 		bucket = bucket % tim_ring->nb_bkts;
-	if (flag & OTX2_TIM_BKT_AND)
+		mirr_bucket = (bucket + (tim_ring->nb_bkts >> 1)) %
+				tim_ring->nb_bkts;
+	}
+	if (flag & OTX2_TIM_BKT_AND) {
 		bucket = bucket & (tim_ring->nb_bkts - 1);
+		mirr_bucket = (bucket + (tim_ring->nb_bkts >> 1)) &
+				(tim_ring->nb_bkts - 1);
+	}
 
-	return &tim_ring->bkt[bucket];
+	*bkt = &tim_ring->bkt[bucket];
+	*mirr_bkt = &tim_ring->bkt[mirr_bucket];
 }
 
 static struct otx2_tim_ent *
@@ -153,6 +162,7 @@ tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
 
 static struct otx2_tim_ent *
 tim_refill_chunk(struct otx2_tim_bkt * const bkt,
+		 struct otx2_tim_bkt * const mirr_bkt,
 		 struct otx2_tim_ring * const tim_ring)
 {
 	struct otx2_tim_ent *chunk;
@@ -162,8 +172,8 @@ tim_refill_chunk(struct otx2_tim_bkt * const bkt,
 					(void **)&chunk)))
 		return NULL;
 	if (bkt->nb_entry) {
-		*(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
-				bkt->current_chunk) +
+		*(uint64_t *)(((struct otx2_tim_ent *)
+				mirr_bkt->current_chunk) +
 				tim_ring->nb_chunk_slots) =
 			(uintptr_t)chunk;
 	} else {
@@ -180,6 +190,7 @@ tim_refill_chunk(struct otx2_tim_bkt * const bkt,
 
 static struct otx2_tim_ent *
 tim_insert_chunk(struct otx2_tim_bkt * const bkt,
+		 struct otx2_tim_bkt * const mirr_bkt,
 		 struct otx2_tim_ring * const tim_ring)
 {
 	struct otx2_tim_ent *chunk;
@@ -190,7 +201,7 @@ tim_insert_chunk(struct otx2_tim_bkt * const bkt,
 	*(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
 	if (bkt->nb_entry) {
 		*(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
-				bkt->current_chunk) +
+				mirr_bkt->current_chunk) +
 			      tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
 	} else {
 		bkt->first_chunk = (uintptr_t)chunk;
@@ -205,14 +216,15 @@ tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
 		 const struct otx2_tim_ent * const pent,
 		 const uint8_t flags)
 {
+	struct otx2_tim_bkt *mirr_bkt;
 	struct otx2_tim_ent *chunk;
 	struct otx2_tim_bkt *bkt;
 	uint64_t lock_sema;
 	int16_t rem;
 
-	bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
-
 __retry:
+	tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt, flags);
+
 	/* Get Bucket sema*/
 	lock_sema = tim_bkt_fetch_sema_lock(bkt);
 
@@ -232,7 +244,7 @@ tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
 			     : [hbt] "=&r" (hbt_state)
 			     : [w1] "r" ((&bkt->w1))
 			     : "memory"
-			     );
+				);
 #else
 	do {
 		hbt_state = __atomic_load_n(&bkt->w1,
 					__ATOMIC_ACQUIRE);
 	} while (hbt_state & BIT_ULL(33));
 			}
 		}
 	}
 
-	/* Insert the work. */
 	rem = tim_bkt_fetch_rem(lock_sema);
 	if (!rem) {
 		if (flags & OTX2_TIM_ENA_FB)
-			chunk = tim_refill_chunk(bkt, tim_ring);
+			chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
 		if (flags & OTX2_TIM_ENA_DFB)
-			chunk = tim_insert_chunk(bkt, tim_ring);
+			chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
 
 		if (unlikely(chunk == NULL)) {
 			bkt->chunk_remainder = 0;
@@ -264,10 +275,10 @@ tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
 			tim->state = RTE_EVENT_TIMER_ERROR;
 			return -ENOMEM;
 		}
-		bkt->current_chunk = (uintptr_t)chunk;
+		mirr_bkt->current_chunk = (uintptr_t)chunk;
 		bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
 	} else {
-		chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk;
+		chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
 		chunk += tim_ring->nb_chunk_slots - rem;
 	}
 
@@ -291,13 +302,14 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
 		 const struct otx2_tim_ent * const pent,
 		 const uint8_t flags)
 {
+	struct otx2_tim_bkt *mirr_bkt;
 	struct otx2_tim_ent *chunk;
 	struct otx2_tim_bkt *bkt;
 	uint64_t lock_sema;
 	int16_t rem;
 
 __retry:
-	bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
+	tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt, flags);
 
 	/* Get Bucket sema*/
 	lock_sema = tim_bkt_fetch_sema_lock(bkt);
 
@@ -317,7 +329,7 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
 			     : [hbt] "=&r" (hbt_state)
 			     : [w1] "r" ((&bkt->w1))
 			     : "memory"
-			     );
+				);
 #else
 	do {
 		hbt_state = __atomic_load_n(&bkt->w1,
@@ -358,9 +370,9 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
 	} else if (!rem) {
 		/* Only one thread can be here*/
 		if (flags & OTX2_TIM_ENA_FB)
-			chunk = tim_refill_chunk(bkt, tim_ring);
+			chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
 		if (flags & OTX2_TIM_ENA_DFB)
-			chunk = tim_insert_chunk(bkt, tim_ring);
+			chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
 
 		if (unlikely(chunk == NULL)) {
 			tim_bkt_set_rem(bkt, 0);
@@ -375,11 +387,11 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
 		       (-tim_bkt_fetch_rem(lock_sema)))
 			lock_sema = __atomic_load_n(&bkt->w1,
 						__ATOMIC_ACQUIRE);
 
-		bkt->current_chunk = (uintptr_t)chunk;
+		mirr_bkt->current_chunk = (uintptr_t)chunk;
 		__atomic_store_n(&bkt->chunk_remainder,
 				tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE);
 	} else {
-		chunk = (struct otx2_tim_ent *)bkt->current_chunk;
+		chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
 		chunk += tim_ring->nb_chunk_slots - rem;
 		*chunk = *pent;
 	}
@@ -420,6 +432,7 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
 		   const uint16_t nb_timers, const uint8_t flags)
 {
 	struct otx2_tim_ent *chunk = NULL;
+	struct otx2_tim_bkt *mirr_bkt;
 	struct otx2_tim_bkt *bkt;
 	uint16_t chunk_remainder;
 	uint16_t index = 0;
@@ -428,7 +441,7 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
 	uint8_t lock_cnt;
 
 __retry:
-	bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags);
+	tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt, flags);
 
 	/* Only one thread beyond this. */
 	lock_sema = tim_bkt_inc_lock(bkt);
@@ -477,7 +490,7 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
 		crem = tim_ring->nb_chunk_slots - chunk_remainder;
 		if (chunk_remainder && crem) {
 			chunk = ((struct otx2_tim_ent *)
-					(uintptr_t)bkt->current_chunk) + crem;
+					mirr_bkt->current_chunk) + crem;
 
 			index = tim_cpy_wrk(index, chunk_remainder, chunk,
 					tim, ents, bkt);
@@ -486,9 +499,9 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
 		}
 
 		if (flags & OTX2_TIM_ENA_FB)
-			chunk = tim_refill_chunk(bkt, tim_ring);
+			chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
 		if (flags & OTX2_TIM_ENA_DFB)
-			chunk = tim_insert_chunk(bkt, tim_ring);
+			chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
 
 		if (unlikely(chunk == NULL)) {
 			tim_bkt_dec_lock(bkt);
@@ -497,14 +510,14 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
 			return crem;
 		}
 		*(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
-		bkt->current_chunk = (uintptr_t)chunk;
+		mirr_bkt->current_chunk = (uintptr_t)chunk;
 
 		tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
 		rem = nb_timers - chunk_remainder;
 		tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
 		tim_bkt_add_nent(bkt, rem);
	} else {
-		chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk;
+		chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
 		chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
 
 		tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
-- 
2.17.1
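Editorial note (illustrative, not from the patch): with 32B buckets and a
128B cache line, four consecutive buckets share one line, and the target
bucket's line is kept hot by the semaphore atomics; the half-ring mirror
guarantees the chunk-pointer load touches a different line. A tiny
self-contained check of that arithmetic, with a hypothetical ring size:

	#include <assert.h>
	#include <stdint.h>

	#define BKT_SIZE  32U  /* bucket size/alignment, per the commit message */
	#define LINE_SIZE 128U /* cache line size, per the commit message */

	/* Cache line index of bucket b in a line-aligned bucket array. */
	static inline uint32_t
	line_of(uint32_t b)
	{
		return (b * BKT_SIZE) / LINE_SIZE;
	}

	int
	main(void)
	{
		const uint32_t nb_bkts = 64; /* hypothetical power-of-two ring */
		uint32_t b;

		for (b = 0; b < nb_bkts; b++) {
			uint32_t mirr = (b + (nb_bkts >> 1)) & (nb_bkts - 1);

			/* The mirror never shares a line with its target, so
			 * reading mirr_bkt->current_chunk cannot land on the
			 * line being bounced by the bucket-lock atomics. */
			assert(line_of(b) != line_of(mirr));
		}
		return 0;
	}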