From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, stephen@networkplumber.org, olivier.matz@6wind.com,
	chaozhu@linux.vnet.ibm.com, bruce.richardson@intel.com,
	konstantin.ananyev@intel.com, jerin.jacob@caviumnetworks.com,
	Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com, stable@dpdk.org
Date: Fri, 2 Nov 2018 19:21:27 +0800
Message-Id: <1541157688-40012-2-git-send-email-gavin.hu@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1541157688-40012-1-git-send-email-gavin.hu@arm.com>
References: <1541157688-40012-1-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1541066031-29125-1-git-send-email-gavin.hu@arm.com>
References: <1541066031-29125-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-stable] [PATCH v5 1/2] ring: synchronize the load and store of the tail

Synchronize the load-acquire of the tail and the store-release within
update_tail: the store-release ensures that all ring operations,
enqueue or dequeue, are visible to observers on the other side as soon
as they see the updated tail. The load-acquire is needed because a
data dependency alone is not a reliable ordering guarantee; the
compiler may break it, for example by caching values in temporaries to
improve performance. When computing free_entries and avail_entries,
load the heads and tails with atomic semantics instead.

The patch was benchmarked with test/ring_perf_autotest. It reduces
enqueue/dequeue latency by 5% to 27.6% with two lcores; the actual gain
depends on the number of lcores, the depth of the ring, and whether the
ring is SP/SC or MP/MC. With one lcore it still improves slightly, by
about 3% to 4%. The largest gain is for MP/MC with two lcores and a
ring size of 32, where latency drops by (3.26 - 2.36) / 3.26 = 27.6%.

This patch is a bug fix; the performance improvement is a bonus. In our
analysis the improvement comes from cacheline pre-filling after
hoisting the load-acquire up above __atomic_compare_exchange_n.
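For illustration, below is a minimal standalone sketch of the pairing
described above; it is not the DPDK code itself, and the struct and
function names (headtail, sketch_update_tail, sketch_load_opposite_tail)
are made up for this example. The point is that slot writes done before
the store-release on one side's tail are guaranteed visible after the
load-acquire of that tail on the other side.

#include <stdint.h>

struct headtail { uint32_t head; uint32_t tail; };

/* updater side: publish progress; all slot writes issued before this
 * store-release become visible to whoever acquires the tail. */
static inline void
sketch_update_tail(struct headtail *ht, uint32_t new_val)
{
	__atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);
}

/* opposite side: read the tail with load-acquire; a plain load that
 * relies only on a data dependency could be cached in a temporary or
 * reordered by the compiler. */
static inline uint32_t
sketch_load_opposite_tail(const struct headtail *ht)
{
	return __atomic_load_n(&ht->tail, __ATOMIC_ACQUIRE);
}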
The test command:

$ sudo ./test/test/test -l 16-19,44-47,72-75,100-103 -n 4 \
	--socket-mem=1024 -- -i

Test result with this patch (two lcores):

SP/SC bulk enq/dequeue (size: 8): 5.86
MP/MC bulk enq/dequeue (size: 8): 10.15
SP/SC bulk enq/dequeue (size: 32): 1.94
MP/MC bulk enq/dequeue (size: 32): 2.36

For comparison, the test result without this patch:

SP/SC bulk enq/dequeue (size: 8): 6.67
MP/MC bulk enq/dequeue (size: 8): 13.12
SP/SC bulk enq/dequeue (size: 32): 2.04
MP/MC bulk enq/dequeue (size: 32): 3.26

Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Steve Capper
Reviewed-by: Ola Liljedahl
Reviewed-by: Jia He
Acked-by: Jerin Jacob
Tested-by: Jerin Jacob
---
 lib/librte_ring/rte_ring_c11_mem.h | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
index 94df3c4..52da95a 100644
--- a/lib/librte_ring/rte_ring_c11_mem.h
+++ b/lib/librte_ring/rte_ring_c11_mem.h
@@ -57,6 +57,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 		uint32_t *free_entries)
 {
 	const uint32_t capacity = r->capacity;
+	uint32_t cons_tail;
 	unsigned int max = n;
 	int success;
 
@@ -67,13 +68,18 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 		*old_head = __atomic_load_n(&r->prod.head,
 					__ATOMIC_ACQUIRE);
 
-		/*
-		 * The subtraction is done between two unsigned 32bits value
+		/* load-acquire synchronize with store-release of ht->tail
+		 * in update_tail.
+		 */
+		cons_tail = __atomic_load_n(&r->cons.tail,
+					__ATOMIC_ACQUIRE);
+
+		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
 		 * *old_head > cons_tail). So 'free_entries' is always between 0
 		 * and capacity (which is < size).
 		 */
-		*free_entries = (capacity + r->cons.tail - *old_head);
+		*free_entries = (capacity + cons_tail - *old_head);
 
 		/* check that we have enough room in ring */
 		if (unlikely(n > *free_entries))
@@ -125,21 +131,29 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		uint32_t *entries)
 {
 	unsigned int max = n;
+	uint32_t prod_tail;
 	int success;
 
 	/* move cons.head atomically */
 	do {
 		/* Restore n as it may change every loop */
 		n = max;
+
 		*old_head = __atomic_load_n(&r->cons.head,
 					__ATOMIC_ACQUIRE);
 
+		/* this load-acquire synchronize with store-release of ht->tail
+		 * in update_tail.
+		 */
+		prod_tail = __atomic_load_n(&r->prod.tail,
+					__ATOMIC_ACQUIRE);
+
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
 		 * cons_head > prod_tail). So 'entries' is always between 0
 		 * and size(ring)-1.
 		 */
-		*entries = (r->prod.tail - *old_head);
+		*entries = (prod_tail - *old_head);
 
 		/* Set the actual entries for dequeue */
 		if (n > *entries)
-- 
2.7.4
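
P.S. As a side note on the comments kept in the diff: the entry counts
rely on unsigned 32-bit wrap-around, so the subtraction stays correct
even after the indices wrap past UINT32_MAX. A tiny self-contained
example (the index values are illustrative only, not from the patch):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* producer index wrapped past UINT32_MAX, consumer has not */
	uint32_t prod_tail = 5;
	uint32_t cons_head = UINT32_MAX - 2;

	/* the subtraction is modulo 2^32, so the distance is still
	 * right: (5 - (2^32 - 3)) mod 2^32 == 8 entries */
	uint32_t entries = prod_tail - cons_head;
	assert(entries == 8);
	return 0;
}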