From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from foss.arm.com (usa-sjc-mx-foss1.foss.arm.com [217.140.101.70])
	by dpdk.org (Postfix) with ESMTP id 938351B3D2;
	Fri, 2 Nov 2018 12:21:44 +0100 (CET)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CBBE5A78;
	Fri, 2 Nov 2018 04:21:43 -0700 (PDT)
Received: from net-arm-c2400.shanghai.arm.com (net-arm-c2400.shanghai.arm.com
	[10.169.42.81]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id
	EBEA13F71E; Fri, 2 Nov 2018 04:21:41 -0700 (PDT)
From: Gavin Hu
To: dev@dpdk.org
Cc: thomas@monjalon.net, stephen@networkplumber.org, olivier.matz@6wind.com,
	chaozhu@linux.vnet.ibm.com, bruce.richardson@intel.com,
	konstantin.ananyev@intel.com, jerin.jacob@caviumnetworks.com,
	Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com, stable@dpdk.org
Date: Fri, 2 Nov 2018 19:21:28 +0800
Message-Id: <1541157688-40012-3-git-send-email-gavin.hu@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1541157688-40012-1-git-send-email-gavin.hu@arm.com>
References: <1541157688-40012-1-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1541066031-29125-1-git-send-email-gavin.hu@arm.com>
References: <1541066031-29125-1-git-send-email-gavin.hu@arm.com>
Subject: [dpdk-dev] [PATCH v5 2/2] ring: move the atomic load of head above the loop
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 02 Nov 2018 11:21:45 -0000

In __rte_ring_move_prod_head, move the __atomic_load_n up and out of
the do {} while loop: on failure the compare-exchange updates old_head
with the current value, so reloading it at the top of every iteration
is costly and not necessary.

This helps latency a little, by about 1~5%.

Test results with the patch (two cores):
SP/SC bulk enq/dequeue (size: 8): 5.64
MP/MC bulk enq/dequeue (size: 8): 9.58
SP/SC bulk enq/dequeue (size: 32): 1.98
MP/MC bulk enq/dequeue (size: 32): 2.30

Fixes: 39368ebfc606 ("ring: introduce C11 memory model barrier option")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Steve Capper
Reviewed-by: Ola Liljedahl
Reviewed-by: Jia He
Acked-by: Jerin Jacob
Tested-by: Jerin Jacob
---
 doc/guides/rel_notes/release_18_11.rst |  7 +++++++
 lib/librte_ring/rte_ring_c11_mem.h     | 10 ++++------
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 376128f..b68afab 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -69,6 +69,13 @@ New Features
   checked out against that dma mask and rejected if out of range. If more than
   one device has addressing limitations, the dma mask is the more restricted one.
 
+* **Updated the ring library with C11 memory model.**
+
+  Updated the ring library with C11 memory model, in our tests the changes
+  decreased latency by 27~29% and 3~15% for MPMC and SPSC cases respectively.
+  The real improvements may vary with the number of contending lcores and the
+  size of ring.
+
 * **Added hot-unplug handle mechanism.**
 
   ``rte_dev_hotplug_handle_enable`` and ``rte_dev_hotplug_handle_disable`` are
diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
index 52da95a..7bc74a4 100644
--- a/lib/librte_ring/rte_ring_c11_mem.h
+++ b/lib/librte_ring/rte_ring_c11_mem.h
@@ -61,13 +61,11 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 	unsigned int max = n;
 	int success;
 
+	*old_head = __atomic_load_n(&r->prod.head, __ATOMIC_ACQUIRE);
 	do {
 		/* Reset n to the initial burst count */
 		n = max;
 
-		*old_head = __atomic_load_n(&r->prod.head,
-					__ATOMIC_ACQUIRE);
-
 		/* load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
 		 */
@@ -93,6 +91,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 		if (is_sp)
 			r->prod.head = *new_head, success = 1;
 		else
+			/* on failure, *old_head is updated */
 			success = __atomic_compare_exchange_n(&r->prod.head,
 					old_head, *new_head,
 					0, __ATOMIC_ACQUIRE,
@@ -135,13 +134,11 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 	int success;
 
 	/* move cons.head atomically */
+	*old_head = __atomic_load_n(&r->cons.head, __ATOMIC_ACQUIRE);
 	do {
 		/* Restore n as it may change every loop */
 		n = max;
 
-		*old_head = __atomic_load_n(&r->cons.head,
-					__ATOMIC_ACQUIRE);
-
 		/* this load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
 		 */
@@ -166,6 +163,7 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		if (is_sc)
 			r->cons.head = *new_head, success = 1;
 		else
+			/* on failure, *old_head will be updated */
 			success = __atomic_compare_exchange_n(&r->cons.head,
 							old_head, *new_head,
 							0, __ATOMIC_ACQUIRE,
-- 
2.7.4
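
For reference, below is a minimal standalone sketch of the pattern this patch
applies, separate from the patch itself. The names (head, claim_slots) are
invented for illustration and do not come from the DPDK ring code. The point
it demonstrates: with the GCC/Clang __atomic builtins, a failed
__atomic_compare_exchange_n writes the value it observed back into the
"expected" argument, so the acquire load only needs to happen once, before
the retry loop, instead of on every iteration.

	/*
	 * Sketch only: hoist the initial acquire load out of the CAS retry
	 * loop.  'head' stands in for something like r->prod.head; the
	 * function name is made up for this example.
	 */
	#include <stdbool.h>
	#include <stdint.h>

	static uint32_t head;

	static uint32_t
	claim_slots(uint32_t n)
	{
		uint32_t old, new;
		bool ok;

		/* one acquire load before entering the loop */
		old = __atomic_load_n(&head, __ATOMIC_ACQUIRE);
		do {
			new = old + n;
			/* on failure, 'old' is refreshed with the current
			 * value of 'head', so no explicit reload is needed
			 * at the top of the loop.
			 */
			ok = __atomic_compare_exchange_n(&head, &old, new,
					0 /* strong */, __ATOMIC_ACQUIRE,
					__ATOMIC_RELAXED);
		} while (!ok);

		return old;	/* caller owns slots [old, old + n) */
	}

This mirrors what the patch does in __rte_ring_move_prod_head and
__rte_ring_move_cons_head, where the load previously sat inside the loop and
was re-executed on every retry.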