DPDK patches and discussions
From: Wathsala Vithanage <wathsala.vithanage@arm.com>
To: honnappa.nagarahalli@arm.com, konstantin.v.ananyev@yandex.ru,
	thomas@monjalon.net, ruifeng.wang@arm.com
Cc: dev@dpdk.org, nd@arm.com,
	Wathsala Vithanage <wathsala.vithanage@arm.com>
Subject: [RFC] ring: remove unnecessary fences in C11 ring for performance
Date: Mon, 12 Jun 2023 19:47:15 +0000	[thread overview]
Message-ID: <20230612194716.1050379-1-wathsala.vithanage@arm.com> (raw)

The ACQUIRE fence between the two loads establishes program order to
prevent the use of an outdated consumer or producer tail value in the
computation of free_entries and entries in the two functions
respectively. However, (A) the problem described above cannot be solved
by establishing program order with a fence, and (B) the resulting
values of free_entries and entries are safe to use even when computed
from an outdated tail value.

(A) The first example below shows the load of old_head and the load of
cons_tail being reordered, so that free_entries is computed from an
outdated consumer tail. The second is a counter example in which the
same outdated consumer tail is used to compute free_entries even with
the ACQUIRE fence in place.

Initial values for both (i) the example and (ii) the counter example:
capacity = 32
r->cons.tail = 5
r->cons.head = 5
r->prod.head = 10
r->prod.tail = 10

Assume CPU2 adds 2 elements to the ring and CPU3 removes 5 elements
from the ring.

(i) Example: The two loads are reversed. Program order violated.
---------------------------------------------------------------------
   cpu1(producer)        cpu2(producer)          cpu3(consumer)
load r->cons.tail=5
      ^
      |
      |
      |              load r->prod.head=10
      |              load r->cons.tail=5
      |            store r->prod.head(+n=2)=12
   stalled             <-- enqueue ----->
      |            store r->prod.tail(+n)=12
      |
      |
      |                                      load r->cons.head  = 5
      |                                      load r->prod.tail  = 12
      |                                   store r->cons.head(+n=5)=10
      |                                         <...dequeue.....>
      |                                     store r->cons.tail(+n)=10
      |
      v
load r->prod.head=12

The above shows that the value read from r->cons.tail into cons_tail
on cpu1 is 5, which is outdated because r->cons.tail was later updated
by cpu3. In this instance the state is...
CPU1: cons_tail=5, old_head=12, free_entries=(32+5-12)=25
RING: r->cons.tail=10, r->prod.head=12

(ii) Example: ACQUIRE fence between the loads. Program order preserved.
-----------------------------------------------------------------------
   cpu1(producer)        cpu2(producer)          cpu3(consumer)
      |
      |
      |              load r->prod.head=10
      |              load r->cons.tail=5
      |            store r->prod.head(+n=2)=12
      |               <-- enqueue ----->
      |            store r->prod.tail(+n)=12
      |
      |
load r->prod.head=12
      ^
      |
      |                                      load r->cons.head  = 5
      |                                      load r->prod.tail  = 12
   stalled                                 store r->cons.head(+n=5)=10
      |                                        <...dequeue.....>
      |                                               ^
      |                                               |
      v                                               |
load r->cons.tail=5                                stalled
                                                      |
                                                      |
                                                      v
                                           store r->cons.tail(+n)= 10

The above shows that the value read from r->cons.tail into cons_tail
on cpu1 is again 5, which is outdated because r->cons.tail was later
updated by cpu3. In this instance, with the ACQUIRE fence, the state
is...
CPU1: cons_tail=5, old_head=12, free_entries=(32+5-12)=25
RING: r->cons.tail=10, r->prod.head=12

Example (ii), even with an ACQUIRE fence preventing reordering of the
loads of old_head and cons_tail, produces the same result for the
free_entries variable, and the system ends up in the same state (up to
that point) as in example (i). The same can be shown for the consumer
threads as well. Therefore, the ACQUIRE fences do not solve the
problem they were intended to solve.
If each thread were required to use the latest r->cons.head,
r->cons.tail, r->prod.head, and r->prod.tail values in its computation
of free_entries and entries, a real-time ordering of all the threads
would be needed. Establishing program order within a single thread
only provides sequential consistency, which does not imply a real-time
ordering across threads.

(B) The values computed for free_entries and entries are always safe
even without the ACQUIRE fence, because the resulting values are equal
to or less than the actual number of free slots or elements in the
ring. This can be seen in both examples (i) and (ii), where the
free_entries computation produced 25 even though the ring had 30 free
slots. Enqueueing 25 elements into a ring that has space for 30 is
safe. The same can be shown for the consumer threads as well.
Therefore, the ACQUIRE fence is not required for safety.

Due to (A) and (B) above, the ACQUIRE fence between the load of the
head and the load of the tail, in both the consumer and producer head
updates, can be removed to improve the performance of the ring
library.

Performance on Arm N1
Gain relative to generic implementation
+-------------------------------------------------------------------+
| Bulk enq/dequeue count on size 8 (Arm N1)			    |
+-------------------------------------------------------------------+
| Generic             | C11 atomics          | C11 atomics improved |
+-------------------------------------------------------------------+
| Total count: 766730 | Total count: 651686  | Total count: 728546  |
|                     |        Gain: -15%    |        Gain: -4%     |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| Bulk enq/dequeue count on size 32 (Arm N1)                        |
+-------------------------------------------------------------------+
| Generic             | C11 atomics          | C11 atomics improved |
+-------------------------------------------------------------------+
| Total count: 816745 | Total count: 646385  | Total count: 784019  |
|                     |        Gain: -21%    |        Gain: -4%     |
+-------------------------------------------------------------------+

Performance on x86-64 Cascade Lake
Gain relative to generic implementation
+-------------------------------------------------------------------+
| Bulk enq/dequeue count on size 8                                  |
+-------------------------------------------------------------------+
| Generic             | C11 atomics          | C11 atomics improved |
+-------------------------------------------------------------------+
| Total count: 165786 | Total count: 168971  | Total count: 168811  |
|                     |        Gain: 1.9%    |        Gain: 1.8%    |
+-------------------------------------------------------------------+
+-------------------------------------------------------------------+
| Bulk enq/dequeue count on size 32                                 |
+-------------------------------------------------------------------+
| Generic             | C11 atomics          | C11 atomics improved |
+-------------------------------------------------------------------+
| Total count: 150729 | Total count: 153162  | Total count: 150303  |
|                     |        Gain: 1.6%    |        Gain: -0.2%   |
+-------------------------------------------------------------------+

Wathsala Vithanage (1):
  ring: improving performance with C11

 .mailmap                    | 1 +
 lib/ring/rte_ring_c11_pvt.h | 6 ------
 2 files changed, 1 insertion(+), 6 deletions(-)

-- 
2.25.1

