* [dpdk-stable] [PATCH v6] ring: guarantee load/load order in enqueue and dequeue
[not found] ` <1510284642-7442-1-git-send-email-hejianet@gmail.com>
@ 2017-11-10 3:30 ` Jia He
2017-11-12 17:51 ` Thomas Monjalon
0 siblings, 1 reply; 2+ messages in thread
From: Jia He @ 2017-11-10 3:30 UTC (permalink / raw)
To: jerin.jacob, dev, olivier.matz
Cc: konstantin.ananyev, bruce.richardson, jianbo.liu, hemant.agrawal,
Jia He, Jia He, jie2.liu, bing.zhao, stable
We observed an rte panic in mbuf_autotest on our Qualcomm arm64 server
(Amberwing).
Root cause:
In __rte_ring_move_cons_head()
...
        do {
                /* Restore n as it may change every loop */
                n = max;

                *old_head = r->cons.head;                /* 1st load */
                const uint32_t prod_tail = r->prod.tail; /* 2nd load */
On weakly ordered architectures (PowerPC, ARM), the 2nd load might be
reordered before the 1st load, which makes *entries bigger than intended.
This nasty reordering messes enqueue/dequeue up.
cpu1(producer)          cpu2(consumer)          cpu3(consumer)
                                                load r->prod.tail
in enqueue:
load r->cons.tail
load r->prod.head

store r->prod.tail

                        load r->cons.head
                        load r->prod.tail
                        ...
                        store r->cons.{head,tail}
                                                load r->cons.head
Then r->cons.head will be bigger than prod_tail, which makes *entries very
big, and the consumer will go forward incorrectly.
After this patch, the old cons.head will be recalculated after a failure of
rte_atomic32_cmpset().
There is no such issue on x86, because x86 has a strongly ordered memory
model. And rte_smp_rmb() has no runtime performance impact on x86, so we
keep the same code without architecture-specific conditionals.
Signed-off-by: Jia He <jia.he@hxt-semitech.com>
Signed-off-by: jie2.liu@hxt-semitech.com
Signed-off-by: bing.zhao@hxt-semitech.com
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@arm.com>
Cc: stable@dpdk.org
---
lib/librte_ring/rte_ring.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 5e9b3b7..e924438 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -409,6 +409,12 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
n = max;
*old_head = r->prod.head;
+
+ /* add rmb barrier to avoid load/load reorder in weak
+ * memory model. It is noop on x86
+ */
+ rte_smp_rmb();
+
const uint32_t cons_tail = r->cons.tail;
/*
* The subtraction is done between two unsigned 32bits value
@@ -517,6 +523,12 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
n = max;
*old_head = r->cons.head;
+
+ /* add rmb barrier to avoid load/load reorder in weak
+ * memory model. It is noop on x86
+ */
+ rte_smp_rmb();
+
const uint32_t prod_tail = r->prod.tail;
/* The subtraction is done between two unsigned 32bits value
* (the result is always modulo 32 bits even if we have
--
2.7.4
* Re: [dpdk-stable] [PATCH v6] ring: guarantee load/load order in enqueue and dequeue
2017-11-10 3:30 ` [dpdk-stable] [PATCH v6] ring: guarantee load/load order in enqueue and dequeue Jia He
@ 2017-11-12 17:51 ` Thomas Monjalon
0 siblings, 0 replies; 2+ messages in thread
From: Thomas Monjalon @ 2017-11-12 17:51 UTC (permalink / raw)
To: Jia He, Jia He
Cc: stable, jerin.jacob, dev, olivier.matz, konstantin.ananyev,
bruce.richardson, jianbo.liu, hemant.agrawal, jie2.liu,
bing.zhao
10/11/2017 04:30, Jia He:
Applied, thanks