From: Jia He <hejianet@gmail.com>
To: jerin.jacob@caviumnetworks.com, dev@dpdk.org, olivier.matz@6wind.com
Cc: konstantin.ananyev@intel.com, bruce.richardson@intel.com,
	jianbo.liu@arm.com, hemant.agrawal@nxp.com, Jia He,
	jie2.liu@hxt-semitech.com, bing.zhao@hxt-semitech.com, stable@dpdk.org
Date: Fri, 10 Nov 2017 03:30:42 +0000
Message-Id: <1510284642-7442-2-git-send-email-hejianet@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1510284642-7442-1-git-send-email-hejianet@gmail.com>
References: <1510278669-8489-1-git-send-email-hejianet@gmail.com>
	<1510284642-7442-1-git-send-email-hejianet@gmail.com>
Subject: [dpdk-dev] [PATCH v6] ring: guarantee load/load order in enqueue and dequeue

We observed an rte panic from mbuf_autotest on our Qualcomm arm64 server
(Amberwing).

Root cause:
In __rte_ring_move_cons_head()
...
        do {
                /* Restore n as it may change every loop */
                n = max;

                *old_head = r->cons.head;                 //1st load
                const uint32_t prod_tail = r->prod.tail;  //2nd load

On architectures with a weak memory model (PowerPC, ARM), the 2nd load can
be reordered before the 1st load, which makes *entries larger than it
should be. This reordering corrupts enqueue/dequeue:

cpu1(producer)          cpu2(consumer)          cpu3(consumer)
                        load r->prod.tail
in enqueue:
load r->cons.tail
load r->prod.head

store r->prod.tail

                                                load r->cons.head
                                                load r->prod.tail
                        ...
                        store r->cons.{head,tail}
                                                load r->cons.head

Then r->cons.head is bigger than prod_tail, which makes *entries very
large, and the consumer moves forward incorrectly.

After this patch, the old cons.head is recalculated after a failed
rte_atomic32_cmpset().

There is no such issue on x86 because x86 has a strong memory-ordering
model. But rte_smp_rmb() has no runtime performance impact on x86, so the
code is kept identical rather than adding architecture-specific variants.

Signed-off-by: Jia He
Signed-off-by: jie2.liu@hxt-semitech.com
Signed-off-by: bing.zhao@hxt-semitech.com
Acked-by: Jerin Jacob
Acked-by: Jianbo Liu
Cc: stable@dpdk.org
---
 lib/librte_ring/rte_ring.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 5e9b3b7..e924438 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -409,6 +409,12 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
 		n = max;
 
 		*old_head = r->prod.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t cons_tail = r->cons.tail;
 		/*
 		 *  The subtraction is done between two unsigned 32bits value
@@ -517,6 +523,12 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		n = max;
 
 		*old_head = r->cons.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t prod_tail = r->prod.tail;
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have
-- 
2.7.4
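
Postscript for reviewers: a minimal, self-contained sketch of the hazard
the barrier closes. struct toy_ring and toy_ring_entries() below are
hypothetical simplifications, not the real struct rte_ring layout; the
sketch only shows why cons.head must be observed before prod.tail when
computing the number of available entries.

    /* Hypothetical simplified ring, for illustration only. */
    #include <stdint.h>
    #include <rte_atomic.h>        /* rte_smp_rmb() */

    struct toy_ring {
            volatile uint32_t prod_tail;  /* last slot published by producers */
            volatile uint32_t cons_head;  /* next slot taken by consumers */
    };

    /* Entries a consumer may dequeue (free-running 32-bit indexes). */
    static inline uint32_t
    toy_ring_entries(const struct toy_ring *r)
    {
            uint32_t cons_head = r->cons_head;   /* 1st load */

            /*
             * Without this barrier, arm/ppc may hoist the prod_tail load
             * above the cons_head load. A stale prod_tail paired with a
             * newer cons_head makes the unsigned subtraction below wrap
             * around, reporting a huge entry count - the corruption
             * described in the commit message.
             */
            rte_smp_rmb();

            uint32_t prod_tail = r->prod_tail;   /* 2nd load */

            return prod_tail - cons_head;        /* modulo 2^32 */
    }

Since rte_smp_rmb() reduces to a compiler barrier on x86, keeping one code
path for all architectures, as the patch does, costs nothing there.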