To: Jerin Jacob
Cc: dev@dpdk.org, olivier.matz@6wind.com, konstantin.ananyev@intel.com,
 bruce.richardson@intel.com, jianbo.liu@arm.com, hemant.agrawal@nxp.com,
 jie2.liu@hxt-semitech.com, bing.zhao@hxt-semitech.com, jia.he@hxt-semitech.com
From: Jia He
Message-ID: <25192429-8369-ac3d-44b0-c1b1d7182ef0@gmail.com>
Date: Fri, 3 Nov 2017 09:46:40 +0800
In-Reply-To: <20171102172337.GB1478@jerin>
References: <1509612210-5499-1-git-send-email-hejianet@gmail.com> <20171102172337.GB1478@jerin>
Subject: Re: [dpdk-dev] [PATCH v2] ring: guarantee ordering of cons/prod loading when doing

Hi Jerin

On 11/3/2017 1:23 AM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Thu, 2 Nov 2017 08:43:30 +0000
>> From: Jia He
>> To: jerin.jacob@caviumnetworks.com, dev@dpdk.org, olivier.matz@6wind.com
>> Cc: konstantin.ananyev@intel.com, bruce.richardson@intel.com,
>>  jianbo.liu@arm.com, hemant.agrawal@nxp.com, Jia He,
>>  jie2.liu@hxt-semitech.com, bing.zhao@hxt-semitech.com,
>>  jia.he@hxt-semitech.com
>> Subject: [PATCH v2] ring: guarantee ordering of cons/prod loading when doing
>> X-Mailer: git-send-email 2.7.4
>>
>> We hit an rte panic in mbuf_autotest on our Qualcomm arm64 server.
>> For the possible race condition, please refer to [1].
> Hi Jia,
>
> In addition to Konstantin's comments, please find some review
> comments below.
>> Furthermore, there are 2 options as suggested by Jerin:
>> 1. use rte_smp_rmb
>> 2. use load_acquire/store_release (refer to [2]).
>> CONFIG_RTE_ATOMIC_ACQUIRE_RELEASE_BARRIER_PREFER is provided, and by
> I think a better name would be CONFIG_RTE_RING_USE_C11_MEM_MODEL
> or something like that.
Ok, but then how do we distinguish between the following 2 options?
CONFIG_RTE_RING_USE_C11_MEM_MODEL alone doesn't seem to be enough:
1. use rte_smp_rmb
2. use load_acquire/store_release (refer to [2]).
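
To make the two options concrete, here is a minimal sketch of the
consumer-side loads (illustration only, with simplified stand-in types
rather than the real rte_ring structures):

/* Illustration only -- simplified stand-ins for the rte_ring types. */
#include <stdint.h>

struct ht { uint32_t head; uint32_t tail; };
struct ring { struct ht prod; struct ht cons; };

/* Option 1: plain loads, ordered by an explicit read barrier
 * (rte_smp_rmb() in DPDK; a C11 acquire fence keeps the sketch
 * self-contained here). */
static inline uint32_t cons_loads_rmb(struct ring *r, uint32_t *old_head)
{
	*old_head = r->cons.head;
	__atomic_thread_fence(__ATOMIC_ACQUIRE); /* load/load ordering */
	return r->prod.tail;
}

/* Option 2: one-direction barrier -- acquire-load cons.head so the
 * later prod.tail load cannot be reordered before it. */
static inline uint32_t cons_loads_acq(struct ring *r, uint32_t *old_head)
{
	*old_head = __atomic_load_n(&r->cons.head, __ATOMIC_ACQUIRE);
	return r->prod.tail;
}

The open question is just what the build-time knob that selects between
these two flavours should be called.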
>> default it is n;
>>
>> ---
>> Changelog:
>> V2: let users choose whether to use load_acquire/store_release
>> V1: rte_smp_rmb() between the 2 loads
>>
>> Signed-off-by: Jia He
>> Signed-off-by: jie2.liu@hxt-semitech.com
>> Signed-off-by: bing.zhao@hxt-semitech.com
>> Signed-off-by: jia.he@hxt-semitech.com
>> Suggested-by: jerin.jacob@caviumnetworks.com
>> ---
>>  lib/librte_ring/Makefile           |  4 +++-
>>  lib/librte_ring/rte_ring.h         | 38 ++++++++++++++++++++++++------
>>  lib/librte_ring/rte_ring_arm64.h   | 48 ++++++++++++++++++++++++++++++++++++++
>>  lib/librte_ring/rte_ring_generic.h | 45 +++++++++++++++++++++++++++++++++++
>>  4 files changed, 127 insertions(+), 8 deletions(-)
>>  create mode 100644 lib/librte_ring/rte_ring_arm64.h
>>  create mode 100644 lib/librte_ring/rte_ring_generic.h
>>
>> diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile
>> index e34d9d9..fa57a86 100644
>> --- a/lib/librte_ring/Makefile
>> +++ b/lib/librte_ring/Makefile
>> @@ -45,6 +45,8 @@ LIBABIVER := 1
>>  SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
>>
>>  # install includes
>> -SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
>> +SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \
>> +					rte_ring_arm64.h \
> It is really not specific to arm64. We could rename it to rte_ring_c11_mem.h or
> something like that to reflect that the implementation is based on the C11
> memory model.
>
>
>> +					rte_ring_generic.h
>>
>>  include $(RTE_SDK)/mk/rte.lib.mk
>> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
>> index 5e9b3b7..943b1f9 100644
>> --- a/lib/librte_ring/rte_ring.h
>> +++ b/lib/librte_ring/rte_ring.h
>> @@ -103,6 +103,18 @@ extern "C" {
>>  #include
>>  #include
>>
>> +/* On strong memory models (e.g. x86), there is no need to add an rmb()
>> + * between two loads.
>> + * On weak models (powerpc/arm), there are 2 choices for the users:
>> + * 1. use an rmb() memory barrier
>> + * 2. use one-direction load_acquire/store_release barriers
>> + * It depends on performance test results. */
>> +#ifdef RTE_ARCH_ARM64
> s/RTE_ARCH_ARM64/RTE_RING_USE_C11_MEM_MODEL and update the generic arm64 config.
> That way it can be used by other architectures like ppc if they choose to do so.
>
>
>> +#include "rte_ring_arm64.h"
>> +#else
>> +#include "rte_ring_generic.h"
>> +#endif
>> +
>>  #define RTE_TAILQ_RING_NAME "RTE_RING"
>>
>>  enum rte_ring_queue_behavior {
>> @@ -368,7 +380,7 @@ update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
>>  	while (unlikely(ht->tail != old_val))
>>  		rte_pause();
>>
>> -	ht->tail = new_val;
>> +	arch_rte_atomic_store(&ht->tail, new_val, __ATOMIC_RELEASE);
>>  }
>>
>>  /**
>> @@ -408,7 +420,8 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
>>  		/* Reset n to the initial burst count */
>>  		n = max;
>>
>> -		*old_head = r->prod.head;
>> +		*old_head = arch_rte_atomic_load(&r->prod.head,
>> +						__ATOMIC_ACQUIRE);
> Same as Konstantin's comments, i.e. move this function to the C11 memory model
> header file.
>
>>  		const uint32_t cons_tail = r->cons.tail;
>>  		/*
>>  		 * The subtraction is done between two unsigned 32bits value
>> @@ -430,8 +443,10 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
>>  		if (is_sp)
>>  			r->prod.head = *new_head, success = 1;
>>  		else
>> -			success = rte_atomic32_cmpset(&r->prod.head,
>> -					*old_head, *new_head);
>> +			success = arch_rte_atomic32_cmpset(&r->prod.head,
>> +					old_head, *new_head,
>> +					0, __ATOMIC_ACQUIRE,
>> +					__ATOMIC_RELAXED);
>>  	} while (unlikely(success == 0));
>>  	return n;
>>  }
>> @@ -470,7 +485,10 @@ __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
>>  		goto end;
>>
>>  	ENQUEUE_PTRS(r, &r[1], prod_head, obj_table, n, void *);
>> +
>> +#ifndef RTE_ATOMIC_ACQUIRE_RELEASE_BARRIER_PREFER
> This #define will be removed if the function moves.
>
>>  	rte_smp_wmb();
>> +#endif
>>
>>  	update_tail(&r->prod, prod_head, prod_next, is_sp);
>>  end:
>> @@ -516,7 +534,8 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>>  		/* Restore n as it may change every loop */
>>  		n = max;
>>
>> -		*old_head = r->cons.head;
>> +		*old_head = arch_rte_atomic_load(&r->cons.head,
>> +						__ATOMIC_ACQUIRE);
>>  		const uint32_t prod_tail = r->prod.tail;
>>  		/* The subtraction is done between two unsigned 32bits value
>>  		 * (the result is always modulo 32 bits even if we have
>> @@ -535,8 +554,10 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>>  		if (is_sc)
>>  			r->cons.head = *new_head, success = 1;
>>  		else
>> -			success = rte_atomic32_cmpset(&r->cons.head, *old_head,
>> -					*new_head);
>> +			success = arch_rte_atomic32_cmpset(&r->cons.head,
>> +					old_head, *new_head,
>> +					0, __ATOMIC_ACQUIRE,
>> +					__ATOMIC_RELAXED);
>>  	} while (unlikely(success == 0));
>>  	return n;
>>  }
>> @@ -575,7 +596,10 @@ __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
>>  		goto end;
>>
>>  	DEQUEUE_PTRS(r, &r[1], cons_head, obj_table, n, void *);
>> +
>> +#ifndef RTE_ATOMIC_ACQUIRE_RELEASE_BARRIER_PREFER
> This #define will be removed if the function moves.
>
>>  	rte_smp_rmb();
>> +#endif

-- 
Cheers,
Jia
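
To illustrate Jerin's suggestions above, a rough sketch of how the selection
and the wrappers could end up looking follows. This is an assumption for
discussion only: the contents of the new headers are not part of the quoted
hunks, and the wrapper definitions below merely mirror the arch_rte_atomic_*
calls used in the patch, mapped onto the GCC __atomic builtins.

/* rte_ring.h (sketch): pick the implementation via the suggested option. */
#ifdef RTE_RING_USE_C11_MEM_MODEL
#include "rte_ring_c11_mem.h"
#else
#include "rte_ring_generic.h"
#endif

/* rte_ring_c11_mem.h (sketch): wrappers on top of the __atomic builtins. */
#define arch_rte_atomic_load(ptr, memorder) \
	__atomic_load_n((ptr), (memorder))
#define arch_rte_atomic_store(ptr, val, memorder) \
	__atomic_store_n((ptr), (val), (memorder))
#define arch_rte_atomic32_cmpset(dst, exp, src, weak, success, failure) \
	__atomic_compare_exchange_n((dst), (exp), (src), (weak), \
			(success), (failure))

/* rte_ring_generic.h (sketch, a separate file): plain accesses that ignore
 * the memory-order argument and keep relying on the existing rte_smp_*mb()
 * barriers. */
#define arch_rte_atomic_load(ptr, memorder) (*(ptr))
#define arch_rte_atomic_store(ptr, val, memorder) (*(ptr) = (val))
#define arch_rte_atomic32_cmpset(dst, exp, src, weak, success, failure) \
	rte_atomic32_cmpset((dst), *(exp), (src))

Whether these end up as macros or inline functions in the real headers is a
separate detail.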