From: Bruce Richardson
To: olivier.matz@6wind.com
Cc: dev@dpdk.org, Bruce Richardson
Date: Wed, 29 Mar 2017 14:09:32 +0100
Message-Id: <20170329130941.31190-6-bruce.richardson@intel.com>
In-Reply-To: <20170329130941.31190-1-bruce.richardson@intel.com>
References: <20170328203606.27457-1-bruce.richardson@intel.com>
 <20170329130941.31190-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v5 05/14] ring: remove the yield when waiting for tail update

There was a compile-time setting to enable a ring to yield when it
entered a loop in mp or mc rings waiting for the tail pointer update.
Build-time settings are not recommended for enabling/disabling features,
and since this was off by default, remove it completely. If needed, a
runtime-enabled equivalent can be used.

Signed-off-by: Bruce Richardson
Reviewed-by: Yuanhan Liu
Acked-by: Olivier Matz
---
 config/common_base                              |  1 -
 doc/guides/prog_guide/env_abstraction_layer.rst |  5 ----
 doc/guides/rel_notes/release_17_05.rst          |  1 +
 lib/librte_ring/rte_ring.h                      | 35 +++++--------------------
 4 files changed, 7 insertions(+), 35 deletions(-)

diff --git a/config/common_base b/config/common_base
index 69e91ae..2d54ddf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -452,7 +452,6 @@ CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
-CONFIG_RTE_RING_PAUSE_REP_COUNT=0
 
 #
 # Compile librte_mempool
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 10a10a8..7c39cd2 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -352,11 +352,6 @@ Known Issues
 
   3. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
 
-  ``RTE_RING_PAUSE_REP_COUNT`` is defined for rte_ring to reduce contention. It's mainly for case 2, a yield is issued after number of times pause repeat.
-
-  It adds a sched_yield() syscall if the thread spins for too long while waiting on the other thread to finish its operations on the ring.
-  This gives the preempted thread a chance to proceed and finish with the ring enqueue/dequeue operation.
-
 + rte_timer
 
   Running ``rte_timer_manager()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
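
[Note: the paragraphs removed from env_abstraction_layer.rst above describe
the behaviour being deleted: after a configurable number of rte_pause()
repetitions, the spinning thread issued a sched_yield() so that a preempted
thread could finish its ring operation. The "runtime-enabled equivalent"
mentioned in the commit message is not part of this patch; the sketch below
only illustrates what one could look like. The names pause_rep_count and
wait_for_tail() are hypothetical, not DPDK API, and a real implementation
would use rte_pause() and the ring's own head/tail fields.

#include <sched.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical runtime knob: 0 (the old default) means never yield. */
static unsigned int pause_rep_count;

static inline void
wait_for_tail(_Atomic uint32_t *tail, uint32_t expected)
{
	unsigned int rep = 0;

	/* Spin until the preceding enqueue/dequeue publishes its tail. */
	while (atomic_load_explicit(tail, memory_order_acquire) != expected) {
		/* Yield every pause_rep_count iterations so a preempted
		 * thread that is holding the tail back gets a chance to
		 * run and complete its ring operation. */
		if (pause_rep_count != 0 && ++rep == pause_rep_count) {
			rep = 0;
			sched_yield();
		}
	}
}

Because the threshold is an ordinary variable rather than a #define, it
could be set at startup from a command-line option or an API call, with no
rebuild needed.]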
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 50123c2..25d8549 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -134,6 +134,7 @@ API Changes
 
   * removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``
   * removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``
+  * removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
 
 ABI Changes
 -----------
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 2777b41..f8ac7f5 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -114,11 +114,6 @@ enum rte_ring_queue_behavior {
 #define RTE_RING_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
 			   sizeof(RTE_RING_MZ_PREFIX) + 1)
 
-#ifndef RTE_RING_PAUSE_REP_COUNT
-#define RTE_RING_PAUSE_REP_COUNT 0 /**< Yield after pause num of times, no yield
-                                    *   if RTE_RING_PAUSE_REP not defined. */
-#endif
-
 struct rte_memzone; /* forward declaration, so as not to require memzone.h */
 
 #if RTE_CACHE_LINE_SIZE < 128
@@ -393,7 +388,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t cons_tail, free_entries;
 	const unsigned max = n;
 	int success;
-	unsigned i, rep = 0;
+	unsigned int i;
 	uint32_t mask = r->mask;
 	int ret;
 
@@ -447,18 +442,9 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	 * If there are other enqueues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->prod.tail != prod_head)) {
+	while (unlikely(r->prod.tail != prod_head))
 		rte_pause();
 
-		/* Set RTE_RING_PAUSE_REP_COUNT to avoid spin too long waiting
-		 * for other thread finish. It gives pre-empted thread a chance
-		 * to proceed and finish with ring dequeue operation. */
-		if (RTE_RING_PAUSE_REP_COUNT &&
-		    ++rep == RTE_RING_PAUSE_REP_COUNT) {
-			rep = 0;
-			sched_yield();
-		}
-	}
 	r->prod.tail = prod_next;
 	return ret;
 }
@@ -491,7 +477,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 {
 	uint32_t prod_head, cons_tail;
 	uint32_t prod_next, free_entries;
-	unsigned i;
+	unsigned int i;
 	uint32_t mask = r->mask;
 	int ret;
 
@@ -568,7 +554,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	uint32_t cons_next, entries;
 	const unsigned max = n;
 	int success;
-	unsigned i, rep = 0;
+	unsigned int i;
 	uint32_t mask = r->mask;
 
 	/* Avoid the unnecessary cmpset operation below, which is also
@@ -613,18 +599,9 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	 * If there are other dequeues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->cons.tail != cons_head)) {
+	while (unlikely(r->cons.tail != cons_head))
 		rte_pause();
 
-		/* Set RTE_RING_PAUSE_REP_COUNT to avoid spin too long waiting
-		 * for other thread finish. It gives pre-empted thread a chance
-		 * to proceed and finish with ring dequeue operation. */
-		if (RTE_RING_PAUSE_REP_COUNT &&
-		    ++rep == RTE_RING_PAUSE_REP_COUNT) {
-			rep = 0;
-			sched_yield();
-		}
-	}
 	r->cons.tail = cons_next;
 
 	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
@@ -659,7 +636,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
 {
 	uint32_t cons_head, prod_tail;
 	uint32_t cons_next, entries;
-	unsigned i;
+	unsigned int i;
 	uint32_t mask = r->mask;
 
 	cons_head = r->cons.head;
-- 
2.9.3
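
[Note: after this patch the internal tail-wait loop can no longer be made
to yield, so deployments that run ring users on preemptible threads fall
back on the caveats documented in env_abstraction_layer.rst. One possible
caller-side mitigation, shown below as a sketch only, is to bound retries
around the public API and yield between attempts. It addresses a different
spin (retrying a full ring at the call site, not the removed internal tail
wait), and it assumes rte_ring_mp_enqueue() returning 0 on success and
-ENOBUFS when the ring is full, as it does after this series;
enqueue_with_backoff() and YIELD_INTERVAL are illustrative names, not DPDK
API.

#include <errno.h>
#include <sched.h>
#include <rte_ring.h>

#define YIELD_INTERVAL 64 /* hypothetical tuning value */

/* Retry a multi-producer enqueue on a full ring, yielding periodically
 * so a preempted consumer sharing this core can drain the ring. */
static int
enqueue_with_backoff(struct rte_ring *r, void *obj)
{
	unsigned int tries = 0;

	while (rte_ring_mp_enqueue(r, obj) == -ENOBUFS) {
		if (++tries % YIELD_INTERVAL == 0)
			sched_yield();
	}
	return 0;
}]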