From mboxrd@z Thu Jan  1 00:00:00 1970
From: Cunming Liang <cunming.liang@intel.com>
To: dev@dpdk.org
Date: Thu, 29 Jan 2015 08:24:31 +0800
Message-Id: <1422491072-5114-16-git-send-email-cunming.liang@intel.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1422491072-5114-1-git-send-email-cunming.liang@intel.com>
References: <1422428365-5875-1-git-send-email-cunming.liang@intel.com>
 <1422491072-5114-1-git-send-email-cunming.liang@intel.com>
Subject: [dpdk-dev] [PATCH v3 15/16] ring: add sched_yield to avoid spin forever
List-Id: patches and discussions about DPDK

It adds a gentle sched_yield() after the thread has spun for a while.
This reduces the CPU cycles wasted on spinning when the thread that
must update the ring's tail has been preempted.
Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
 lib/librte_ring/rte_ring.h | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 39bacdd..c16da6e 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -126,6 +126,7 @@ struct rte_ring_debug_stats {

 #define RTE_RING_NAMESIZE 32 /**< The maximum length of a ring name. */
 #define RTE_RING_MZ_PREFIX "RG_"
+#define RTE_RING_PAUSE_REP 0x100 /**< yield after num of times pause. */

 /**
  * An RTE ring structure.
@@ -410,7 +411,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	uint32_t cons_tail, free_entries;
 	const unsigned max = n;
 	int success;
-	unsigned i;
+	unsigned i, rep;
 	uint32_t mask = r->prod.mask;
 	int ret;

@@ -468,8 +469,14 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	 * If there are other enqueues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->prod.tail != prod_head))
-		rte_pause();
+	do {
+		for (rep = RTE_RING_PAUSE_REP;
+		     rep != 0 && r->prod.tail != prod_head; rep--)
+			rte_pause();
+
+		if (rep == 0)
+			sched_yield();
+	} while (rep == 0);

 	r->prod.tail = prod_next;
 	return ret;
@@ -589,7 +596,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	uint32_t cons_next, entries;
 	const unsigned max = n;
 	int success;
-	unsigned i;
+	unsigned i, rep;
 	uint32_t mask = r->prod.mask;

 	/* move cons.head atomically */
@@ -634,8 +641,14 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
 	 * If there are other dequeues in progress that preceded us,
 	 * we need to wait for them to complete
 	 */
-	while (unlikely(r->cons.tail != cons_head))
-		rte_pause();
+	do {
+		for (rep = RTE_RING_PAUSE_REP;
+		     rep != 0 && r->cons.tail != cons_head; rep--)
+			rte_pause();
+
+		if (rep == 0)
+			sched_yield();
+	} while (rep == 0);

 	__RING_STAT_ADD(r, deq_success, n);
 	r->cons.tail = cons_next;
--
1.8.1.4