From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jasvinder Singh <jasvinder.singh@intel.com>
To: dev@dpdk.org
Cc: cristian.dumitrescu@intel.com
Date: Thu, 1 Sep 2016 11:11:04 +0100
Message-Id: <1472724664-1400-1-git-send-email-jasvinder.singh@intel.com>
X-Mailer: git-send-email 2.5.5
Subject: [dpdk-dev] [PATCH] examples/qos_sched: fix packet dequeue from ring

app_worker_thread() and app_mixed_thread() use rte_ring_sc_dequeue_bulk()
to read packets from the software ring. Bulk dequeue is all-or-nothing: it
succeeds only when the ring holds at least burst_conf.ring_burst packets,
so packets already queued sit in the ring until enough of them accumulate,
which adds latency. Replace rte_ring_sc_dequeue_bulk() with
rte_ring_sc_dequeue_burst(), which dequeues up to the requested number of
packets and returns the count actually retrieved, so packets are forwarded
as soon as they arrive.

Fixes: de3cfa2c9823 ("sched: initial import")

Suggested-by: Yang, Tao Y
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
---
 examples/qos_sched/app_thread.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index 3c678cc..70fdcdb 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -215,17 +215,16 @@ app_worker_thread(struct thread_conf **confs)
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
-		int retval;
 
 		/* Read packet from the ring */
-		retval = rte_ring_sc_dequeue_bulk(conf->rx_ring, (void **)mbufs,
+		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
 					burst_conf.ring_burst);
-		if (likely(retval == 0)) {
+		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
-					burst_conf.ring_burst);
+					nb_pkt);
 
-			APP_STATS_ADD(conf->stat.nb_drop, burst_conf.ring_burst - nb_sent);
-			APP_STATS_ADD(conf->stat.nb_rx, burst_conf.ring_burst);
+			APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent);
+			APP_STATS_ADD(conf->stat.nb_rx, nb_pkt);
 		}
 
 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
@@ -250,17 +249,16 @@ app_mixed_thread(struct thread_conf **confs)
 
 	while ((conf = confs[conf_idx])) {
 		uint32_t nb_pkt;
-		int retval;
 
 		/* Read packet from the ring */
-		retval = rte_ring_sc_dequeue_bulk(conf->rx_ring, (void **)mbufs,
+		nb_pkt = rte_ring_sc_dequeue_burst(conf->rx_ring, (void **)mbufs,
 					burst_conf.ring_burst);
-		if (likely(retval == 0)) {
+		if (likely(nb_pkt)) {
 			int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs,
-					burst_conf.ring_burst);
+					nb_pkt);
 
-			APP_STATS_ADD(conf->stat.nb_drop, burst_conf.ring_burst - nb_sent);
-			APP_STATS_ADD(conf->stat.nb_rx, burst_conf.ring_burst);
+			APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent);
+			APP_STATS_ADD(conf->stat.nb_rx, nb_pkt);
 		}
-- 
2.5.5
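
For readers comparing the two ring APIs, the contract difference that this
patch relies on can be shown in isolation. Below is a minimal sketch, not
part of the patch, written against the ring API of this DPDK generation
(before the 17.05 refactor added an 'available' out-parameter to both
calls); RING_BURST and drain_ring() are illustrative stand-ins for
burst_conf.ring_burst and the ring-read step of the worker loop.

#include <rte_ring.h>
#include <rte_mbuf.h>

#define RING_BURST 32	/* stand-in for burst_conf.ring_burst */

/* Dequeue up to RING_BURST packets; returns how many were taken. */
static unsigned
drain_ring(struct rte_ring *rx_ring, struct rte_mbuf *mbufs[RING_BURST])
{
	/*
	 * Bulk (the old code) is all-or-nothing: it returns 0 and fills
	 * mbufs[] only when the ring holds at least RING_BURST entries;
	 * otherwise it returns -ENOENT and dequeues nothing, so a lone
	 * packet waits until RING_BURST - 1 more arrive:
	 *
	 *	if (rte_ring_sc_dequeue_bulk(rx_ring, (void **)mbufs,
	 *			RING_BURST) == 0)
	 *		return RING_BURST;
	 *	return 0;
	 *
	 * Burst (the new code) is best-effort: it dequeues whatever is
	 * present, up to RING_BURST, and returns the count (possibly 0).
	 */
	return rte_ring_sc_dequeue_burst(rx_ring, (void **)mbufs, RING_BURST);
}

The returned count then has to flow through to rte_sched_port_enqueue() and
the APP_STATS_ADD() macros, as the patch does, since fewer than RING_BURST
packets may have been read on any given iteration.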