From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
 by dpdk.org (Postfix) with ESMTP id 8DE384C7B
 for <dev@dpdk.org>; Thu,  1 Mar 2018 14:53:22 +0100 (CET)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 01 Mar 2018 05:53:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.47,408,1515484800"; d="scan'208";a="30696229"
Received: from unknown (HELO saesrv02-S2600CWR.intel.com) ([10.224.122.203])
 by FMSMGA003.fm.intel.com with ESMTP; 01 Mar 2018 05:53:20 -0800
From: Vipin Varghese <vipin.varghese@intel.com>
To: dev@dpdk.org, harry.van.haaren@intel.com
Cc: Vipin Varghese <vipin.varghese@intel.com>
Date: Fri, 2 Mar 2018 01:04:59 +0530
Message-Id: <1519932900-10571-1-git-send-email-vipin.varghese@intel.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH 1/2] event/sw: refactor code to reduce fetch stalls
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Thu, 01 Mar 2018 13:53:22 -0000

Rearranging the code so that the next iteration's flow data is fetched
before the end-of-loop check increases performance for single- and
multi-stage atomic pipelines.

Signed-off-by: Vipin Varghese <vipin.varghese@intel.com>
---
 drivers/event/sw/sw_evdev_scheduler.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index e3a41e0..70d1970 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -44,12 +44,13 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 	uint32_t qid_id = qid->id;
 
 	iq_dequeue_burst(sw, &qid->iq[iq_num], qes, count);
-	for (i = 0; i < count; i++) {
-		const struct rte_event *qe = &qes[i];
-		const uint16_t flow_id = SW_HASH_FLOWID(qes[i].flow_id);
-		struct sw_fid_t *fid = &qid->fids[flow_id];
-		int cq = fid->cq;
+	const struct rte_event *qe = &qes[0];
+	uint16_t flow_id = SW_HASH_FLOWID(qes[0].flow_id);
+	struct sw_fid_t *fid = &qid->fids[flow_id];
+	int cq = fid->cq;
+
+	for (i = 0; i < count; i++) {
 
 		if (cq < 0) {
 			uint32_t cq_idx = qid->cq_next_tx++;
 			if (qid->cq_next_tx == qid->cq_num_mapped_cqs)
@@ -101,6 +102,13 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 				&sw->cq_ring_space[cq]);
 			p->cq_buf_count = 0;
 		}
+
+		if (likely(i + 1 < count)) {
+			qe = (qes + i + 1);
+			flow_id = SW_HASH_FLOWID(qes[i + 1].flow_id);
+			fid = &qid->fids[flow_id];
+			cq = fid->cq;
+		}
 	}
 
 	iq_put_back(sw, &qid->iq[iq_num], blocked_qes, nb_blocked);
-- 
2.7.4
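
A note on the pattern, for readers outside the thread: the diff hoists
the first event's flow lookup (flow ID hash, FID table entry, CQ index)
ahead of the loop, then issues the loads for iteration i + 1 at the
bottom of iteration i, before the loop's bound check. The dependent
loads can then overlap the back-edge branch and the start of the next
iteration instead of stalling at its top. Below is a minimal,
self-contained sketch of the same idea on a generic indirect-array
walk; the function and variable names are illustrative only and are
not part of the patch or of DPDK.

#include <stddef.h>
#include <stdint.h>

/* Sum values selected through an index table. The value load depends
 * on the index load, so a naive loop stalls on that dependency chain
 * at the top of every iteration. */
static uint64_t
sum_indirect(const uint32_t *idx, const uint64_t *vals, size_t count)
{
	uint64_t sum = 0;

	if (count == 0)
		return 0;

	/* Fetch iteration 0's data before entering the loop, as the
	 * patch does for qes[0]. */
	uint64_t v = vals[idx[0]];

	for (size_t i = 0; i < count; i++) {
		sum += v;

		/* Issue the next iteration's dependent load before the
		 * bound check, mirroring the likely(i + 1 < count)
		 * block in the patch. */
		if (i + 1 < count)
			v = vals[idx[i + 1]];
	}

	return sum;
}

The trade-off is one extra compare per iteration (well predicted, since
it fails only on the last pass) in exchange for taking the loads off
the critical path at the head of each iteration.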