From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mga06.intel.com (mga06.intel.com [134.134.136.31])
 by dpdk.org (Postfix) with ESMTP id BA7F929C7
 for ; Mon, 10 Apr 2017 17:56:41 +0200 (CEST)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by orsmga104.jf.intel.com with ESMTP; 10 Apr 2017 08:56:40 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.37,182,1488873600"; d="scan'208";a="87318738"
Received: from silpixa00398672.ir.intel.com ([10.237.223.128])
 by fmsmga006.fm.intel.com with ESMTP; 10 Apr 2017 08:56:39 -0700
From: Harry van Haaren
To: dev@dpdk.org
Cc: jerin.jacob@caviumnetworks.com, Harry van Haaren
Date: Mon, 10 Apr 2017 16:56:43 +0100
Message-Id: <1491839803-172566-1-git-send-email-harry.van.haaren@intel.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH] event/sw: fix hashing of flow on ordered ingress
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
X-List-Received-Date: Mon, 10 Apr 2017 15:56:42 -0000

The flow id of packets was not being hashed on ingress on an
ordered queue. Fix by applying the same hashing as is applied
in the atomic queue case. The hashing itself is broken out into
a macro to avoid code duplication.
Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")

Signed-off-by: Harry van Haaren
---
 drivers/event/sw/sw_evdev_scheduler.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index 77a16d7..e008b51 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -51,6 +51,8 @@

 #define MAX_PER_IQ_DEQUEUE 48
 #define FLOWID_MASK (SW_QID_NUM_FIDS-1)
+/* use cheap bit mixing, we only need to lose a few bits */
+#define SW_HASH_FLOWID(f) (((f) ^ (f >> 10)) & FLOWID_MASK)

 static inline uint32_t
 sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
@@ -72,9 +74,7 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 	iq_ring_dequeue_burst(qid->iq[iq_num], qes, count);
 	for (i = 0; i < count; i++) {
 		const struct rte_event *qe = &qes[i];
-		/* use cheap bit mixing, we only need to lose a few bits */
-		uint32_t flow_id32 = (qes[i].flow_id) ^ (qes[i].flow_id >> 10);
-		const uint16_t flow_id = FLOWID_MASK & flow_id32;
+		const uint16_t flow_id = SW_HASH_FLOWID(qes[i].flow_id);
 		struct sw_fid_t *fid = &qid->fids[flow_id];

 		int cq = fid->cq;
@@ -183,8 +183,7 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		qid->stats.tx_pkts++;

 		const int head = (p->hist_head & (SW_PORT_HIST_LIST-1));
-
-		p->hist_list[head].fid = qe->flow_id;
+		p->hist_list[head].fid = SW_HASH_FLOWID(qe->flow_id);
 		p->hist_list[head].qid = qid_id;

 		if (keep_order)
--
2.7.4