From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 1 Oct 2014 09:44:45 +0100
From: Bruce Richardson
To: Hiroshi Shimamoto
Message-ID: <20141001084445.GC1204@BRICHA3-MOBL>
References: <7F861DC0615E0C47A872E6F3C5FCDDBD02AE26C5@BPXM14GP.gisp.nec.co.jp>
 <20140930143242.GI2193@hmsreliant.think-freely.org>
 <7F861DC0615E0C47A872E6F3C5FCDDBD02AE2D37@BPXM14GP.gisp.nec.co.jp>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <7F861DC0615E0C47A872E6F3C5FCDDBD02AE2D37@BPXM14GP.gisp.nec.co.jp>
Organization: Intel Shannon Ltd.
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: "dev@dpdk.org", Hayato Momma
Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch hint in recv/xmit
List-Id: patches and discussions about DPDK

On Tue, Sep 30, 2014 at 11:52:00PM +0000, Hiroshi Shimamoto wrote:
> Hi,
>
> > Subject: Re: [dpdk-dev] [memnic PATCH v2 6/7] pmd: add branch hint in recv/xmit
> >
> > On Tue, Sep 30, 2014 at 11:14:40AM +0000, Hiroshi Shimamoto wrote:
> > > From: Hiroshi Shimamoto
> > >
> > > To reduce instruction cache misses, add branch condition hints to the
> > > recv/xmit functions. This improves performance a bit.
> > >
> > > We can see the performance improvement with memnic-tester.
> > > Using Xeon E5-2697 v2 @ 2.70GHz, 4 vCPU.
> > > size | before   | after
> > >   64 | 5.54Mpps | 5.55Mpps
> > >  128 | 5.46Mpps | 5.44Mpps
> > >  256 | 5.21Mpps | 5.22Mpps
> > >  512 | 4.50Mpps | 4.52Mpps
> > > 1024 | 3.71Mpps | 3.73Mpps
> > > 1280 | 3.21Mpps | 3.22Mpps
> > > 1518 | 2.92Mpps | 2.93Mpps
> > >
> > > Signed-off-by: Hiroshi Shimamoto
> > > Reviewed-by: Hayato Momma
> > > ---
> > >  pmd/pmd_memnic.c | 18 +++++++++---------
> > >  1 file changed, 9 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/pmd/pmd_memnic.c b/pmd/pmd_memnic.c
> > > index 7fc3093..875d3ea 100644
> > > --- a/pmd/pmd_memnic.c
> > > +++ b/pmd/pmd_memnic.c
> > > @@ -289,26 +289,26 @@ static uint16_t memnic_recv_pkts(void *rx_queue,
> > >  	int idx, next;
> > >  	struct rte_eth_stats *st = &adapter->stats[rte_lcore_id()];
> > >
> > > -	if (!adapter->nic->hdr.valid)
> > > +	if (unlikely(!adapter->nic->hdr.valid))
> > >  		return 0;
> > >
> > >  	pkts = bytes = errs = 0;
> > >  	idx = adapter->up_idx;
> > >  	for (nr = 0; nr < nb_pkts; nr++) {
> > >  		p = &data->packets[idx];
> > > -		if (p->status != MEMNIC_PKT_ST_FILLED)
> > > +		if (unlikely(p->status != MEMNIC_PKT_ST_FILLED))
> > >  			break;
> > >  		/* prefetch the next area */
> > >  		next = idx;
> > > -		if (++next >= MEMNIC_NR_PACKET)
> > > +		if (unlikely(++next >= MEMNIC_NR_PACKET))
> > >  			next = 0;
> > >  		rte_prefetch0(&data->packets[next]);
> > > -		if (p->len > framesz) {
> > > +		if (unlikely(p->len > framesz)) {
> > >  			errs++;
> > >  			goto drop;
> > >  		}
> > >  		mb = rte_pktmbuf_alloc(adapter->mp);
> > > -		if (!mb)
> > > +		if (unlikely(!mb))
> > >  			break;
> > >
> > >  		rte_memcpy(rte_pktmbuf_mtod(mb, void *), p->data, p->len);
> > > @@ -350,7 +350,7 @@ static uint16_t memnic_xmit_pkts(void *tx_queue,
> > >  	uint64_t pkts, bytes, errs;
> > >  	uint32_t framesz = adapter->framesz;
> > >
> > > -	if (!adapter->nic->hdr.valid)
> > > +	if (unlikely(!adapter->nic->hdr.valid))
> > >  		return 0;
> > >
> > >  	pkts = bytes = errs = 0;
> > > @@ -360,7 +360,7 @@ static uint16_t memnic_xmit_pkts(void *tx_queue,
> > >  		struct rte_mbuf *sg;
> > >  		void *ptr;
> > >
> > > -		if (pkt_len > framesz) {
> > > +		if (unlikely(pkt_len > framesz)) {
> > >  			errs++;
> > >  			break;
> > >  		}
> > > @@ -379,7 +379,7 @@ retry:
> > >  			goto retry;
> > >  		}
> > >
> > > -		if (idx != ACCESS_ONCE(adapter->down_idx)) {
> > > +		if (unlikely(idx != ACCESS_ONCE(adapter->down_idx))) {
> >
> > Why are you using ACCESS_ONCE here? Or for that matter, anywhere else in this
> > PMD? The whole idea of the ACCESS_ONCE macro is to assign a value to a variable
> > once and prevent it from getting reloaded from memory at a later time; this is
> > exactly contrary to that, both in the sense that you're explicitly reloading the
> > same variable multiple times, and in that you're using it as part of a comparison
> > operation rather than an assignment operation.
>
> ACCESS_ONCE prevents compiler optimization and ensures a load from memory.
> There can be multiple threads that read/write that index.
> We should compare the previously read value with the current value in memory.
> For that reason, I use the ACCESS_ONCE macro to get the value from memory.

Should you not just make the variable volatile? That's the normal way to
guarantee reads from memory and prevent the compiler caching things in
registers.

/Bruce

>
> thanks,
> Hiroshi
>
> >
> > Neil
>
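[Editor's note] A minimal sketch of the ACCESS_ONCE-vs-volatile distinction
discussed above, assuming the usual GCC/Linux-style definition of ACCESS_ONCE
(a cast through a volatile pointer). The struct, field, and function names
below are illustrative only, not the actual MEMNIC adapter layout:

    #include <stdint.h>

    /*
     * Typical GCC-style definition (as in the Linux kernel): cast through a
     * volatile pointer so every use compiles to a real load from memory.
     */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    /* Illustrative shared descriptor; another thread/process updates down_idx. */
    struct shared_idx {
    	uint32_t down_idx;
    };

    /*
     * Without ACCESS_ONCE (or a volatile qualifier on the field), the compiler
     * may load down_idx once, keep it in a register, and never observe the
     * other side's update.
     */
    uint32_t poll_index_access_once(struct shared_idx *s, uint32_t old)
    {
    	uint32_t cur;

    	do {
    		cur = ACCESS_ONCE(s->down_idx);	/* fresh load on every iteration */
    	} while (cur == old);

    	return cur;
    }

    /*
     * The alternative raised in the reply: qualify the field itself as
     * volatile, so every access is a memory access without a per-use cast.
     */
    struct shared_idx_v {
    	volatile uint32_t down_idx;
    };

    uint32_t poll_index_volatile(struct shared_idx_v *s, uint32_t old)
    {
    	uint32_t cur;

    	do {
    		cur = s->down_idx;	/* also a fresh load: the field is volatile */
    	} while (cur == old);

    	return cur;
    }

Neither form adds memory ordering or atomicity; both only stop the compiler
from caching the value in a register across re-reads.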