From: "Ananyev, Konstantin"
To: Vladyslav Buslov, "Wu, Jingjing", "Yigit, Ferruh", "Zhang, Helin"
Cc: dev@dpdk.org
Date: Tue, 11 Oct 2016 08:51:04 +0000
Subject: Re: [dpdk-dev] [PATCH] net/i40e: add additional prefetch instructions for bulk rx
Message-ID: <2601191342CEEE43887BDE71AB9772583F0C0408@irsmsx105.ger.corp.intel.com>

Hi Vladyslav,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vladyslav Buslov
> Sent: Monday, October 10, 2016 6:06 PM
> To: Wu, Jingjing; Yigit, Ferruh; Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/i40e: add additional prefetch instructions for bulk rx
>
> > -----Original Message-----
> > From: Wu, Jingjing [mailto:jingjing.wu@intel.com]
> > Sent: Monday, October 10, 2016 4:26 PM
> > To: Yigit, Ferruh; Vladyslav Buslov; Zhang, Helin
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH] net/i40e: add additional prefetch
> > instructions for bulk rx
> >
> > > -----Original Message-----
> > > From: Yigit, Ferruh
> > > Sent: Wednesday, September 14, 2016 9:25 PM
> > > To: Vladyslav Buslov; Zhang, Helin; Wu, Jingjing
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH] net/i40e: add additional prefetch
> > > instructions for bulk rx
> > >
> > > On 7/14/2016 6:27 PM, Vladyslav Buslov wrote:
> > > > Added prefetch of the first packet payload cache line in
> > > > i40e_rx_scan_hw_ring.
> > > > Added prefetch of the second mbuf cache line in i40e_rx_alloc_bufs.
> > > >
> > > > Signed-off-by: Vladyslav Buslov
> > > > ---
> > > >  drivers/net/i40e/i40e_rxtx.c | 7 +++++--
> > > >  1 file changed, 5 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> > > > index d3cfb98..e493fb4 100644
> > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > @@ -1003,6 +1003,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > >  		/* Translate descriptor info to mbuf parameters */
> > > >  		for (j = 0; j < nb_dd; j++) {
> > > >  			mb = rxep[j].mbuf;
> > > > +			rte_prefetch0(RTE_PTR_ADD(mb->buf_addr, RTE_PKTMBUF_HEADROOM));
> >
> > Why prefetch here? I think that if the application needs to touch the
> > packet data, it is more suitable to do the prefetch in the application.
> >
> > > >  			qword1 = rte_le_to_cpu_64(\
> > > >  					rxdp[j].wb.qword1.status_error_len);
> > > >  			pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
> > > > @@ -1086,9 +1087,11 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
> > > >
> > > >  	rxdp = &rxq->rx_ring[alloc_idx];
> > > >  	for (i = 0; i < rxq->rx_free_thresh; i++) {
> > > > -		if (likely(i < (rxq->rx_free_thresh - 1)))
> > > > +		if (likely(i < (rxq->rx_free_thresh - 1))) {
> > > >  			/* Prefetch next mbuf */
> > > > -			rte_prefetch0(rxep[i + 1].mbuf);
> > > > +			rte_prefetch0(&rxep[i + 1].mbuf->cacheline0);
> > > > +			rte_prefetch0(&rxep[i + 1].mbuf->cacheline1);

I think there are rte_mbuf_prefetch_part1/part2 helpers defined in
rte_mbuf.h specially for that case (a short sketch using them is appended
at the end of this mail).

> > > > +		}

> > Agree with this change. But when I tested it with testpmd io forwarding,
> > no performance increase was observed, only a minor decrease.
> > Can you share with us when it benefits performance in your scenario?
> >
> > Thanks
> > Jingjing
>
> Hello Jingjing,
>
> Thanks for the code review.
>
> My use case: we have a simple distributor thread that receives packets
> from a port and distributes them among worker threads according to a
> VLAN and MAC address hash.
>
> While working on performance optimization we determined that most of the
> CPU usage of this thread is inside DPDK.
> As an optimization we decided to switch to the rx burst alloc function;
> however, that caused additional performance degradation compared to
> scatter rx mode.
> In the profiler the two major culprits were:
>   1. Access to the packet's Ethernet header in application code (cache miss).
>   2. Setting the next-packet descriptor field to NULL in the DPDK
>      i40e_rx_alloc_bufs code (this field is in the second mbuf cache line,
>      which was not prefetched).

I wonder what would happen if we removed all prefetches here?
Would it make things better or worse (and by how much)?

> After applying my fixes, performance improved compared to scatter rx mode.
>
> I assumed that the prefetch of the first cache line of packet data belongs
> in DPDK because it is already done in scatter rx mode (in
> i40e_recv_scattered_pkts).
> It can be moved to the application side, but IMO it is better to be
> consistent across all rx modes.

I would agree with Jingjing here: the PMD should probably avoid prefetching
the packet's data (an application-side alternative is sketched below).

Konstantin
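
P.S. For reference, a minimal sketch of the i40e_rx_alloc_bufs() loop
rewritten to use the rte_mbuf_prefetch_part1/part2 helpers from rte_mbuf.h
mentioned above. This is an untested illustration of the suggestion, not a
replacement patch; the elided loop body and the surrounding context are
assumed to be the same as in the original code.

	rxdp = &rxq->rx_ring[alloc_idx];
	for (i = 0; i < rxq->rx_free_thresh; i++) {
		if (likely(i < (rxq->rx_free_thresh - 1))) {
			/* Prefetch both cache lines of the next mbuf via the
			 * rte_mbuf.h helpers instead of touching cacheline0/
			 * cacheline1 directly; part2 compiles to a no-op on
			 * targets where the mbuf fits in one cache line. */
			rte_mbuf_prefetch_part1(rxep[i + 1].mbuf);
			rte_mbuf_prefetch_part2(rxep[i + 1].mbuf);
		}
		...
	}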
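
And a hedged sketch of what the application-side variant of the payload
prefetch could look like in the distributor thread described above: the
loop prefetches packet i+1's data while it works on packet i, so the
Ethernet-header cache miss is hidden behind useful work. hash_mac_vlan()
and enqueue_to_worker() are hypothetical placeholders for the application's
own logic; the burst size and port/queue ids are illustrative.

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

#define BURST_SZ 32

/* Hypothetical application hooks (not part of DPDK). */
static uint16_t hash_mac_vlan(const struct ether_hdr *eth);
static void enqueue_to_worker(uint16_t wid, struct rte_mbuf *m);

static void
distribute_burst(uint8_t port, uint16_t queue)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t i, nb_rx;

	nb_rx = rte_eth_rx_burst(port, queue, pkts, BURST_SZ);
	for (i = 0; i < nb_rx; i++) {
		/* Prefetch the next packet's data while handling this one,
		 * instead of relying on the PMD to do it. */
		if (i + 1 < nb_rx)
			rte_prefetch0(rte_pktmbuf_mtod(pkts[i + 1], void *));

		struct ether_hdr *eth =
			rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);
		enqueue_to_worker(hash_mac_vlan(eth), pkts[i]);
	}
}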
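
As for measuring the effect (including the "remove all prefetches"
experiment), testpmd with io forwarding, as Jingjing used, is the usual
baseline; something along these lines, where the core mask, memory channel
count, and queue numbers are purely illustrative:

	./testpmd -c 0x6 -n 4 -- -i --rxq=1 --txq=1
	testpmd> set fwd io
	testpmd> start
	... run traffic from the generator ...
	testpmd> show port stats all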