From: "Richardson, Bruce"
To: Zoltan Kiss, "dev@dpdk.org"
Date: Mon, 7 Sep 2015 12:57:07 +0000
Message-ID: <59AF69C657FD0841A61C55336867B5B0359227DF@IRSMSX103.ger.corp.intel.com>
References: <1441135036-7491-1-git-send-email-zoltan.kiss@linaro.org> <55ED8252.1020900@linaro.org>
In-Reply-To: <55ED8252.1020900@linaro.org>
Subject: Re: [dpdk-dev] [PATCH] ixgbe: prefetch packet headers in vector PMD receive function

> -----Original Message-----
> From: Zoltan Kiss [mailto:zoltan.kiss@linaro.org]
> Sent: Monday, September 7, 2015 1:26 PM
> To: dev@dpdk.org
> Cc: Ananyev, Konstantin; Richardson, Bruce
> Subject: Re: [PATCH] ixgbe: prefetch packet headers in vector PMD receive
> function
>
> Hi,
>
> I just realized I've missed the "[PATCH]" tag from the subject. Did anyone
> have time to review this?
>

Hi Zoltan,

The big thing that concerns me with this is the addition of new instructions for each packet in the fast path. Ideally, this prefetching would be better handled in the application itself, as for some apps, e.g. those using pipelining, the core doing the RX from the NIC may not touch the packet data at all, and the prefetches will instead cause a performance slowdown.

Is it possible to get the same performance increase - or something close to it - by making changes in OVS?

Regards,
/Bruce

> Regards,
>
> Zoltan
>
> On 01/09/15 20:17, Zoltan Kiss wrote:
> > The lack of this prefetch causes a significant performance drop in
> > OVS-DPDK: 13.3 Mpps instead of 14 Mpps when forwarding 64-byte packets.
> > Even though OVS prefetches the next packet's header before it starts
> > processing the current one, it doesn't get there fast enough. This
> > aligns with the behaviour of other receive functions.
> >
> > Signed-off-by: Zoltan Kiss
> > ---
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > index cf25a53..51299fa 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> > @@ -502,6 +502,15 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> >  		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
> >  				pkt_mb1);
> >
> > +		rte_packet_prefetch((char *)(rx_pkts[pos]->buf_addr) +
> > +				RTE_PKTMBUF_HEADROOM);
> > +		rte_packet_prefetch((char *)(rx_pkts[pos + 1]->buf_addr) +
> > +				RTE_PKTMBUF_HEADROOM);
> > +		rte_packet_prefetch((char *)(rx_pkts[pos + 2]->buf_addr) +
> > +				RTE_PKTMBUF_HEADROOM);
> > +		rte_packet_prefetch((char *)(rx_pkts[pos + 3]->buf_addr) +
> > +				RTE_PKTMBUF_HEADROOM);
> > +
> >  		/* C.4 calc avaialbe number of desc */
> >  		var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
> >  		nb_pkts_recd += var;
> >