From: "Ananyev, Konstantin"
To: Rahul Lakkireddy
Cc: dev@dpdk.org, Felix Marti, Nirranjan Kirubaharan, Kumar A S
Subject: Re: [dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G
Date: Mon, 5 Oct 2015 14:09:27 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836AA37E2@irsmsx105.ger.corp.intel.com>
In-Reply-To: <20151005124205.GA24533@scalar.blr.asicdesigners.com>

Hi Rahul,

> -----Original Message-----
> From: Rahul Lakkireddy [mailto:rahul.lakkireddy@chelsio.com]
> Sent: Monday, October 05, 2015 1:42 PM
> To: Ananyev, Konstantin
> Cc: Aaron Conole; dev@dpdk.org; Felix Marti; Kumar A S; Nirranjan Kirubaharan
> Subject: Re: [dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G
>
> Hi Konstantin,
>
> On Monday, October 10/05/15, 2015 at 04:46:40 -0700, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Rahul Lakkireddy
> > > Sent: Monday, October 05, 2015 11:06 AM
> > > To: Aaron Conole
> > > Cc: dev@dpdk.org; Felix Marti; Kumar A S; Nirranjan Kirubaharan
> > > Subject: Re: [dpdk-dev] [PATCH 1/6] cxgbe: Optimize forwarding performance for 40G
> > >
> > > Hi Aaron,
> > >
> > > On Friday, October 10/02/15, 2015 at 14:48:28 -0700, Aaron Conole wrote:
> > > > Hi Rahul,
> > > >
> > > > Rahul Lakkireddy writes:
> > > >
> > > > > Update sge initialization with respect to free-list manager configuration
> > > > > and ingress arbiter. Also update refill logic to refill mbufs only after
> > > > > a certain threshold for rx. Optimize tx packet prefetch and free.
> > > > <>
> > > > > 	for (i = 0; i < sd->coalesce.idx; i++) {
> > > > > -		rte_pktmbuf_free(sd->coalesce.mbuf[i]);
> > > > > +		struct rte_mbuf *tmp = sd->coalesce.mbuf[i];
> > > > > +
> > > > > +		do {
> > > > > +			struct rte_mbuf *next = tmp->next;
> > > > > +
> > > > > +			rte_pktmbuf_free_seg(tmp);
> > > > > +			tmp = next;
> > > > > +		} while (tmp);
> > > > > 		sd->coalesce.mbuf[i] = NULL;
> > > > Pardon my ignorance here, but rte_pktmbuf_free does this work. I can't
> > > > actually see much difference between your rewrite of this block and
> > > > the implementation of rte_pktmbuf_free() (apart from moving your branch
> > > > to the end of the function). Did your microbenchmarking really show this
> > > > as an improvement?
> > > >
> > > > Thanks for your time,
> > > > Aaron
> > >
> > > rte_pktmbuf_free calls rte_mbuf_sanity_check which does a lot of
> > > checks.
> >
> > Only when RTE_LIBRTE_MBUF_DEBUG is enabled in your config.
> > By default it is switched off.
>
> Right. I clearly missed this.
> I am running with the default config only, btw.
>
> > > This additional check seems redundant for single-segment
> > > packets, since rte_pktmbuf_free_seg also performs rte_mbuf_sanity_check.
> > >
> > > Several PMDs already prefer to use rte_pktmbuf_free_seg directly over
> > > rte_pktmbuf_free as it is faster.
> >
> > Other PMDs use rte_pktmbuf_free_seg() as each TD has a segment associated
> > with it. So as soon as HW is done with the TD, SW frees the associated
> > segment. In your case I don't see any point in re-implementing
> > rte_pktmbuf_free() manually, and I don't think it would be any faster.
> >
> > Konstantin
>
> As I mentioned below, I am clearly seeing a difference of 1 Mpps. And 1
> Mpps is not a small difference IMHO.

Agree with you here - it is a significant difference.

> When running l3fwd with 8 queues, I also collected a perf report.
> When using rte_pktmbuf_free, I see that it eats up around 6% cpu, as
> below in the perf top report:
> --------------------
>  32.00%  l3fwd  [.] cxgbe_poll
>  22.25%  l3fwd  [.] t4_eth_xmit
>  20.30%  l3fwd  [.] main_loop
>   6.77%  l3fwd  [.] rte_pktmbuf_free
>   4.86%  l3fwd  [.] refill_fl_usembufs
>   2.00%  l3fwd  [.] write_sgl
> .....
> --------------------
>
> While, when using rte_pktmbuf_free_seg directly, I don't see the above
> problem. The perf top report now comes out as:
> -------------------
>  33.36%  l3fwd  [.] cxgbe_poll
>  32.69%  l3fwd  [.] t4_eth_xmit
>  19.05%  l3fwd  [.] main_loop
>   5.21%  l3fwd  [.] refill_fl_usembufs
>   2.40%  l3fwd  [.] write_sgl
> ....
> -------------------

I don't think those 6% disappeared anywhere. As far as I can see,
t4_eth_xmit() now increased by roughly the same amount (you still have the
same job to do).
To me it looks like in that case the compiler didn't really inline
rte_pktmbuf_free().
I wonder, can you add an 'always_inline' attribute to rte_pktmbuf_free()
and see whether it makes any difference? (A rough sketch of what I mean is
at the bottom of this mail.)

Konstantin

> I obviously missed the debug flag for rte_mbuf_sanity_check.
> However, there is a clear difference of 1 Mpps. I don't know if it is the
> change from the while construct used in rte_pktmbuf_free to the do..while
> construct that I used that is making the difference.
>
> > > The forwarding perf. improvement with only this particular block is
> > > around 1 Mpps for 64B packets when using l3fwd with 8 queues.
> > >
> > > Thanks,
> > > Rahul
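P.S. A rough sketch of what I mean (an illustration only, assuming the
default config where RTE_LIBRTE_MBUF_DEBUG is off, so the sanity checks
compile away; the helper name below is made up, and the real rte_mbuf.h
definition may differ in detail):

#include <rte_mbuf.h>

/*
 * With the debug checks compiled out, freeing a packet is just a walk
 * over the segment chain - essentially the same loop the patch
 * open-codes. Forcing the compiler to inline it should remove the call
 * overhead that shows up as a separate rte_pktmbuf_free entry in perf top.
 */
static inline void __attribute__((always_inline))
pktmbuf_free_inlined(struct rte_mbuf *m)
{
	struct rte_mbuf *m_next;

	while (m != NULL) {
		m_next = m->next;        /* save the link before freeing this segment */
		rte_pktmbuf_free_seg(m);
		m = m_next;
	}
}

For a single-segment mbuf this is one iteration, so it should cost about
the same as calling rte_pktmbuf_free_seg() directly.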