From: Flavio Leitner
To: "Mcnamara, John"
Cc: "dev@dpdk.org"
Date: Wed, 12 Aug 2015 22:51:29 -0300
Subject: Re: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
Message-ID: <20150813015129.GA7791@x240.home>
References: <1AFA2937E172CD4DB2FD9318443C060ED9BDC6@IRSMSX103.ger.corp.intel.com>

On Tue, Aug 11, 2015 at 01:10:10PM +0000, Mcnamara, John wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Weglicki, MichalX
> > Sent: Tuesday, August 11, 2015 11:40 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
> >
> > Hello,
> >
> > Currently I'm integrating OVS head with DPDK 2.1. Based on my tests,
> > performance in all scenarios (confirmed on Phy2Phy and Vhostuser) has
> > dropped by about 10%. Please find example results below:
>
> Also:
>
> > Michal:
> > It seems I can fix it on the OVS side by passing the old hardcoded
> > size (2048 + RTE_PKTMBUF_HEADROOM) as the argument instead of NULL.
>
> Hi,
>
> In commit 1d493a49490fa the behaviour of rte_pktmbuf_pool_init()
> changed:
>
>     commit 1d493a49490fa90e09689d49280cff0d51d0193e
>     Author: Olivier Matz
>     Date:   Wed Apr 22 11:57:18 2015 +0200
>
>         mbuf: fix data room size calculation in pool init
>
> Previously, passing opaque_arg == NULL initialized
> mbuf_data_room_size = 2048 + RTE_PKTMBUF_HEADROOM.
>
> Now it is set as follows:
>
> +    /* if no structure is provided, assume no mbuf private area */
> +    user_mbp_priv = opaque_arg;
> +    if (user_mbp_priv == NULL) {
> +        default_mbp_priv.mbuf_priv_size = 0;
> +        if (mp->elt_size > sizeof(struct rte_mbuf))
> +            roomsz = mp->elt_size - sizeof(struct rte_mbuf);
> +        else
> +            roomsz = 0;
> +        default_mbp_priv.mbuf_data_room_size = roomsz;
> +        user_mbp_priv = &default_mbp_priv;
> +    }
>
> A workaround for OVS would be to pass the new opaque_arg struct with
> the required defaults set. However, perhaps this should be fixed in
> DPDK.
>
> The updated doc in the same patch says:
>
> +DPDK 2.0 to DPDK 2.1
> +--------------------
> +
> +* The second argument of rte_pktmbuf_pool_init(mempool, opaque) is now a
> +  pointer to a struct rte_pktmbuf_pool_private instead of a uint16_t
> +  casted into a pointer. Backward compatibility is preserved when the
> +  argument was NULL which is the majority of use cases, but not if the
> +  opaque pointer was not NULL, as it is not technically feasible. In
> +  this case, the application has to be modified to properly fill a
> +  rte_pktmbuf_pool_private structure and pass it to
> +  rte_pktmbuf_pool_init().
> +
> I think the OVS issue shows that backward compatibility isn't preserved
> (in the strictest sense). The text is only referring to the fact that
> passing NULL is still valid and won't segfault; it doesn't mean that
> NULL still produces the old default data room size.
> Should this be fixed? Opinions?
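
For reference, the OVS-side fix Michal describes would look something
like the sketch below (untested; the pool name, mbuf count and cache
size are placeholders, only the priv values matter):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_pool_with_old_default(void)
    {
            /* Fill the new private struct with the pre-2.1 defaults
             * and pass it to rte_pktmbuf_pool_init() instead of NULL. */
            struct rte_pktmbuf_pool_private priv = {
                    .mbuf_data_room_size = 2048 + RTE_PKTMBUF_HEADROOM,
                    .mbuf_priv_size = 0,
            };

            return rte_mempool_create("mp_2048", 4096,
                    sizeof(struct rte_mbuf) + priv.mbuf_data_room_size,
                    32, sizeof(struct rte_pktmbuf_pool_private),
                    rte_pktmbuf_pool_init, &priv,   /* was NULL */
                    rte_pktmbuf_init, NULL,
                    SOCKET_ID_ANY, 0);
    }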
If we fix OVS, then older OVS versions will compile but will see a
performance issue with the new DPDK code. On the other hand, a fixed
OVS won't work with previous DPDK code. Both are bad user experiences.
The other option would be to fix DPDK to be truly backwards compatible
and restore the old default value for the NULL case.

fbl
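
PS: if we go the DPDK route, a minimal sketch would be to restore the
old default in the NULL branch quoted above (untested, and it would
undo the elt_size-based calculation from Olivier's patch):

        user_mbp_priv = opaque_arg;
        if (user_mbp_priv == NULL) {
                default_mbp_priv.mbuf_priv_size = 0;
                /* restore the pre-2.1 default instead of deriving
                 * the room size from mp->elt_size */
                default_mbp_priv.mbuf_data_room_size =
                        2048 + RTE_PKTMBUF_HEADROOM;
                user_mbp_priv = &default_mbp_priv;
        }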