* [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
@ 2015-08-11 10:40 Weglicki, MichalX
2015-08-11 13:10 ` Mcnamara, John
0 siblings, 1 reply; 3+ messages in thread
From: Weglicki, MichalX @ 2015-08-11 10:40 UTC (permalink / raw)
To: dev
Hello,
Currently I'm integrating OVS head with DPDK 2.1. Based on my tests, performance in all scenarios (confirmed on Phy2Phy and Vhostuser) has dropped by about 10%. Please find example results below:
PHY2PHY (Bidirectional)
DPDK 2.1:
Iteration 1: 14,164,987 pps
Iteration 2: 13,866,386 pps
Iteration 3: 14,093,153 pps
DPDK 2.0:
Iteration 1: 15,406,646 pps
Iteration 2: 15,410,182 pps
Iteration 3: 15,404,678 pps
VHOSTUser:
DPDK 2.1:
Iteration 1: 3,328,678 pps
Iteration 2: 3,367,130 pps
Iteration 3: 3,259,899 pps
DPDK 2.0:
Iteration 1: 3,786,969 pps
Iteration 2: 3,736,117 pps
Iteration 3: 3,557,150 pps
Based on my tests, the last commit where performance was unchanged was: 37f9a7270e800df5c603b2c76c73ed3bca3328d9
Bad commit: 1d493a49490fa90e09689d49280cff0d51d0193e
You can find the diff in the attachment.
The change is still compatible with OVS because OVS passes NULL as the second argument to rte_pktmbuf_pool_init().
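For reference, the OVS-side pool setup is roughly as follows (a sketch only; the function name, pool name, mbuf count and cache size are illustrative placeholders, not the actual OVS values):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    example_pktmbuf_pool(void)
    {
        /* No mbuf private area; room for the default 2048-byte data room
         * plus headroom in each element. */
        unsigned elt_size = sizeof(struct rte_mbuf)
                            + 2048 + RTE_PKTMBUF_HEADROOM;

        /* NULL is passed as the opaque argument to rte_pktmbuf_pool_init();
         * this is the call whose default behaviour changed in 2.1. */
        return rte_mempool_create("example_mp", 8192, elt_size, 32,
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  rte_pktmbuf_pool_init, NULL,
                                  rte_pktmbuf_init, NULL,
                                  rte_socket_id(), 0);
    }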
Has anyone else noticed such a performance drop? Could anyone explain why this implementation has changed? Any hints on how to get back to the previous performance would be much appreciated.
Br,
Michal.
* Re: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
2015-08-11 10:40 [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop Weglicki, MichalX
@ 2015-08-11 13:10 ` Mcnamara, John
2015-08-13 1:51 ` Flavio Leitner
0 siblings, 1 reply; 3+ messages in thread
From: Mcnamara, John @ 2015-08-11 13:10 UTC (permalink / raw)
To: dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Weglicki, MichalX
> Sent: Tuesday, August 11, 2015 11:40 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
>
> Hello,
>
> Currently I'm integrating OVS head with DPDK 2.1. Based on my tests,
> performance in all scenarios (confirmed on Phy2Phy and Vhostuser) has
> dropped by about 10%. Please find example results below:
Also:
> Michal:
> It seems I can fix it on the OVS side by passing the old hardcoded
> size (2048 + RTE_PKTMBUF_HEADROOM) as the argument instead of NULL.
Hi,
In commit 1d493a49490fa the behaviour of rte_pktmbuf_pool_init() changed:
commit 1d493a49490fa90e09689d49280cff0d51d0193e
Author: Olivier Matz <olivier.matz@6wind.com>
Date: Wed Apr 22 11:57:18 2015 +0200
mbuf: fix data room size calculation in pool init
Previously, passing opaque_arg == NULL initialized mbuf_data_room_size to 2048 + RTE_PKTMBUF_HEADROOM.
Now it is set as follows:
+    /* if no structure is provided, assume no mbuf private area */
+    user_mbp_priv = opaque_arg;
+    if (user_mbp_priv == NULL) {
+        default_mbp_priv.mbuf_priv_size = 0;
+        if (mp->elt_size > sizeof(struct rte_mbuf))
+            roomsz = mp->elt_size - sizeof(struct rte_mbuf);
+        else
+            roomsz = 0;
+        default_mbp_priv.mbuf_data_room_size = roomsz;
+        user_mbp_priv = &default_mbp_priv;
+    }
A workaround for OVS would be to pass the new opaque_arg struct with the required defaults set. However, perhaps this should be fixed in DPDK.
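Something along these lines would do it (a sketch only; the pool name, mbuf count and cache size are placeholders, the struct fields are the ones shown in the snippet above):

    /* Reproduce the old DPDK 2.0 defaults explicitly in the new
     * rte_pktmbuf_pool_private struct. */
    struct rte_pktmbuf_pool_private priv = {
        .mbuf_data_room_size = 2048 + RTE_PKTMBUF_HEADROOM,
        .mbuf_priv_size = 0,
    };

    struct rte_mempool *mp =
        rte_mempool_create("example_mp", 8192,
                           sizeof(struct rte_mbuf)
                               + 2048 + RTE_PKTMBUF_HEADROOM,
                           32, sizeof(struct rte_pktmbuf_pool_private),
                           rte_pktmbuf_pool_init, &priv,  /* struct, not NULL */
                           rte_pktmbuf_init, NULL,
                           rte_socket_id(), 0);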
The updated doc in the same patch says:
+DPDK 2.0 to DPDK 2.1
+--------------------
+
+* The second argument of rte_pktmbuf_pool_init(mempool, opaque) is now a
+  pointer to a struct rte_pktmbuf_pool_private instead of a uint16_t
+  casted into a pointer. Backward compatibility is preserved when the
+  argument was NULL which is the majority of use cases, but not if the
+  opaque pointer was not NULL, as it is not technically feasible. In
+  this case, the application has to be modified to properly fill a
+  rte_pktmbuf_pool_private structure and pass it to
+  rte_pktmbuf_pool_init().
+
I think the OVS issue shows that backward compatibility isn't preserved (in the strictest sense).
Should this be fixed? Opinions?
John.
--
* Re: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
2015-08-11 13:10 ` Mcnamara, John
@ 2015-08-13 1:51 ` Flavio Leitner
0 siblings, 0 replies; 3+ messages in thread
From: Flavio Leitner @ 2015-08-13 1:51 UTC (permalink / raw)
To: Mcnamara, John; +Cc: dev
On Tue, Aug 11, 2015 at 01:10:10PM +0000, Mcnamara, John wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Weglicki, MichalX
> > Sent: Tuesday, August 11, 2015 11:40 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
> >
> > Hello,
> >
> > Currently I'm integrating OVS head with DPDK 2.1. Based on my tests,
> > performance in all scenarios (confirmed on Phy2Phy and Vhostuser) has
> > dropped by about 10%. Please find example results below:
>
> Also:
>
> > Michal:
> > It seems I can fix it on the OVS side by passing the old hardcoded
> > size (2048 + RTE_PKTMBUF_HEADROOM) as the argument instead of NULL.
>
> Hi,
>
> In commit 1d493a49490fa the behaviour of rte_pktmbuf_pool_init() changed:
>
> commit 1d493a49490fa90e09689d49280cff0d51d0193e
> Author: Olivier Matz <olivier.matz@6wind.com>
> Date: Wed Apr 22 11:57:18 2015 +0200
>
> mbuf: fix data room size calculation in pool init
>
> Previously, passing opaque_arg == NULL initialized mbuf_data_room_size to 2048 + RTE_PKTMBUF_HEADROOM.
>
> Now it is set as follows:
>
> +    /* if no structure is provided, assume no mbuf private area */
> +    user_mbp_priv = opaque_arg;
> +    if (user_mbp_priv == NULL) {
> +        default_mbp_priv.mbuf_priv_size = 0;
> +        if (mp->elt_size > sizeof(struct rte_mbuf))
> +            roomsz = mp->elt_size - sizeof(struct rte_mbuf);
> +        else
> +            roomsz = 0;
> +        default_mbp_priv.mbuf_data_room_size = roomsz;
> +        user_mbp_priv = &default_mbp_priv;
> +    }
>
> A workaround for OVS would be to pass the new opaque_arg struct with the required defaults set. However, perhaps this should be fixed in DPDK.
>
> The updated doc in the same patch says:
>
> +DPDK 2.0 to DPDK 2.1
> +--------------------
> +
> +* The second argument of rte_pktmbuf_pool_init(mempool, opaque) is now a
> +  pointer to a struct rte_pktmbuf_pool_private instead of a uint16_t
> +  casted into a pointer. Backward compatibility is preserved when the
> +  argument was NULL which is the majority of use cases, but not if the
> +  opaque pointer was not NULL, as it is not technically feasible. In
> +  this case, the application has to be modified to properly fill a
> +  rte_pktmbuf_pool_private structure and pass it to
> +  rte_pktmbuf_pool_init().
> +
>
> I think the OVS issue shows that backward compatibility isn't preserved (in the strictest sense).
The text is referring to the fact that passing NULL is still valid and won't
segfault, because NULL is not dereferenced as an rte_pktmbuf_pool_private;
it doesn't promise the old default data room size.
> Should this be fixed? Opinions?
If we fix OVS, then older OVS versions will still compile but will see a
performance issue with the new DPDK code. On the other hand, a fixed OVS
won't work with previous DPDK code. Both are bad user experiences. The other
option would be to fix DPDK to be truly backwards compatible and use the old
default value for the NULL case, as before.
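Roughly, the NULL branch quoted above would then go back to something like this (just a sketch of the idea against the quoted snippet, not a tested patch):

    /* If no structure is provided, fall back to the old 2.0 default
     * instead of deriving the data room size from the element size. */
    user_mbp_priv = opaque_arg;
    if (user_mbp_priv == NULL) {
        default_mbp_priv.mbuf_priv_size = 0;
        default_mbp_priv.mbuf_data_room_size =
            2048 + RTE_PKTMBUF_HEADROOM;
        user_mbp_priv = &default_mbp_priv;
    }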
fbl