DPDK patches and discussions
From: "Pierre Pfister (ppfister)" <ppfister@cisco.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: Yuanhan Liu <yuanhan.liu@linux.intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] virtio: tx with can_push when VERSION_1 is set
Date: Tue, 22 Nov 2016 09:54:58 +0000	[thread overview]
Message-ID: <A0EE9DE0-C82E-44CF-BB85-245488B78034@cisco.com> (raw)
In-Reply-To: <5c99bc35-839b-84b8-1df9-db80f288d7b7@redhat.com>

Hello Maxime,

> On 9 Nov 2016, at 15:51, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
> 
> Hi Pierre,
> 
> On 11/09/2016 01:42 PM, Pierre Pfister (ppfister) wrote:
>> Hello Maxime,
>> 
>> Sorry for the late reply.
>> 
>> 
>>> On 8 Nov 2016, at 10:44, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>>> 
>>> Hi Pierre,
>>> 
>>> On 11/08/2016 10:31 AM, Pierre Pfister (ppfister) wrote:
>>>> The current virtio driver advertises VERSION_1 support,
>>>> but does not handle the device's VERSION_1 support when
>>>> sending packets (it looks for the ANY_LAYOUT feature,
>>>> which is absent).
>>>> 
>>>> This patch enables 'can_push' in tx path when VERSION_1
>>>> is advertised by the device.
>>>> 
>>>> This significantly improves small packets forwarding rate
>>>> towards devices advertising VERSION_1 feature.
>>> I think it depends whether offloading is enabled or not.
>>> If no offloading enabled, I measured significant drop.
>>> Indeed, when no offloading is enabled, the Tx path in Virtio
>>> does not access the virtio header before your patch, as the header is memset to zero at device init time.
>>> With your patch, it gets memset to zero at every transmit in the hot
>>> path.
>> 
>> Right. On the virtio side that is true, but on the device side, we have to access the header anyway.
> No more now, if no offload features have been negotiated.
> I have done a patch that landed in v16.11 to skip header parsing in
> this case.
> That said, we still have to access its descriptor.
> 
>> And accessing two descriptors (with the address resolution and memory fetch which come with it)
>> is a costly operation compared to accessing a single one.
>> In the case indirect descriptors are used, this is 1 descriptor access instead of 3.
> I agree this is far from being optimal.
> 
>> And compared to the case where chained descriptors are used, this doubles the number of packets that you can put in your queue.
>> 
>> Those are the results in my PHY -> VM (testpmd) -> PHY setup
>> Traffic is flowing bidirectionally. Numbers are for lossless-rates.
>> 
>> When chained buffers are used for dpdk's TX: 2x2.13Mpps
>> When indirect descriptors are used for dpdk's TX: 2x2.38Mpps
>> When shallow buffers are used for dpdk's TX (with this patch): 2x2.42Mpps
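The ring-capacity arithmetic behind these numbers can be sketched as follows. This is an illustration, not DPDK code; `descs_per_pkt` is a hypothetical helper counting Tx ring slots per packet:

```c
/* Hypothetical helper (not DPDK code): Tx ring slots consumed per packet
 * under the three layouts discussed above, for nb_segs mbuf segments. */
static unsigned descs_per_pkt(int can_push, int use_indirect, unsigned nb_segs)
{
    if (use_indirect)
        return 1;        /* one slot points at an indirect descriptor table */
    if (can_push)
        return nb_segs;  /* virtio-net header pushed into the mbuf headroom */
    return 1 + nb_segs;  /* extra descriptor chained in front for the header */
}
```

For single-segment packets on a 256-entry Tx ring this gives 128 packets in-flight in the chained case and 256 with can_push, which is the "doubles the number of packets" point above.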
> When I tried it, I also ran a PVP 0% loss benchmark, and I got the opposite results. The chained and indirect cases were significantly better.
> 
> My PVP setup was using a single NIC and single Virtio PMD, and NIC2VM
> forwarding was IO mode done with testpmd on host, and Rx->Tx forwarding
> was macswap mode on guest side.
> 
> I also saw some perf regression when running a simple testpmd test on both
> ends.
> 
> Yuanhan, did you run some benchmark with your series enabling
> ANY_LAYOUT?

It was enabled. But the spec specifies that VERSION_1 implies ANY_LAYOUT.
Therefore, QEMU removes ANY_LAYOUT when VERSION_1 is set.

We can keep arguing about which is fastest. I guess we have different setups and different results, so we are probably deadlocked here.
But in any case, the current code is inconsistent, as it uses a single descriptor when ANY_LAYOUT is set, but not when VERSION_1 is set.

I believe it makes sense to use a single descriptor whenever possible, but you are free to think otherwise.
Please make a call and make the code consistent (remove single-descriptor Tx altogether, or use it when VERSION_1 is set too). Otherwise it just creates yet another testing headache.
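Concretely, the consistency fix amounts to extending the feature test that gates single-descriptor Tx. The sketch below is not the PMD's actual code: `with_feature` is a hypothetical stand-in for the driver's negotiated-feature check, while the bit positions are the ones defined by the virtio 1.0 spec:

```c
#include <stdint.h>

/* Feature bit positions from the virtio 1.0 specification. */
#define VIRTIO_F_ANY_LAYOUT  27
#define VIRTIO_F_VERSION_1   32

/* Hypothetical stand-in for the PMD's negotiated-feature test. */
static int with_feature(uint64_t features, unsigned bit)
{
    return (features & (1ULL << bit)) != 0;
}

/* Before the patch: single-descriptor Tx only if ANY_LAYOUT was
 * negotiated, a bit QEMU drops once VERSION_1 is set. */
static int can_push_before(uint64_t features)
{
    return with_feature(features, VIRTIO_F_ANY_LAYOUT);
}

/* After the patch: VERSION_1 implies ANY_LAYOUT per the spec, so
 * either bit enables the single-descriptor layout. */
static int can_push_after(uint64_t features)
{
    return with_feature(features, VIRTIO_F_ANY_LAYOUT) ||
           with_feature(features, VIRTIO_F_VERSION_1);
}
```

Against a device that negotiates VERSION_1 but not ANY_LAYOUT, `can_push_before` returns 0 and the driver falls back to the two-descriptor layout, which is exactly the inconsistency described above.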

Thanks,

- Pierre 

> 
>> 
>> I must also note that qemu 2.5 does not seem to handle VERSION_1 and ANY_LAYOUT correctly.
>> The patch I am proposing here works with qemu 2.7, but with qemu 2.5, testpmd still behaves as if ANY_LAYOUT (or VERSION_1) was not available. This is not catastrophic, but note that you will not see the performance gain in some cases with qemu 2.5.
> 
> Thanks for the info.
> 
> Regards,
> Maxime



Thread overview: 11+ messages
2016-11-08  9:31 Pierre Pfister (ppfister)
2016-11-08  9:44 ` Maxime Coquelin
2016-11-09 12:42   ` Pierre Pfister (ppfister)
2016-11-09 14:42     ` Maxime Coquelin
2016-11-09 14:51     ` Maxime Coquelin
2016-11-22  9:54       ` Pierre Pfister (ppfister) [this message]
2016-11-22 13:17         ` Maxime Coquelin
2016-11-30  9:09           ` Pierre Pfister (ppfister)
2016-11-30  9:18             ` [dpdk-dev] [PATCH v2] " Pierre Pfister (ppfister)
2016-11-30 10:19               ` Yuanhan Liu
2016-12-07  9:42               ` Yuanhan Liu
