DPDK patches and discussions
From: "Xie, Huawei" <huawei.xie@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	Tetsuya Mukawa <mukawa@igel.co.jp>,
	"Traynor, Kevin" <kevin.traynor@intel.com>,
	"Tan, Jianfeng" <jianfeng.tan@intel.com>,
	Yuanhan Liu <yuanhan.liu@linux.intel.com>
Subject: Re: [dpdk-dev] [RFC PATCH] avail idx update optimizations
Date: Thu, 28 Apr 2016 09:17:46 +0000	[thread overview]
Message-ID: <C37D651A908B024F974696C65296B57B4C72E975@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <C37D651A908B024F974696C65296B57B4C71B952@SHSMSX101.ccr.corp.intel.com>

On 4/24/2016 9:24 PM, Xie, Huawei wrote:
> On 4/24/2016 5:15 PM, Michael S. Tsirkin wrote:
>> On Sun, Apr 24, 2016 at 02:45:22AM +0000, Xie, Huawei wrote:
>>> Forget to cc the mailing list.
>>>
>>> On 4/22/2016 9:53 PM, Xie, Huawei wrote:
>>>> Hi:
>>>>
>>>> This is a series of virtio/vhost idx/ring update optimizations for
>>>> cache-to-cache transfer. Actually I don't expect many of them, as
>>>> virtio/vhost already does this quite well.
>> Hmm - is it a series or a single patch?
> Another idea in my mind is caching a copy of the avail index in vhost:
> use the cached copy while there are enough slots, and only then sync
> with the index in the ring.
> I haven't evaluated that idea yet.

I tried the cached avail idx in vhost, which gives slightly better
performance, but I will hold this patch: I suspect we can't blindly set
the cached avail idx to 0, which might cause issues with live migration.
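
The cached-avail-index idea discussed above can be sketched as a small standalone model (the struct layout and field names here are illustrative, not the real vhost structures; the key point is that the shared index is only re-read when the local snapshot runs out, and the snapshot is initialized from the ring rather than zeroed):

```c
#include <assert.h>
#include <stdint.h>

/* Shared ring state as the guest publishes it (simplified). */
struct avail_shared {
	volatile uint16_t idx;	/* written by guest, read by vhost */
};

/* Vhost-side view with a locally cached copy of avail->idx. */
struct vhost_vq {
	struct avail_shared *avail;
	uint16_t last_used;	/* next entry vhost will consume */
	uint16_t cached_avail;	/* local snapshot of avail->idx */
};

/*
 * Number of entries vhost may consume.  The shared cache line holding
 * avail->idx is touched only when the local snapshot is exhausted, so
 * most calls cause no cache-to-cache transfer.
 */
static uint16_t free_entries(struct vhost_vq *vq)
{
	uint16_t n = (uint16_t)(vq->cached_avail - vq->last_used);

	if (n == 0) {				/* snapshot exhausted */
		vq->cached_avail = vq->avail->idx;	/* sync with ring */
		n = (uint16_t)(vq->cached_avail - vq->last_used);
	}
	return n;
}
```

Note that on any restart or migration the snapshot must be reloaded from `avail->idx`, not reset to 0, which is exactly the live-migration concern mentioned above.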

>
>>>> For this patch, in a VM2VM test, I observed a ~6% performance increase.
>> Interesting. In that case, it seems likely that new ring layout
>> would give you an even bigger performance gain.
>> Could you take a look at tools/virtio/ringtest/ring.c
>> in latest Linux and tell me what you think?
>> In particular, I know you looked at using vectored instructions
>> to do ring updates - would the layout in tools/virtio/ringtest/ring.c
>> interfere with that?
> Thanks, I will check. As you know, I previously tried a fixed avail
> ring in the virtio driver. In a pure vhost->virtio test it could give a
> 2~3x performance boost, but the gain isn't as obvious once a physical
> NIC is involved; I'm investigating that issue.
> I had planned to sync with you on whether it is generic enough that we
> could have a negotiated feature, either for in-order descriptor
> processing or for something like a fixed avail ring. I will check your
> new ring layout.
>
>
>>>> In VM1, run testpmd with txonly mode
>>>> In VM2, run testpmd with rxonly mode
>>>> In host, run testpmd(with two vhostpmds) with io forward
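
For reference, the three-node setup above corresponds roughly to testpmd invocations like the following. This is only a sketch: core lists, socket paths, and especially the vhost vdev name (`net_vhost` vs. the older `eth_vhost`) vary by DPDK version and are not taken from the original test.

```shell
# In VM1: generate traffic
testpmd -l 0-1 -- -i --forward-mode=txonly

# In VM2: sink traffic
testpmd -l 0-1 -- -i --forward-mode=rxonly

# On the host: two vhost-user ports, plain io forwarding between them
testpmd -l 0-2 --no-pci \
    --vdev 'net_vhost0,iface=/tmp/sock0' \
    --vdev 'net_vhost1,iface=/tmp/sock1' \
    -- -i --forward-mode=io
```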
>>>>
>>>> Michael:
>>>> We talked about this method when I tried the fixed ring.
>>>>
>>>>
>>>> On 4/22/2016 5:12 PM, Xie, Huawei wrote:
>>>>> eliminate unnecessary cache to cache transfer between virtio and vhost
>>>>> core
>> Yes, I remember proposing this, but you probably should include the
>> explanation of why this works in the commit log:
>>
>> - pre-format avail ring with expected descriptor index values
>> - as long as entries are consumed in-order, there's no
>>   need to modify the avail ring
>> - as long as avail ring is not modified, it can be
>>   valid in caches of both consumer and producer
> Yes, would add the explanation in the formal patch.
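
The three points above can be modeled in a few lines of C (a standalone sketch, not the actual DPDK virtqueue structures; `QSZ`, `avail_ring`, and `enqueue` are illustrative names):

```c
#include <assert.h>
#include <stdint.h>

#define QSZ 256			/* ring size, power of two */

static uint16_t avail_ring[QSZ];
static uint16_t avail_idx;

/* Pre-format the avail ring: entry i already holds descriptor index i. */
static void ring_init(void)
{
	uint16_t i;

	for (i = 0; i < QSZ; i++)
		avail_ring[i] = i;
	avail_idx = 0;
}

/*
 * Publish a descriptor.  When descriptors are consumed in order, the
 * slot already contains desc_idx, the store is skipped, and the ring's
 * cache line stays valid in both producer and consumer caches.
 * Returns 1 if the ring was actually written, 0 otherwise.
 */
static int enqueue(uint16_t desc_idx)
{
	uint16_t slot = avail_idx & (QSZ - 1);
	int wrote = 0;

	if (avail_ring[slot] != desc_idx) {	/* out-of-order only */
		avail_ring[slot] = desc_idx;
		wrote = 1;
	}
	avail_idx++;
	return wrote;
}
```

As long as entries are consumed in order, `enqueue` never stores to the ring, so only the avail index itself transfers between cores; this is the effect the one-line patch below achieves inside `vq_update_avail_ring`.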
>
>
>>>>> ---
>>>>>  drivers/net/virtio/virtqueue.h | 3 ++-
>>>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>>>> index 4e9239e..8c46a83 100644
>>>>> --- a/drivers/net/virtio/virtqueue.h
>>>>> +++ b/drivers/net/virtio/virtqueue.h
>>>>> @@ -302,7 +302,8 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
>>>>>  	 * descriptor.
>>>>>  	 */
>>>>>  	avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
>>>>> -	vq->vq_ring.avail->ring[avail_idx] = desc_idx;
>>>>> +	if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
>>>>> +		vq->vq_ring.avail->ring[avail_idx] = desc_idx;
>>>>>  	vq->vq_avail_idx++;
>>>>>  }
>>>>>  
>



Thread overview: 9+ messages
2016-04-21 17:18 Huawei Xie
     [not found] ` <C37D651A908B024F974696C65296B57B4C713327@SHSMSX101.ccr.corp.intel.com>
2016-04-24  2:45   ` Xie, Huawei
2016-04-24  9:15     ` Michael S. Tsirkin
2016-04-24 13:23       ` Xie, Huawei
2016-04-28  9:17         ` Xie, Huawei [this message]
2016-04-27  8:53 ` [dpdk-dev] [PATCH] virtio: avoid avail ring entry index update if equal Huawei Xie
2016-04-28  6:19   ` Yuanhan Liu
2016-04-28  8:14     ` Thomas Monjalon
2016-04-28 13:15       ` Xie, Huawei
