DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Tiwei Bie <tiwei.bie@intel.com>
Cc: dev@dpdk.org, i.maximets@samsung.com, zhihong.wang@intel.com,
	jfreiman@redhat.com, mst@redhat.com
Subject: Re: [dpdk-dev] [PATCH v2] vhost: batch used descs chains write-back with packed ring
Date: Thu, 20 Dec 2018 10:27:46 +0100	[thread overview]
Message-ID: <543cf4ff-4712-da50-8a26-d51d6dfaa8d7@redhat.com> (raw)
In-Reply-To: <c3a2f456-1e53-21ec-e55c-0a2143620607@redhat.com>



On 12/20/18 9:49 AM, Maxime Coquelin wrote:
> 
> 
> On 12/20/18 5:44 AM, Tiwei Bie wrote:
>> On Wed, Dec 19, 2018 at 10:29:52AM +0100, Maxime Coquelin wrote:
>>> Instead of writing back descriptor chains in order, let's
>>> write the first chain's flags last in order to improve batching.
>>>
>>> With the kernel's pktgen benchmark, a ~3% performance gain is measured.
>>>
>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>> ---
>>>
>>> V2:
>>> Revert to the initial implementation so there is a write
>>> barrier before every descriptor flags store, but still
>>> store the first descriptor's flags last. (Missing barrier
>>> reported by Ilya)
>>>
>>>
>>>   lib/librte_vhost/virtio_net.c | 19 ++++++++++++++++---
>>>   1 file changed, 16 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>>> index 8c657a101..de436af79 100644
>>> --- a/lib/librte_vhost/virtio_net.c
>>> +++ b/lib/librte_vhost/virtio_net.c
>>> @@ -97,6 +97,8 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>>   {
>>>       int i;
>>>       uint16_t used_idx = vq->last_used_idx;
>>> +    uint16_t head_idx = vq->last_used_idx;
>>> +    uint16_t head_flags = 0;
>>>       /* Split loop in two to save memory barriers */
>>>       for (i = 0; i < vq->shadow_used_idx; i++) {
>>> @@ -126,12 +128,17 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>>               flags &= ~VRING_DESC_F_AVAIL;
>>>           }
>>> -        vq->desc_packed[vq->last_used_idx].flags = flags;
>>> +        if (i > 0) {
>>> +            vq->desc_packed[vq->last_used_idx].flags = flags;
>>> -        vhost_log_cache_used_vring(dev, vq,
>>> +            vhost_log_cache_used_vring(dev, vq,
>>>                       vq->last_used_idx *
>>>                       sizeof(struct vring_packed_desc),
>>>                       sizeof(struct vring_packed_desc));
>>> +        } else {
>>> +            head_idx = vq->last_used_idx;
>>> +            head_flags = flags;
>>> +        }
>>>           vq->last_used_idx += vq->shadow_used_packed[i].count;
>>>           if (vq->last_used_idx >= vq->size) {
>>> @@ -140,7 +147,13 @@ flush_shadow_used_ring_packed(struct virtio_net *dev,
>>>           }
>>>       }
>>> -    rte_smp_wmb();
>>> +    vq->desc_packed[head_idx].flags = head_flags;
>>> +
>>> +    vhost_log_cache_used_vring(dev, vq,
>>> +                vq->last_used_idx *
>>
>> Should be head_idx.
> 
> Oh yes, thanks for spotting this.
> 
>>
>>> +                sizeof(struct vring_packed_desc),
>>> +                sizeof(struct vring_packed_desc));
>>> +
>>>       vq->shadow_used_idx = 0;
>>
>> A wmb() is needed before log_cache_sync?
> 
> I think you're right; I was wrong, I thought we already had a barrier
> in the cache sync function.
> That's not very important for x86, but I think it would be preferable
> to do it in vhost_log_cache_sync(), if logging is enabled.
> 
> What do you think?

I'll keep the barrier in this function for now: I don't think we can
remove the one in the split variant, so moving it into
vhost_log_cache_sync() would mean having two barriers in that case.
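
For illustration, the tail of flush_shadow_used_ring_packed() with the
head_idx fix applied and the barrier kept in this function would look
roughly like this (a sketch only, not the exact v3 code):

    /* The loop above has already written the flags of every chain
     * except the first one.  Writing the first chain's flags last
     * keeps all the chains hidden from the driver until this point,
     * so the whole batch becomes visible at once.
     */
    vq->desc_packed[head_idx].flags = head_flags;

    /* Log the head descriptor at head_idx, not vq->last_used_idx. */
    vhost_log_cache_used_vring(dev, vq,
                head_idx *
                sizeof(struct vring_packed_desc),
                sizeof(struct vring_packed_desc));

    /* Write barrier kept here rather than in vhost_log_cache_sync(),
     * so the split variant does not end up with two barriers.
     */
    rte_smp_wmb();

    vq->shadow_used_idx = 0;
    vhost_log_cache_sync(dev, vq);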

>>>       vhost_log_cache_sync(dev, vq);
>>>   }
>>> -- 
>>> 2.17.2
>>>

