DPDK patches and discussions
From: Kevin Traynor <ktraynor@redhat.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: dev@dpdk.org, "Liu, Yong" <yong.liu@intel.com>,
	Maxime Coquelin <mcoqueli@redhat.com>,
	Adrian Moreno Zapata <amorenoz@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: flush shadow tx if there is no more packets
Date: Tue, 4 Feb 2020 13:48:54 +0000
Message-ID: <edaf7ef4-621e-69ed-57d7-e9838990a238@redhat.com>
In-Reply-To: <CAJaqyWcGMOg3kg9A55izQ5Ey202ucxKEbwMW43+nVmsQh2_Xcg@mail.gmail.com>

On 04/02/2020 09:23, Eugenio Perez Martin wrote:
> Hi Kevin!
> 
> Sorry, thanks for noticing it! It fixes commit 31d6c6a5b ("vhost: optimize
> packed ring dequeue"), which was not present in the 18.11 release (I've
> checked that v19.08 does not contain the failure).
> 

Right, in that case the issue is present on the 19.11 stable branch, so
it's worth adding the tags to get it fixed there.
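
(For reference, with the commit identified above, the tags would
presumably read:

  Fixes: 31d6c6a5b ("vhost: optimize packed ring dequeue")
  Cc: stable@dpdk.org

following the usual DPDK convention of a short hash plus the commit
subject.)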

> Do I need to send another patch version with corrected commit message?
> 

Probably Maxime can do it when applying if you ask nicely :-)

> Thanks!
> 
> On Fri, Jan 31, 2020 at 7:38 PM Kevin Traynor <ktraynor@redhat.com> wrote:
> 
>> Hi Eugenio,
>>
>> On 29/01/2020 19:33, Eugenio Pérez wrote:
>>> The current implementation of vhost_net in the packed vring tries to
>>> fill the shadow vector before sending any actual changes to the guest.
>>> While this can be beneficial for throughput, it conflicts with some
>>> bufferbloat-avoidance methods, like the Linux kernel's NAPI, which stop
>>> transmitting packets if there are too many bytes/buffers in the
>>> driver.
>>>
>>> To solve this, we flush the shadow used ring at the end of
>>> virtio_dev_tx_packed if we have starved the vring, i.e., when the next
>>> buffer is not available for the device.
>>>
>>> Since this last check can be expensive because of the atomic read, we
>>> only perform it if we have not obtained the expected (count) packets.
>>> If we do happen to obtain "count" packets and there are no more packets
>>> available, the caller needs to call virtio_dev_tx_packed again.
>>>
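
[For context on the "expensive because of the atomic read" remark: in
this era of lib/librte_vhost/virtio_net.c, desc_is_avail() does an
acquire load of the descriptor flags, roughly as sketched below; the
exact body may differ between releases.]

	static __rte_always_inline bool
	desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
	{
		uint16_t flags = __atomic_load_n(&desc->flags,
				__ATOMIC_ACQUIRE);

		/* Available iff AVAIL matches the driver's wrap counter
		 * and USED does not. */
		return wrap_counter == !!(flags & VRING_DESC_F_AVAIL) &&
			wrap_counter != !!(flags & VRING_DESC_F_USED);
	}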
>>
>> It seems to be fixing an issue and should be considered for stable
>> branches? You can add the tags needed in the commit message here:
>>
>> Fixes: <commit that introduced bug/missed this case>
>> Cc: stable@dpdk.org
>>
>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>>> ---
>>>  lib/librte_vhost/virtio_net.c | 27 ++++++++++++++++++++++++++-
>>>  1 file changed, 26 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/librte_vhost/virtio_net.c
>> b/lib/librte_vhost/virtio_net.c
>>> index 21c311732..ac2842b2d 100644
>>> --- a/lib/librte_vhost/virtio_net.c
>>> +++ b/lib/librte_vhost/virtio_net.c
>>> @@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
>>>       return pkt_idx;
>>>  }
>>>
>>> +static __rte_always_inline bool
>>> +next_desc_is_avail(const struct vhost_virtqueue *vq)
>>> +{
>>> +     bool wrap_counter = vq->avail_wrap_counter;
>>> +     uint16_t next_used_idx = vq->last_used_idx + 1;
>>> +
>>> +     if (next_used_idx >= vq->size) {
>>> +             next_used_idx -= vq->size;
>>> +             wrap_counter ^= 1;
>>> +     }
>>> +
>>> +     return desc_is_avail(&vq->desc_packed[next_used_idx],
>> wrap_counter);
>>> +}
>>> +
>>>  static __rte_noinline uint16_t
>>>  virtio_dev_tx_packed(struct virtio_net *dev,
>>>                    struct vhost_virtqueue *vq,
>>> @@ -2165,9 +2179,20 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>>>
>>>       } while (remained);
>>>
>>> -     if (vq->shadow_used_idx)
>>> +     if (vq->shadow_used_idx) {
>>>               do_data_copy_dequeue(vq);
>>>
>>> +             if (remained && !next_desc_is_avail(vq)) {
>>> +                     /*
>>> +                      * The guest may be waiting to TX some buffers to
>>> +                      * enqueue more to avoid bufferbloat, so we try to
>>> +                      * reduce latency here.
>>> +                      */
>>> +                     vhost_flush_dequeue_shadow_packed(dev, vq);
>>> +                     vhost_vring_call_packed(dev, vq);
>>> +             }
>>> +     }
>>> +
>>>       return pkt_idx;
>>>  }
>>>
>>>
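
[Aside: per the last paragraph of the commit message, a caller that
wants to drain the ring keeps polling while full bursts come back. A
minimal sketch, assuming the function's (dev, vq, mbuf_pool, pkts,
count) parameters and a hypothetical MAX_PKT_BURST; in practice this
polling happens through the public rte_vhost_dequeue_burst() API,
since virtio_dev_tx_packed() is static:]

	uint16_t nb;

	do {
		/* Returns up to MAX_PKT_BURST dequeued packets. */
		nb = virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts,
					  MAX_PKT_BURST);
		/* ... hand pkts[0..nb-1] to the application ... */
	} while (nb == MAX_PKT_BURST);

	/*
	 * A short burst means the vring drained; with this patch the
	 * shadow ring has been flushed and the guest notified, so no
	 * extra call is needed to bound latency.
	 */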
>>
>>
> 


Thread overview: 12+ messages
2020-01-09 15:47 [dpdk-dev] [Bug 383] dpdk virtio_user lack of notifications make vhost_net+napi stops tx buffers bugzilla
2020-01-09 15:55 ` eperezma
2020-01-15  7:05   ` Liu, Yong
2020-01-29 19:33 ` [dpdk-dev] [PATCH] vhost: flush shadow tx if there is no more packets Eugenio Pérez
2020-01-31 18:38   ` Kevin Traynor
2020-02-04  9:23     ` Eugenio Perez Martin
2020-02-04 13:48       ` Kevin Traynor [this message]
2020-02-04 15:05         ` Eugenio Perez Martin
2020-02-04 15:10           ` Maxime Coquelin
2020-02-05  9:09   ` Maxime Coquelin
2020-02-05  9:47   ` Maxime Coquelin
2020-02-05 15:45     ` Eugenio Perez Martin
