From: Eugenio Perez Martin
Date: Tue, 4 Feb 2020 10:23:35 +0100
To: Kevin Traynor
Cc: dev@dpdk.org, "Liu, Yong", Maxime Coquelin, Adrian Moreno Zapata,
 Jason Wang, "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH] vhost: flush shadow tx if there is no more packets
In-Reply-To: <1bdb5d16-2a39-1a19-9c47-69b8cb4607a1@redhat.com>
References: <20200129193310.9157-1-eperezma@redhat.com>
 <1bdb5d16-2a39-1a19-9c47-69b8cb4607a1@redhat.com>
List-Id: DPDK patches and discussions

Hi Kevin!

Sorry, thanks for noticing it!
It fixes commit 31d6c6a5b ("vhost: optimize packed ring dequeue"), which
was not present in the 18.11 version (I've checked that v19.08 does not
contain the failure).

Do I need to send another patch version with a corrected commit message?

Thanks!

On Fri, Jan 31, 2020 at 7:38 PM Kevin Traynor wrote:
>
> Hi Eugenio,
>
> On 29/01/2020 19:33, Eugenio Pérez wrote:
> > The current implementation of vhost_net in packed vring tries to fill
> > the shadow vector before sending any actual changes to the guest. While
> > this can be beneficial for throughput, it conflicts with some
> > bufferbloat mitigation methods, like the Linux kernel NAPI, which stop
> > transmitting packets if there are too many bytes/buffers in the driver.
> >
> > To solve it, we flush the shadow packets at the end of
> > virtio_dev_tx_packed if we have starved the vring, i.e., the next
> > buffer is not available for the device.
> >
> > Since this last check can be expensive because of the atomic, we only
> > do it if we have not obtained the expected (count) packets. If we do
> > obtain "count" packets and there are no more available packets, the
> > caller needs to call virtio_dev_tx_packed again.
> >
>
> It seems to be fixing an issue and should be considered for stable
> branches? You can add the tags needed in the commit message here:
>
> Fixes:
> Cc: stable@dpdk.org
>
> > Signed-off-by: Eugenio Pérez
> > ---
> >  lib/librte_vhost/virtio_net.c | 27 ++++++++++++++++++++++++++-
> >  1 file changed, 26 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 21c311732..ac2842b2d 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
> >          return pkt_idx;
> >  }
> >
> > +static __rte_always_inline bool
> > +next_desc_is_avail(const struct vhost_virtqueue *vq)
> > +{
> > +        bool wrap_counter = vq->avail_wrap_counter;
> > +        uint16_t next_used_idx = vq->last_used_idx + 1;
> > +
> > +        if (next_used_idx >= vq->size) {
> > +                next_used_idx -= vq->size;
> > +                wrap_counter ^= 1;
> > +        }
> > +
> > +        return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
> > +}
> > +
> >  static __rte_noinline uint16_t
> >  virtio_dev_tx_packed(struct virtio_net *dev,
> >          struct vhost_virtqueue *vq,
> > @@ -2165,9 +2179,20 @@ virtio_dev_tx_packed(struct virtio_net *dev,
> >
> >          } while (remained);
> >
> > -        if (vq->shadow_used_idx)
> > +        if (vq->shadow_used_idx) {
> >                  do_data_copy_dequeue(vq);
> >
> > +                if (remained && !next_desc_is_avail(vq)) {
> > +                        /*
> > +                         * The guest may be waiting to TX some buffers to
> > +                         * enqueue more to avoid bufferbloat, so we try to
> > +                         * reduce latency here.
> > +                         */
> > +                        vhost_flush_dequeue_shadow_packed(dev, vq);
> > +                        vhost_vring_call_packed(dev, vq);
> > +                }
> > +        }
> > +
> >          return pkt_idx;
> >  }
> >
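
In case it helps the review: the availability test that next_desc_is_avail()
delegates to is essentially the packed-ring check from the virtio 1.1 spec,
comparing the descriptor's AVAIL/USED flag bits against the ring's wrap
counter. A simplified, self-contained sketch follows; the names and the exact
memory-ordering primitive are mine for illustration, not necessarily the ones
used in lib/librte_vhost/vhost.h:

#include <stdbool.h>
#include <stdint.h>

/* Flag bits from the virtio 1.1 packed ring layout. */
#define DESC_F_AVAIL (1u << 7)
#define DESC_F_USED  (1u << 15)

/* Simplified packed descriptor; the real one also carries addr/len/id. */
struct packed_desc {
        uint16_t flags;
};

/*
 * A descriptor is available to the device when its AVAIL bit matches the
 * ring's wrap counter and its USED bit does not. The flags field is
 * written concurrently by the driver, hence the acquire load.
 */
static bool
sketch_desc_is_avail(const struct packed_desc *desc, bool wrap_counter)
{
        uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);

        return wrap_counter == !!(flags & DESC_F_AVAIL) &&
               wrap_counter != !!(flags & DESC_F_USED);
}

That load on the flags is the "expensive atomic" the commit message refers
to, which is why the patch only peeks at the next descriptor when the burst
came back short.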