From: Eugenio Perez Martin
Date: Tue, 4 Feb 2020 16:05:46 +0100
To: Kevin Traynor, Maxime Coquelin
Cc: dev@dpdk.org, "Liu, Yong", Adrian Moreno Zapata, Jason Wang, "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH] vhost: flush shadow tx if there is no more packets
References: <20200129193310.9157-1-eperezma@redhat.com> <1bdb5d16-2a39-1a19-9c47-69b8cb4607a1@redhat.com>

Ouch, my bad again, sorry :). I've forwarded the patch to stable@; please
let me know if I need to do anything else.

Maxime, please let me know if I need to send a new version with the
"Fixes:" tag :).

Thanks!
On Tue, Feb 4, 2020 at 2:49 PM Kevin Traynor wrote:
>
> On 04/02/2020 09:23, Eugenio Perez Martin wrote:
> > Hi Kevin!
> >
> > Sorry, thanks for noticing it! It fixes commit 31d6c6a5b ("vhost:
> > optimize packed ring dequeue"), which was not present in 18.11 (I've
> > checked that v19.08 does not contain the failure).
> >
>
> Right, in that case the issue is present on 19.11 stable, so it's worth
> adding the tags to get it fixed in 19.11 stable.
>
> > Do I need to send another patch version with a corrected commit message?
> >
>
> Probably Maxime can do it on applying if you ask nicely :-)
>
> > Thanks!
> >
> > On Fri, Jan 31, 2020 at 7:38 PM Kevin Traynor wrote:
> >
> >> Hi Eugenio,
> >>
> >> On 29/01/2020 19:33, Eugenio Pérez wrote:
> >>> The current implementation of vhost_net in packed vring tries to fill
> >>> the shadow vector before sending any actual changes to the guest.
> >>> While this can be beneficial for throughput, it conflicts with some
> >>> bufferbloat-avoidance mechanisms, like the Linux kernel NAPI, which
> >>> stop transmitting packets if there are too many bytes/buffers in the
> >>> driver.
> >>>
> >>> To solve it, we flush the shadow packets at the end of
> >>> virtio_dev_tx_packed if we have starved the vring, i.e., the next
> >>> buffer is not available for the device.
> >>>
> >>> Since this last check can be expensive because of the atomic, we only
> >>> check it if we have not obtained the expected (count) packets. If it
> >>> happens to obtain "count" packets and there are no more available
> >>> packets, the caller needs to keep calling virtio_dev_tx_packed again.
> >>>
> >>
> >> It seems to be fixing an issue and should be considered for stable
> >> branches? You can add the tags needed in the commit message here:
> >>
> >> Fixes:
> >> Cc: stable@dpdk.org
> >>
> >>> Signed-off-by: Eugenio Pérez
> >>> ---
> >>>  lib/librte_vhost/virtio_net.c | 27 ++++++++++++++++++++++++++-
> >>>  1 file changed, 26 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> >>> index 21c311732..ac2842b2d 100644
> >>> --- a/lib/librte_vhost/virtio_net.c
> >>> +++ b/lib/librte_vhost/virtio_net.c
> >>> @@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
> >>>  	return pkt_idx;
> >>>  }
> >>>
> >>> +static __rte_always_inline bool
> >>> +next_desc_is_avail(const struct vhost_virtqueue *vq)
> >>> +{
> >>> +	bool wrap_counter = vq->avail_wrap_counter;
> >>> +	uint16_t next_used_idx = vq->last_used_idx + 1;
> >>> +
> >>> +	if (next_used_idx >= vq->size) {
> >>> +		next_used_idx -= vq->size;
> >>> +		wrap_counter ^= 1;
> >>> +	}
> >>> +
> >>> +	return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
> >>> +}
> >>> +
> >>>  static __rte_noinline uint16_t
> >>>  virtio_dev_tx_packed(struct virtio_net *dev,
> >>>  		     struct vhost_virtqueue *vq,
> >>> @@ -2165,9 +2179,20 @@ virtio_dev_tx_packed(struct virtio_net *dev,
> >>>
> >>>  	} while (remained);
> >>>
> >>> -	if (vq->shadow_used_idx)
> >>> +	if (vq->shadow_used_idx) {
> >>>  		do_data_copy_dequeue(vq);
> >>>
> >>> +		if (remained && !next_desc_is_avail(vq)) {
> >>> +			/*
> >>> +			 * The guest may be waiting to TX some buffers to
> >>> +			 * enqueue more to avoid bufferbloat, so we try to
> >>> +			 * reduce latency here.
> >>> +			 */
> >>> +			vhost_flush_dequeue_shadow_packed(dev, vq);
> >>> +			vhost_vring_call_packed(dev, vq);
> >>> +		}
> >>> +	}
> >>> +
> >>>  	return pkt_idx;
> >>>  }
> >>>
> >>
> >
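
For context on the "expensive because of the atomic" remark above:
next_desc_is_avail() ends in desc_is_avail(), whose cost comes from an
acquire load of the descriptor flags. Below is a minimal, self-contained
sketch of that check, written from the packed-ring layout in the virtio 1.1
spec; the actual helper in lib/librte_vhost may differ in details, so treat
it as an illustration rather than the upstream code:

#include <stdbool.h>
#include <stdint.h>

/* Flag bits of a packed virtqueue descriptor (virtio 1.1). */
#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

/* Packed-ring descriptor layout (little-endian fields in the spec). */
struct vring_packed_desc {
	uint64_t addr;   /* guest-physical address of the buffer */
	uint32_t len;    /* buffer length */
	uint16_t id;     /* buffer id echoed back when the buffer is used */
	uint16_t flags;  /* AVAIL/USED/WRITE/... bits */
};

static inline bool
desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
{
	/* Acquire load of the flags: the atomic the commit message refers to. */
	uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);

	/*
	 * The descriptor belongs to the device when its AVAIL bit matches
	 * the current wrap counter and its USED bit does not; both bits
	 * flip meaning every time the ring wraps around.
	 */
	return wrap_counter == !!(flags & VRING_DESC_F_AVAIL) &&
	       wrap_counter != !!(flags & VRING_DESC_F_USED);
}

This is also why next_desc_is_avail() in the patch flips wrap_counter when
next_used_idx wraps past vq->size: the same slot only becomes available
again once the driver has refilled it on the next lap of the ring.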