From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <2ffe72d3-08ea-3baf-49c7-7510af00fbcd@redhat.com>
Date: Mon, 3 Oct 2022 12:07:29 +0200
Subject: Re: [PATH 1/2] vhost: fix descs count in async vhost packed ring
To: Cheng Jiang, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com,
 yuanx.wang@intel.com, yvonnex.yang@intel.com, xingguang.he@intel.com
References: <20220822043126.19340-1-cheng1.jiang@intel.com>
 <20220822043126.19340-2-cheng1.jiang@intel.com>
From: Maxime Coquelin
In-Reply-To: <20220822043126.19340-2-cheng1.jiang@intel.com>

On 8/22/22 06:31, Cheng Jiang wrote:
> When vhost receive packets from the front-end using packed virtqueue, it

receives*

> might use multiple descriptors for one packet, so we need calculate and

so we need to*

> record the descriptor number for each packet to update available
> descriptor counter and used descriptor counter, and rollback when DMA
> ring is full.

This is a fix, so the Fixes tag should be present, and stable@dpdk.org
cc'ed.
> Signed-off-by: Cheng Jiang
> ---
>  lib/vhost/virtio_net.c | 24 +++++++++++++++---------
>  1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 35fa4670fd..bfc6d65b7c 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -3553,14 +3553,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
>  }
>
>  static __rte_always_inline void
> -vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
> +vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
> +		uint16_t buf_id, uint16_t count)
>  {
>  	struct vhost_async *async = vq->async;
>  	uint16_t idx = async->buffer_idx_packed;
>
>  	async->buffers_packed[idx].id = buf_id;
>  	async->buffers_packed[idx].len = 0;
> -	async->buffers_packed[idx].count = 1;
> +	async->buffers_packed[idx].count = count;
>
>  	async->buffer_idx_packed++;
>  	if (async->buffer_idx_packed >= vq->size)
> @@ -3581,6 +3582,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
>  	uint16_t nr_vec = 0;
>  	uint32_t buf_len;
>  	struct buf_vector buf_vec[BUF_VECTOR_MAX];
> +	struct vhost_async *async = vq->async;
> +	struct async_inflight_info *pkts_info = async->pkts_info;
>  	static bool allocerr_warned;
>
>  	if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
> @@ -3609,8 +3612,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
>  		return -1;
>  	}
>
> +	pkts_info[slot_idx].descs = desc_count;
> +
>  	/* update async shadow packed ring */
> -	vhost_async_shadow_dequeue_single_packed(vq, buf_id);
> +	vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
> +
> +	vq_inc_last_avail_packed(vq, desc_count);
>
>  	return err;
>  }
> @@ -3649,9 +3656,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		}
>
>  		pkts_info[slot_idx].mbuf = pkt;
> -
> -		vq_inc_last_avail_packed(vq, 1);
> -
>  	}
>
>  	n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
> @@ -3662,6 +3666,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	pkt_err = pkt_idx - n_xfer;
>
>  	if (unlikely(pkt_err)) {
> +		uint16_t descs_err = 0;
> +
>  		pkt_idx -= pkt_err;
>
>  		/**
> @@ -3678,10 +3684,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		}
>
>  		/* recover available ring */
> -		if (vq->last_avail_idx >= pkt_err) {
> -			vq->last_avail_idx -= pkt_err;
> +		if (vq->last_avail_idx >= descs_err) {
> +			vq->last_avail_idx -= descs_err;
>  		} else {
> -			vq->last_avail_idx += vq->size - pkt_err;
> +			vq->last_avail_idx += vq->size - descs_err;
>  			vq->avail_wrap_counter ^= 1;
>  		}
>  	}

I'm not sure I understand: isn't descs_err always 0 here?

Maxime
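
To make the question above concrete, here is a small standalone sketch
(plain C, not DPDK code; every name and value in it is invented for the
example) of why descs_err has to be accumulated from the per-packet
descriptor counts the patch records in pkts_info before the recovery
step, and what the wrap-around handling then does:

/*
 * Standalone sketch, not DPDK code: all names and values below are made up
 * to illustrate why the rollback on DMA enqueue failure has to be done in
 * descriptors (the per-packet counts the patch records in pkts_info) rather
 * than in packets.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256

int
main(void)
{
	/* Descriptors consumed by each packet of a 4-packet burst. */
	uint16_t descs_per_pkt[] = { 2, 4, 3, 3 };
	uint16_t nb_pkts = 4;
	uint16_t pkt_err = 2;		/* the last two packets failed to enqueue to DMA */

	/* Ring state after consuming all 12 descriptors (the ring wrapped during the burst). */
	uint16_t last_avail_idx = 4;
	uint8_t avail_wrap_counter = 0;

	/* Sum the descriptors of the failed packets only. */
	uint16_t descs_err = 0;
	for (uint16_t i = nb_pkts - pkt_err; i < nb_pkts; i++)
		descs_err += descs_per_pkt[i];

	/* Recover the available ring, handling wrap-around as the patch does. */
	if (last_avail_idx >= descs_err) {
		last_avail_idx -= descs_err;
	} else {
		last_avail_idx += RING_SIZE - descs_err;
		avail_wrap_counter ^= 1;
	}

	/* Prints: descs_err=6 last_avail_idx=254 wrap=1 */
	printf("descs_err=%u last_avail_idx=%u wrap=%u\n",
	       descs_err, last_avail_idx, avail_wrap_counter);
	return 0;
}

With these sample values, rolling back by pkt_err (2 packets) would leave
last_avail_idx at 2 with the wrap counter unchanged, whereas rolling back
by the 6 descriptors those packets actually consumed restores the position
the two successful packets left the ring in (index 254, wrap counter
flipped back).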