From: Cheng Jiang <cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com, yuanx.wang@intel.com, yvonnex.yang@intel.com, xingguang.he@intel.com, Cheng Jiang <cheng1.jiang@intel.com>, stable@dpdk.org
Subject: [PATCH v2 1/2] vhost: fix descs count in async vhost packed ring
Date: Tue, 11 Oct 2022 03:08:02 +0000
Message-Id: <20221011030803.16746-2-cheng1.jiang@intel.com>
In-Reply-To: <20221011030803.16746-1-cheng1.jiang@intel.com>
References: <20220822043126.19340-1-cheng1.jiang@intel.com> <20221011030803.16746-1-cheng1.jiang@intel.com>

When vhost receives packets from the front end through a packed
virtqueue, a single packet may occupy multiple descriptors. We therefore
need to calculate and record the descriptor count for each packet, so
that the available and used descriptor counters are updated correctly
and the available index can be rolled back by the right amount when the
DMA ring is full.
Fixes: fe8477ebbd94 ("vhost: support async packed ring dequeue")
Cc: stable@dpdk.org

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
 lib/vhost/virtio_net.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 8f4d0f0502..457ac2e92a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3548,14 +3548,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
 }
 
 static __rte_always_inline void
-vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
+vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
+				uint16_t buf_id, uint16_t count)
 {
 	struct vhost_async *async = vq->async;
 	uint16_t idx = async->buffer_idx_packed;
 
 	async->buffers_packed[idx].id = buf_id;
 	async->buffers_packed[idx].len = 0;
-	async->buffers_packed[idx].count = 1;
+	async->buffers_packed[idx].count = count;
 
 	async->buffer_idx_packed++;
 	if (async->buffer_idx_packed >= vq->size)
@@ -3576,6 +3577,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
 	uint16_t nr_vec = 0;
 	uint32_t buf_len;
 	struct buf_vector buf_vec[BUF_VECTOR_MAX];
+	struct vhost_async *async = vq->async;
+	struct async_inflight_info *pkts_info = async->pkts_info;
 	static bool allocerr_warned;
 
 	if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
@@ -3604,8 +3607,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
+	pkts_info[slot_idx].descs = desc_count;
+
 	/* update async shadow packed ring */
-	vhost_async_shadow_dequeue_single_packed(vq, buf_id);
+	vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+
+	vq_inc_last_avail_packed(vq, desc_count);
 
 	return err;
 }
@@ -3644,9 +3651,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		pkts_info[slot_idx].mbuf = pkt;
-
-		vq_inc_last_avail_packed(vq, 1);
-
 	}
 
 	n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
@@ -3657,6 +3661,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	pkt_err = pkt_idx - n_xfer;
 
 	if (unlikely(pkt_err)) {
+		uint16_t descs_err = 0;
+
 		pkt_idx -= pkt_err;
 
 		/**
@@ -3673,10 +3679,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		/* recover available ring */
-		if (vq->last_avail_idx >= pkt_err) {
-			vq->last_avail_idx -= pkt_err;
+		if (vq->last_avail_idx >= descs_err) {
+			vq->last_avail_idx -= descs_err;
 		} else {
-			vq->last_avail_idx += vq->size - pkt_err;
+			vq->last_avail_idx += vq->size - descs_err;
 			vq->avail_wrap_counter ^= 1;
 		}
 	}
-- 
2.35.1
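
[Editor's illustration, not part of the patch] The sketch below condenses the
rollback arithmetic into a standalone C program. RING_SIZE, struct pkt_info,
and recover_avail_ring are hypothetical stand-ins for vq->size,
async_inflight_info, and the inline recovery code in virtio_net.c, but the
index arithmetic mirrors the patched hunks: on DMA failure the available
index must be rolled back by the total number of descriptors the failed
packets consumed (descs_err), not by the packet count (pkt_err), because a
single packet may occupy several descriptors.

/* Illustrative sketch; names are hypothetical stand-ins, see note above. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256	/* stand-in for vq->size */

struct pkt_info {
	uint16_t descs;	/* descriptors consumed by this packet */
};

/*
 * Roll the available index back by descs_err descriptors, toggling the
 * wrap counter when the index crosses the ring boundary, mirroring the
 * "recover available ring" hunk of the patch.
 */
static void
recover_avail_ring(uint16_t *last_avail_idx, uint8_t *avail_wrap_counter,
		uint16_t descs_err)
{
	if (*last_avail_idx >= descs_err) {
		*last_avail_idx -= descs_err;
	} else {
		*last_avail_idx += RING_SIZE - descs_err;
		*avail_wrap_counter ^= 1;
	}
}

int
main(void)
{
	/* three in-flight packets; the multi-segment ones use >1 descriptor */
	struct pkt_info pkts[] = { {1}, {3}, {2} };
	uint16_t last_avail_idx = 2, descs_err = 0;
	uint8_t avail_wrap_counter = 1;
	int pkt_err = 3;	/* pretend all three DMA transfers failed */

	/* sum per-packet descriptor counts, as pkts_info[].descs enables */
	while (pkt_err-- > 0)
		descs_err += pkts[pkt_err].descs;

	recover_avail_ring(&last_avail_idx, &avail_wrap_counter, descs_err);

	/* 2 < 6, so the index wraps: 2 + 256 - 6 = 252, wrap counter flips */
	printf("idx=%u wrap=%u (rolled back %u descriptors, not 3 packets)\n",
			(unsigned)last_avail_idx, (unsigned)avail_wrap_counter,
			(unsigned)descs_err);
	return 0;
}

With the pre-patch code, the same failure would subtract only 3 (the packet
count) from the available index, leaving the 3 extra descriptors of the
multi-segment packets permanently lost to the ring.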