From: Yuan Wang <yuanx.wang@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, Sunil.Pai.G@intel.com,
	jiayu.hu@intel.com, xuan.ding@intel.com, cheng1.jiang@intel.com,
	wenwux.ma@intel.com, yvonnex.yang@intel.com,
	Yuan Wang <yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH 1/2] vhost: support clearing in-flight packets for async dequeue
Date: Thu, 9 Sep 2021 06:58:06 +0000
Message-Id: <20210909065807.812145-2-yuanx.wang@intel.com>
In-Reply-To: <20210909065807.812145-1-yuanx.wang@intel.com>
References: <20210909065807.812145-1-yuanx.wang@intel.com>

rte_vhost_clear_queue_thread_unsafe() currently only clears in-flight
packets for async enqueue. Now that async dequeue is supported, this API
should clear in-flight packets for async dequeue as well.

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 lib/vhost/virtio_net.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index e0159b53e3..7f6183a929 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -27,6 +27,11 @@
 
 #define VHOST_ASYNC_BATCH_THRESHOLD 32
 
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev,
+		struct vhost_virtqueue *vq, uint16_t queue_id,
+		struct rte_mbuf **pkts, uint16_t count, bool legacy_ol_flags);
+
 static __rte_always_inline bool
 rxvq_is_mergeable(struct virtio_net *dev)
 {
@@ -2119,11 +2124,6 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 
 	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
-		VHOST_LOG_DATA(ERR, "(%d) %s: invalid virtqueue idx %d.\n",
-			dev->vid, __func__, queue_id);
-		return 0;
-	}
 
 	vq = dev->virtqueue[queue_id];
 
@@ -2133,7 +2133,11 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
 		return 0;
 	}
 
-	n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	if ((queue_id % 2) == 0)
+		n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+	else
+		n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts, count,
+					dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
 
 	return n_pkts_cpl;
 }
-- 
2.25.1
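
As context, a minimal usage sketch of the updated API from the application
side, assuming a teardown path that drains virtqueue pair 0 before
unregistering the async channel. The vid handle, burst size, and mbuf
handling below are illustrative assumptions, not part of the patch; the
even/odd queue_id split follows the virtio convention used in the change
(even = enqueue/RX path, odd = dequeue/TX path).

/*
 * Illustrative sketch only: drain in-flight async packets on both
 * directions of virtqueue pair 0. Even queue_id -> enqueue (RX) path,
 * odd queue_id -> dequeue (TX) path with this patch applied.
 */
#include <rte_mbuf.h>
#include <rte_vhost_async.h>

#define CLEAR_BURST 32	/* illustrative burst size */

static void
clear_vring_pair(int vid, struct rte_mbuf **pkts)
{
	uint16_t n;

	/* Even queue_id: completed async enqueue packets are returned. */
	n = rte_vhost_clear_queue_thread_unsafe(vid, 0, pkts, CLEAR_BURST);
	while (n)
		rte_pktmbuf_free(pkts[--n]);

	/* Odd queue_id: with this patch, in-flight async dequeue packets
	 * are returned here as well, instead of being polled through the
	 * enqueue completion path.
	 */
	n = rte_vhost_clear_queue_thread_unsafe(vid, 1, pkts, CLEAR_BURST);
	while (n)
		rte_pktmbuf_free(pkts[--n]);
}

In both directions the returned mbufs belong to the caller, so freeing
them above is shown purely as teardown cleanup.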