From: Jiayu Hu <jiayu.hu@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, cheng1.jiang@intel.com,
 yinan.wang@intel.com, jiayu.hu@intel.com
Date: Tue, 22 Dec 2020 04:46:04 -0500
Message-Id: <1608630365-131192-2-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1608630365-131192-1-git-send-email-jiayu.hu@intel.com>
References: <1607678500-172518-1-git-send-email-jiayu.hu@intel.com>
 <1608630365-131192-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [Patch v2 1/2] vhost: cleanup async enqueue

This patch removes unnecessary checks and function calls, changes
internal variables to appropriate types, and fixes typos.

Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
---
 lib/librte_vhost/rte_vhost_async.h |  6 +++---
 lib/librte_vhost/virtio_net.c      | 16 ++++++++--------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/lib/librte_vhost/rte_vhost_async.h b/lib/librte_vhost/rte_vhost_async.h
index c73bd7c..3be4ee4 100644
--- a/lib/librte_vhost/rte_vhost_async.h
+++ b/lib/librte_vhost/rte_vhost_async.h
@@ -147,8 +147,8 @@ __rte_experimental
 int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id);
 
 /**
- * This function submit enqueue data to async engine. This function has
- * no guranttee to the transfer completion upon return. Applications
+ * This function submits enqueue data to async engine. This function has
+ * no guarantee to the transfer completion upon return. Applications
  * should poll transfer status by rte_vhost_poll_enqueue_completed()
  *
  * @param vid
@@ -167,7 +167,7 @@ uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint16_t count);
 
 /**
- * This function check async completion status for a specific vhost
+ * This function checks async completion status for a specific vhost
  * device queue. Packets which finish copying (enqueue) operation
  * will be returned in an array.
 *
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 6c51286..fc654be 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1128,8 +1128,11 @@ async_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	}
 
 out:
-	async_fill_iter(src_it, tlen, src_iovec, tvec_idx);
-	async_fill_iter(dst_it, tlen, dst_iovec, tvec_idx);
+	if (tlen) {
+		async_fill_iter(src_it, tlen, src_iovec, tvec_idx);
+		async_fill_iter(dst_it, tlen, dst_iovec, tvec_idx);
+	} else
+		src_it->count = 0;
 
 	return error;
 }
@@ -1492,10 +1495,9 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 	struct rte_vhost_iov_iter *src_it = it_pool;
 	struct rte_vhost_iov_iter *dst_it = it_pool + 1;
 	uint16_t n_free_slot, slot_idx = 0;
-	uint16_t pkt_err = 0;
 	uint16_t segs_await = 0;
 	struct async_inflight_info *pkts_info = vq->async_pkts_info;
-	int n_pkts = 0;
+	uint32_t n_pkts = 0, pkt_err = 0;
 
 	avail_head = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE);
 
@@ -1553,11 +1555,9 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 		/*
 		 * conditions to trigger async device transfer:
 		 * - buffered packet number reaches transfer threshold
-		 * - this is the last packet in the burst enqueue
 		 * - unused async iov number is less than max vhost vector
 		 */
 		if (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			(pkt_idx == count - 1 && pkt_burst_idx) ||
 			(VHOST_MAX_ASYNC_VEC / 2 - segs_await <
 			BUF_VECTOR_MAX)) {
 			n_pkts = vq->async_ops.transfer_data(dev->vid,
@@ -1569,7 +1569,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 			segs_await = 0;
 			vq->async_pkts_inflight_n += pkt_burst_idx;
 
-			if (unlikely(n_pkts < (int)pkt_burst_idx)) {
+			if (unlikely(n_pkts < pkt_burst_idx)) {
 				/*
 				 * log error packets number here and do actual
 				 * error processing when applications poll
@@ -1589,7 +1589,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 			queue_id, tdes, 0, pkt_burst_idx);
 		vq->async_pkts_inflight_n += pkt_burst_idx;
 
-		if (unlikely(n_pkts < (int)pkt_burst_idx))
+		if (unlikely(n_pkts < pkt_burst_idx))
 			pkt_err = pkt_burst_idx - n_pkts;
 	}
 
-- 
2.7.4
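
For reference, the comments fixed above describe the intended use of the async
enqueue path: packets are handed to the async engine with
rte_vhost_submit_enqueue_burst(), and the application later reclaims the mbufs
whose copies have finished with rte_vhost_poll_enqueue_completed(). Below is a
minimal, hypothetical sketch of that submit/poll pattern; the
async_enqueue_burst() wrapper, the MAX_PKT_BURST size and the choice to drop
unsubmitted packets are illustrative assumptions, not part of this patch.

/* Hypothetical sketch of the submit/poll usage described in the comments
 * above. Assumes an async channel has already been registered for this
 * virtqueue with rte_vhost_async_channel_register().
 */
#include <rte_mbuf.h>
#include <rte_vhost_async.h>

#define MAX_PKT_BURST 32	/* illustrative burst size */

static void
async_enqueue_burst(int vid, uint16_t queue_id,
		struct rte_mbuf **pkts, uint16_t count)
{
	struct rte_mbuf *done[MAX_PKT_BURST];
	uint16_t n_enq, n_done, i;

	/* Submit the burst; copies may still be in flight on return. */
	n_enq = rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count);

	/* Packets beyond n_enq were not accepted; the caller still owns
	 * them and may retry or drop them (dropped here for brevity).
	 */
	for (i = n_enq; i < count; i++)
		rte_pktmbuf_free(pkts[i]);

	/* Poll completion status; packets whose copy has finished are
	 * returned in done[] and can be released by the application.
	 */
	n_done = rte_vhost_poll_enqueue_completed(vid, queue_id,
			done, MAX_PKT_BURST);
	for (i = 0; i < n_done; i++)
		rte_pktmbuf_free(done[i]);
}

In a real datapath the poll step would typically run periodically rather than
immediately after submission, so in-flight copies have time to complete.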