From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: dev@dpdk.org
Cc: huawei.xie@intel.com, Yuanhan Liu <yuanhan.liu@linux.intel.com>
Date: Mon, 2 May 2016 17:46:17 -0700
Message-Id: <1462236378-7604-3-git-send-email-yuanhan.liu@linux.intel.com>
X-Mailer: git-send-email 1.9.0
In-Reply-To: <1462236378-7604-1-git-send-email-yuanhan.liu@linux.intel.com>
References: <1462236378-7604-1-git-send-email-yuanhan.liu@linux.intel.com>
Subject: [dpdk-dev] [PATCH 2/3] vhost: optimize dequeue for small packets

Both the current kernel virtio driver and the DPDK virtio driver use at
least 2 desc buffers for Tx: the first for storing the header, and the
others for storing the data.

Therefore, we could fetch the first data desc buf before the main loop,
and do the copy first before the check of "are we done yet?". This
saves one check for small packets that have just one data desc buffer
and need only one mbuf to store it.

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 lib/librte_vhost/vhost_rxtx.c | 52 ++++++++++++++++++++++++++++++-------------
 1 file changed, 36 insertions(+), 16 deletions(-)
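For readers new to virtio, here is a minimal standalone sketch (not part
of the patch) of the Tx descriptor chain layout the fast path below keys
off. The struct is a simplified stand-in for the real vring descriptor,
and the 12-byte header length assumes the mergeable-Rx-buf virtio-net
header; both names are hypothetical:

#include <stdint.h>
#include <stdio.h>

#define VRING_DESC_F_NEXT 1	/* chain continues via 'next' */

/* Simplified stand-in for the vring descriptor layout. */
struct desc_sketch {
	uint64_t addr;	/* guest-physical buffer address */
	uint32_t len;	/* buffer length in bytes */
	uint16_t flags;	/* VRING_DESC_F_NEXT if chained */
	uint16_t next;	/* index of the following descriptor */
};

int main(void)
{
	uint32_t vhost_hlen = 12;	/* e.g. virtio_net_hdr_mrg_rxbuf */

	/* Typical small-packet Tx chain: a header-only desc, then a
	 * single data desc. */
	struct desc_sketch chain[2] = {
		{ .addr = 0x1000, .len = 12,   .flags = VRING_DESC_F_NEXT, .next = 1 },
		{ .addr = 0x2000, .len = 1500, .flags = 0,                 .next = 0 },
	};

	/* The patch's fast path: a first desc holding exactly the header
	 * means the first data desc can be fetched before the copy loop. */
	if (chain[0].len == vhost_hlen)
		printf("header-only first desc; data in desc %u (%u bytes)\n",
		       (unsigned)chain[0].next, (unsigned)chain[1].len);
	return 0;
}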
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 2c3b810..34d6ed1 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -753,18 +753,48 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		return -1;
 
 	desc_addr = gpa_to_vva(dev, desc->addr);
-	rte_prefetch0((void *)(uintptr_t)desc_addr);
-
-	/* Retrieve virtio net header */
 	hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
-	desc_avail  = desc->len - dev->vhost_hlen;
-	desc_offset = dev->vhost_hlen;
+	rte_prefetch0(hdr);
+
+	/*
+	 * Both the current kernel virtio driver and the DPDK virtio
+	 * driver use at least 2 desc buffers for Tx: the first for
+	 * storing the header, and the others for storing the data.
+	 */
+	if (likely(desc->len == dev->vhost_hlen)) {
+		desc = &vq->desc[desc->next];
+
+		desc_addr = gpa_to_vva(dev, desc->addr);
+		rte_prefetch0((void *)(uintptr_t)desc_addr);
+
+		desc_offset = 0;
+		desc_avail  = desc->len;
+		nr_desc    += 1;
+
+		PRINT_PACKET(dev, (uintptr_t)desc_addr, desc->len, 0);
+	} else {
+		desc_avail  = desc->len - dev->vhost_hlen;
+		desc_offset = dev->vhost_hlen;
+	}
 
 	mbuf_offset = 0;
 	mbuf_avail  = m->buf_len - RTE_PKTMBUF_HEADROOM;
-	while (desc_avail != 0 || (desc->flags & VRING_DESC_F_NEXT) != 0) {
+	while (1) {
+		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
+		rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
+			(void *)((uintptr_t)(desc_addr + desc_offset)),
+			cpy_len);
+
+		mbuf_avail  -= cpy_len;
+		mbuf_offset += cpy_len;
+		desc_avail  -= cpy_len;
+		desc_offset += cpy_len;
+
 		/* This desc reaches to its end, get the next one */
 		if (desc_avail == 0) {
+			if ((desc->flags & VRING_DESC_F_NEXT) == 0)
+				break;
+
 			if (unlikely(desc->next >= vq->size ||
 				     ++nr_desc >= vq->size))
 				return -1;
@@ -800,16 +830,6 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			mbuf_offset = 0;
 			mbuf_avail  = cur->buf_len - RTE_PKTMBUF_HEADROOM;
 		}
-
-		cpy_len = RTE_MIN(desc_avail, mbuf_avail);
-		rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
-			(void *)((uintptr_t)(desc_addr + desc_offset)),
-			cpy_len);
-
-		mbuf_avail  -= cpy_len;
-		mbuf_offset += cpy_len;
-		desc_avail  -= cpy_len;
-		desc_offset += cpy_len;
 	}
 
 	prev->data_len = mbuf_offset;
-- 
1.9.0
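For readers who want the loop restructuring in isolation: a minimal
standalone sketch of the "copy first, then check" shape the patch
adopts, assuming a flat segment array in place of the real desc chain
and mbuf handling. copy_then_check(), seg, seg_len, and dst are
hypothetical names for illustration, not DPDK APIs:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Copy every segment into dst, testing for termination only after
 * each copy; memcpy() stands in for rte_memcpy(). */
static size_t copy_then_check(uint8_t *dst, const uint8_t **seg,
			      const size_t *seg_len, unsigned int nsegs)
{
	unsigned int i = 0;
	size_t off = 0;

	while (1) {
		/* Do the copy unconditionally... */
		memcpy(dst + off, seg[i], seg_len[i]);
		off += seg_len[i];

		/* ...then ask "are we done yet?" (the test of
		 * VRING_DESC_F_NEXT in the patch). A packet with one
		 * data desc exits here after a single check, where the
		 * old check-then-copy loop tested its condition both
		 * before and after that one copy. */
		if (++i == nsegs)
			break;
	}
	return off;
}

int main(void)
{
	const uint8_t a[] = "abcd", b[] = "efg";
	const uint8_t *seg[] = { a, b };
	const size_t seg_len[] = { 4, 3 };
	uint8_t dst[16];

	size_t n = copy_then_check(dst, seg, seg_len, 2);
	printf("copied %zu bytes: %.*s\n", n, (int)n, dst);
	return 0;
}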