From: Huawei Xie <huawei.xie@intel.com>
To: dev@dpdk.org
Date: Sun, 25 Oct 2015 23:35:00 +0800
Message-Id: <1445787304-18267-4-git-send-email-huawei.xie@intel.com>
In-Reply-To: <1445787304-18267-1-git-send-email-huawei.xie@intel.com>
References: <1443537953-23917-1-git-send-email-huawei.xie@intel.com>
 <1445787304-18267-1-git-send-email-huawei.xie@intel.com>
Subject: [dpdk-dev] [PATCH v5 3/7] virtio: rx/tx ring layout optimization

Changes in v4:
- fix the error in the tx ring layout chart in this commit message

In a DPDK-based switching environment, vhost usually runs on a dedicated
core while virtio processing in guest VMs runs on different cores.
Take RX for example: with the generic implementation, for each guest buffer,

a) the virtio driver allocates a descriptor from the free descriptor list
b) modifies the avail ring entry to point to the allocated descriptor
c) frees the descriptor after the packet is received

When vhost fetches the avail ring, it needs to fetch the modified cache
line from the virtio core's L1 cache, which is a heavy cost on current
CPU implementations.

The idea of this optimization is to allocate a fixed descriptor for each
avail ring entry, so the avail ring stays constant during the run.
This removes the transfer of modified (M-state) cache lines for the avail
ring from the virtio core to the vhost core. (Note we cannot avoid the
cache transfer for the descriptors themselves.)
Besides, descriptor allocation and free operations are eliminated.
This also makes vector processing possible, which further accelerates
the data path.

This is the layout of the avail ring (taking 256 ring entries as an
example), with each entry pointing to the descriptor with the same index.

                  avail
                  idx
                  +
                  |
+----+----+---+-------------+------+------+
|  0 |  1 | 2 |     ...     | 254  | 255  |  avail ring
+-+--+-+--+-+-+-------------+--+---+--+---+
  |    |    |                  |      |
  |    |    |                  |      |
  v    v    v                  v      v
+-+--+-+--+-+-+-------------+--+---+--+---+
|  0 |  1 | 2 |     ...     | 254  | 255  |  desc ring
+----+----+---+-------------+------+------+
                  |
                  |
+----+----+---+-------------+------+------+
|  0 |  1 | 2 |             | 254  | 255  |  used ring
+----+----+---+-------------+------+------+
                  |
                  +

This is the ring layout for TX.
As we need one virtio header for each xmit packet, we have 128 slots
available for packet data.

                         ++
                         ||
                         ||
+-----+-----+-----+--------------+------+------+------+
|  0  |  1  | ... |  127 || 128  | 129  | ...  | 255  |  avail ring
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
   |     |      |        ||    |     |            |
   v     v      v        ||    v     v            v
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
| 128 | 129 | ... |  255 || 128  | 129  | ...  | 255  |  desc ring for virtio_net_hdr
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
   |     |      |        ||    |     |            |
   v     v      v        ||    v     v            v
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
|  0  |  1  | ... |  127 ||  0   |  1   | ...  | 127  |  desc ring for tx data
+-----+-----+-----+--------------+------+------+------+
                         ||
                         ||
                         ++
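Two minimal C sketches of how a datapath can use these fixed rings follow.
They are illustrative only and not part of this patch: the helper names
refill_rx_fixed() and xmit_fixed(), the omitted free-count/used-ring
handling, and the barrier choice are simplifying assumptions on top of the
virtio PMD's internal structs; the real fast-path code comes later in this
series.

/*
 * Sketch (assumes the virtio PMD's internal headers): RX refill when
 * the avail ring is fixed as ring[i] == i.  Only desc[idx].addr/len
 * and avail->idx are written; avail->ring[] is never touched after
 * init, so its cache lines are never dirtied again on the virtio core.
 * VRING_DESC_F_WRITE is already set once at setup by this patch.
 */
static void
refill_rx_fixed(struct virtqueue *vq, struct rte_mbuf **bufs, int n)
{
	uint16_t avail_idx = vq->vq_ring.avail->idx;
	int i;

	for (i = 0; i < n; i++) {
		/* descriptor idx is permanently paired with avail slot idx */
		uint16_t idx = (uint16_t)(avail_idx + i) &
			(vq->vq_nentries - 1);

		vq->vq_ring.desc[idx].addr = bufs[i]->buf_physaddr +
			RTE_PKTMBUF_HEADROOM;
		vq->vq_ring.desc[idx].len = bufs[i]->buf_len -
			RTE_PKTMBUF_HEADROOM;
	}
	rte_compiler_barrier();	/* descriptors visible before the new idx */
	vq->vq_ring.avail->idx = (uint16_t)(avail_idx + n);
}

TX works the same way on the layout above: for any avail slot s, the
prelinked header descriptor chains to data descriptor s & (mid - 1), so
only that data descriptor and avail->idx are written per packet.

/*
 * Sketch (same assumptions): TX enqueue with the split layout above.
 * The header descriptors were chained to the data descriptors once at
 * setup, all pointing at one shared all-zero virtio_net_hdr.
 */
static void
xmit_fixed(struct virtqueue *vq, struct rte_mbuf **pkts, int n)
{
	uint16_t mid = vq->vq_nentries >> 1;
	uint16_t avail_idx = vq->vq_ring.avail->idx;
	int i;

	for (i = 0; i < n; i++) {
		/* data descriptor index implied by the fixed avail ring */
		uint16_t slot = (uint16_t)(avail_idx + i) & (mid - 1);

		vq->vq_ring.desc[slot].addr = pkts[i]->buf_physaddr +
			pkts[i]->data_off;
		vq->vq_ring.desc[slot].len = pkts[i]->pkt_len;
	}
	rte_compiler_barrier();	/* descriptors visible before the new idx */
	vq->vq_ring.avail->idx = (uint16_t)(avail_idx + n);
}

In both sketches avail->ring[] is never written after initialization,
which is exactly what keeps its cache lines out of the Modified state on
the virtio core.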
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
---
 drivers/net/virtio/virtio_rxtx.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 5c00e9d..7c82a6a 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -302,6 +302,12 @@ virtio_dev_vring_start(struct virtqueue *vq, int queue_type)
 	nbufs = 0;
 	error = ENOSPC;
 
+	if (use_simple_rxtx)
+		for (i = 0; i < vq->vq_nentries; i++) {
+			vq->vq_ring.avail->ring[i] = i;
+			vq->vq_ring.desc[i].flags = VRING_DESC_F_WRITE;
+		}
+
 	memset(&vq->fake_mbuf, 0, sizeof(vq->fake_mbuf));
 	for (i = 0; i < RTE_PMD_VIRTIO_RX_MAX_BURST; i++)
 		vq->sw_ring[vq->vq_nentries + i] = &vq->fake_mbuf;
@@ -332,6 +338,24 @@ virtio_dev_vring_start(struct virtqueue *vq, int queue_type)
 		VIRTIO_WRITE_REG_4(vq->hw, VIRTIO_PCI_QUEUE_PFN,
 			vq->mz->phys_addr >> VIRTIO_PCI_QUEUE_ADDR_SHIFT);
 	} else if (queue_type == VTNET_TQ) {
+		if (use_simple_rxtx) {
+			int mid_idx = vq->vq_nentries >> 1;
+			for (i = 0; i < mid_idx; i++) {
+				vq->vq_ring.avail->ring[i] = i + mid_idx;
+				vq->vq_ring.desc[i + mid_idx].next = i;
+				vq->vq_ring.desc[i + mid_idx].addr =
+					vq->virtio_net_hdr_mem +
+					mid_idx * vq->hw->vtnet_hdr_size;
+				vq->vq_ring.desc[i + mid_idx].len =
+					vq->hw->vtnet_hdr_size;
+				vq->vq_ring.desc[i + mid_idx].flags =
+					VRING_DESC_F_NEXT;
+				vq->vq_ring.desc[i].flags = 0;
+			}
+			for (i = mid_idx; i < vq->vq_nentries; i++)
+				vq->vq_ring.avail->ring[i] = i;
+		}
+
 		VIRTIO_WRITE_REG_2(vq->hw, VIRTIO_PCI_QUEUE_SEL,
 			vq->vq_queue_index);
 		VIRTIO_WRITE_REG_4(vq->hw, VIRTIO_PCI_QUEUE_PFN,
-- 
1.8.1.4