From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marvin Liu
To: maxime.coquelin@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com,
	stephen@networkplumber.org, gavin.hu@arm.com
Cc: dev@dpdk.org, Marvin Liu
Date: Thu, 26 Sep 2019 01:13:29 +0800
Message-Id: <20190925171329.63734-16-yong.liu@intel.com>
In-Reply-To: <20190925171329.63734-1-yong.liu@intel.com>
References: <20190919163643.24130-2-yong.liu@intel.com>
	<20190925171329.63734-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v3 15/15] vhost: optimize packed ring dequeue when in-order
List-Id: DPDK patches and discussions <dev.dpdk.org>

When the VIRTIO_F_IN_ORDER feature is negotiated, vhost can optimize the
dequeue function by updating only the first used descriptor.
Signed-off-by: Marvin Liu

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index e3872e384..1e113fb3a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -31,6 +31,12 @@ rxvq_is_mergeable(struct virtio_net *dev)
 	return dev->features & (1ULL << VIRTIO_NET_F_MRG_RXBUF);
 }
 
+static __rte_always_inline bool
+virtio_net_is_inorder(struct virtio_net *dev)
+{
+	return dev->features & (1ULL << VIRTIO_F_IN_ORDER);
+}
+
 static bool
 is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 {
@@ -214,6 +220,29 @@ flush_used_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	}
 }
 
+static __rte_always_inline void
+update_dequeue_batch_packed_inorder(struct vhost_virtqueue *vq, uint16_t id)
+{
+	vq->shadow_used_packed[0].id = id;
+
+	if (!vq->shadow_used_idx) {
+		vq->dequeue_shadow_head = vq->last_used_idx;
+		vq->shadow_used_packed[0].len = 0;
+		vq->shadow_used_packed[0].count = 1;
+		vq->shadow_used_packed[0].used_idx = vq->last_used_idx;
+		vq->shadow_used_packed[0].used_wrap_counter =
+			vq->used_wrap_counter;
+
+		vq->shadow_used_idx = 1;
+	}
+
+	vq->last_used_idx += PACKED_BATCH_SIZE;
+	if (vq->last_used_idx >= vq->size) {
+		vq->used_wrap_counter ^= 1;
+		vq->last_used_idx -= vq->size;
+	}
+}
+
 static __rte_always_inline void
 update_dequeue_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
			    uint16_t *ids)
@@ -321,6 +350,32 @@ update_dequeue_shadow_used_ring_packed(struct vhost_virtqueue *vq,
 	}
 }
 
+static __rte_always_inline void
+update_dequeue_shadow_used_ring_packed_inorder(struct vhost_virtqueue *vq,
+	uint16_t buf_id, uint16_t count)
+{
+	vq->shadow_used_packed[0].id = buf_id;
+
+	if (!vq->shadow_used_idx) {
+		vq->dequeue_shadow_head = vq->last_used_idx;
+
+		vq->shadow_used_packed[0].len = 0;
+		vq->shadow_used_packed[0].count = count;
+		vq->shadow_used_packed[0].used_idx = vq->last_used_idx;
+		vq->shadow_used_packed[0].used_wrap_counter =
+			vq->used_wrap_counter;
+
+		vq->shadow_used_idx = 1;
+	}
+
+	vq->last_used_idx += count;
+
+	if (vq->last_used_idx >= vq->size) {
+		vq->used_wrap_counter ^= 1;
+		vq->last_used_idx -= vq->size;
+	}
+}
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
@@ -1801,8 +1856,12 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
			   (void *)(uintptr_t)(desc_addrs[i] + buf_offset),
			   pkts[i]->pkt_len);
 	}
 
+	if (virtio_net_is_inorder(dev))
+		update_dequeue_batch_packed_inorder(vq,
+						    ids[PACKED_BATCH_MASK]);
+	else
+		update_dequeue_batch_packed(dev, vq, ids);
-	update_dequeue_batch_packed(dev, vq, ids);
 
 	if (virtio_net_with_host_offload(dev)) {
 		UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
@@ -1865,7 +1924,11 @@ virtio_dev_tx_single_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
					 &desc_count))
 		return -1;
 
-	update_dequeue_shadow_used_ring_packed(vq, buf_id, desc_count);
+	if (virtio_net_is_inorder(dev))
+		update_dequeue_shadow_used_ring_packed_inorder(vq, buf_id,
+							       desc_count);
+	else
+		update_dequeue_shadow_used_ring_packed(vq, buf_id, desc_count);
 
 	vq->last_avail_idx += desc_count;
 	if (vq->last_avail_idx >= vq->size) {
-- 
2.17.1