From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, xiaolong.ye@intel.com, zhihong.wang@intel.com,
	eperezma@redhat.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>, stable@dpdk.org
Date: Fri, 17 Apr 2020 10:39:05 +0800
Message-Id: <20200417023905.34801-1-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200401212926.74989-1-yong.liu@intel.com>
References: <20200401212926.74989-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH v2] vhost: fix shadow update

Deferring the shadow ring update introduces a functional issue, which has
been described in Eugenio's fix patch.

The current implementation of vhost_net with packed vrings tries to fill
the shadow vector before sending any actual change to the guest. While
this can be beneficial for throughput, it conflicts with bufferbloat
mitigation methods such as the Linux kernel NAPI, which stops transmitting
packets if too many bytes/buffers are pending in the driver.

It also introduces a performance issue when the frontend runs much faster
than the backend: the frontend may not be able to collect available
descriptors while the shadow update is deferred, which harms RFC2544
throughput.

The appropriate choice is to remove the deferred shadow update method.
Shadowed used descriptors are now flushed at the end of the dequeue
function.
Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Tested-by: Wang, Yinan

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 210415904..4a7531943 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -382,25 +382,6 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 	}
 }
 
-static __rte_always_inline void
-vhost_flush_dequeue_packed(struct virtio_net *dev,
-			   struct vhost_virtqueue *vq)
-{
-	int shadow_count;
-	if (!vq->shadow_used_idx)
-		return;
-
-	shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
-	if (shadow_count <= 0)
-		shadow_count += vq->size;
-
-	if ((uint32_t)shadow_count >= (vq->size - MAX_PKT_BURST)) {
-		do_data_copy_dequeue(vq);
-		vhost_flush_dequeue_shadow_packed(dev, vq);
-		vhost_vring_call_packed(dev, vq);
-	}
-}
-
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do {	\
 	if ((var) != (val))			\
@@ -2133,20 +2114,6 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
 	return pkt_idx;
 }
 
-static __rte_always_inline bool
-next_desc_is_avail(const struct vhost_virtqueue *vq)
-{
-	bool wrap_counter = vq->avail_wrap_counter;
-	uint16_t next_used_idx = vq->last_used_idx + 1;
-
-	if (next_used_idx >= vq->size) {
-		next_used_idx -= vq->size;
-		wrap_counter ^= 1;
-	}
-
-	return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
-}
-
 static __rte_noinline uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev,
 		     struct vhost_virtqueue *vq,
@@ -2163,7 +2130,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 		if (remained >= PACKED_BATCH_SIZE) {
 			if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
 							&pkts[pkt_idx])) {
-				vhost_flush_dequeue_packed(dev, vq);
 				pkt_idx += PACKED_BATCH_SIZE;
 				remained -= PACKED_BATCH_SIZE;
 				continue;
@@ -2173,7 +2139,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 		if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
 						&pkts[pkt_idx]))
 			break;
-		vhost_flush_dequeue_packed(dev, vq);
 		pkt_idx++;
 		remained--;
 
@@ -2182,15 +2147,8 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 
 	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
-		if (remained && !next_desc_is_avail(vq)) {
-			/*
-			 * The guest may be waiting to TX some buffers to
-			 * enqueue more to avoid bufferfloat, so we try to
-			 * reduce latency here.
-			 */
-			vhost_flush_dequeue_shadow_packed(dev, vq);
-			vhost_vring_call_packed(dev, vq);
-		}
+		vhost_flush_dequeue_shadow_packed(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return pkt_idx;
-- 
2.17.1
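
[Editor's note] For readers skimming the diff, below is a minimal, self-contained
C sketch of the behaviour the patch converges on. It is not DPDK code:
toy_vq, toy_dequeue_burst and toy_flush_shadow_and_notify are illustrative
stand-ins for the vhost virtqueue, virtio_dev_tx_packed() and the
do_data_copy_dequeue()/vhost_flush_dequeue_shadow_packed()/
vhost_vring_call_packed() sequence in the last hunk. The point it
illustrates: the dequeue loop only records used descriptors in the shadow
ring, and a single flush plus guest notification happens at the end of the
dequeue call instead of opportunistically inside the loop.

/*
 * Toy model (not DPDK code) of "flush shadowed used descriptors at the
 * end of dequeue". All types and helper names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_vq {
	uint16_t shadow_used_idx;	/* number of shadowed used descriptors */
};

static void
toy_flush_shadow_and_notify(struct toy_vq *vq)
{
	/*
	 * Stand-in for do_data_copy_dequeue() +
	 * vhost_flush_dequeue_shadow_packed() + vhost_vring_call_packed():
	 * publish all pending used entries at once, then kick the guest.
	 */
	printf("flush %u shadowed used descriptors, notify guest\n",
	       (unsigned int)vq->shadow_used_idx);
	vq->shadow_used_idx = 0;
}

static uint16_t
toy_dequeue_burst(struct toy_vq *vq, uint16_t count)
{
	uint16_t pkt_idx;

	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
		/*
		 * Dequeue one packet and record its descriptor in the
		 * shadow ring; no intermediate flush here any more.
		 */
		vq->shadow_used_idx++;
	}

	/* Single flush + guest notification at the end of the dequeue call. */
	if (vq->shadow_used_idx)
		toy_flush_shadow_and_notify(vq);

	return pkt_idx;
}

int
main(void)
{
	struct toy_vq vq = { .shadow_used_idx = 0 };

	toy_dequeue_burst(&vq, 32);
	return 0;
}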