patches for DPDK stable branches
* [dpdk-stable] [PATCH v2] vhost: fix shadow update
       [not found] <20200401212926.74989-1-yong.liu@intel.com>
@ 2020-04-17  2:39 ` Marvin Liu
  2020-04-17 13:29   ` Maxime Coquelin
  2020-04-17 17:08   ` Maxime Coquelin
  0 siblings, 2 replies; 3+ messages in thread
From: Marvin Liu @ 2020-04-17  2:39 UTC (permalink / raw)
  To: maxime.coquelin, xiaolong.ye, zhihong.wang, eperezma
  Cc: dev, Marvin Liu, stable

Deferring the shadow ring update introduces a functional issue which
has been described in Eugenio's fix patch.

The current implementation of vhost_net with a packed vring tries to
fill the shadow vector before sending any actual changes to the
guest. While this can be beneficial for throughput, it conflicts with
bufferbloat mitigations such as the Linux kernel's NAPI, which stops
transmitting packets if too many bytes/buffers are pending in the
driver.

It also introduces a performance issue when the frontend runs much
faster than the backend: the frontend may not be able to collect
available descriptors while the shadow update is deferred, which
harms RFC2544 throughput.

The appropriate choice is to remove the deferred shadow update
method. Shadowed used descriptors are now flushed at the end of the
dequeue function.

Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Tested-by: Wang, Yinan <yinan.wang@intel.com>
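
For illustration only (not part of the submitted patch): a minimal,
self-contained sketch contrasting the removed near-full flush
heuristic with the flush-at-end behaviour this patch introduces. The
toy_vq struct, its field values, VQ_SIZE and the main() driver are
simplified stand-ins, not the real struct vhost_virtqueue; only the
ring arithmetic mirrors the code removed below.

#include <stdint.h>
#include <stdio.h>

#define VQ_SIZE       256  /* hypothetical ring size for the sketch */
#define MAX_PKT_BURST  32

struct toy_vq {
	uint16_t size;
	uint16_t last_used_idx;        /* descs consumed by the backend */
	uint16_t shadow_last_used_idx; /* descs already made visible    */
	uint16_t shadow_used_idx;      /* pending shadow entries        */
};

/* Removed heuristic: flush mid-dequeue only once the shadow ring is
 * nearly full, i.e. (size - MAX_PKT_BURST) descriptors behind. Until
 * then the guest sees no used descriptors at all, which is what could
 * stall a NAPI-style frontend. */
static int old_should_flush(const struct toy_vq *vq)
{
	int shadow_count;

	if (!vq->shadow_used_idx)
		return 0;

	shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
	if (shadow_count <= 0)
		shadow_count += vq->size;

	return (uint32_t)shadow_count >= (uint32_t)(vq->size - MAX_PKT_BURST);
}

/* New behaviour: unconditionally flush whatever is shadowed when the
 * dequeue function returns. */
static int new_should_flush(const struct toy_vq *vq)
{
	return vq->shadow_used_idx != 0;
}

int main(void)
{
	/* 64 descriptors consumed, none made visible to the guest yet. */
	struct toy_vq vq = {
		.size = VQ_SIZE,
		.last_used_idx = 64,
		.shadow_last_used_idx = 0,
		.shadow_used_idx = 64,
	};

	printf("old heuristic flushes: %d\n", old_should_flush(&vq)); /* 0 */
	printf("new behaviour flushes: %d\n", new_should_flush(&vq)); /* 1 */
	return 0;
}

With 64 descriptors consumed but none returned, the old heuristic
stays silent (64 < 256 - 32 = 224), so a fast frontend cannot reclaim
them; the new behaviour flushes on every dequeue call.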

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 210415904..4a7531943 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -382,25 +382,6 @@ vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
 	}
 }
 
-static __rte_always_inline void
-vhost_flush_dequeue_packed(struct virtio_net *dev,
-			   struct vhost_virtqueue *vq)
-{
-	int shadow_count;
-	if (!vq->shadow_used_idx)
-		return;
-
-	shadow_count = vq->last_used_idx - vq->shadow_last_used_idx;
-	if (shadow_count <= 0)
-		shadow_count += vq->size;
-
-	if ((uint32_t)shadow_count >= (vq->size - MAX_PKT_BURST)) {
-		do_data_copy_dequeue(vq);
-		vhost_flush_dequeue_shadow_packed(dev, vq);
-		vhost_vring_call_packed(dev, vq);
-	}
-}
-
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do {	\
 	if ((var) != (val))			\
@@ -2133,20 +2114,6 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
 	return pkt_idx;
 }
 
-static __rte_always_inline bool
-next_desc_is_avail(const struct vhost_virtqueue *vq)
-{
-	bool wrap_counter = vq->avail_wrap_counter;
-	uint16_t next_used_idx = vq->last_used_idx + 1;
-
-	if (next_used_idx >= vq->size) {
-		next_used_idx -= vq->size;
-		wrap_counter ^= 1;
-	}
-
-	return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
-}
-
 static __rte_noinline uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev,
 		     struct vhost_virtqueue *vq,
@@ -2163,7 +2130,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 		if (remained >= PACKED_BATCH_SIZE) {
 			if (!virtio_dev_tx_batch_packed(dev, vq, mbuf_pool,
 							&pkts[pkt_idx])) {
-				vhost_flush_dequeue_packed(dev, vq);
 				pkt_idx += PACKED_BATCH_SIZE;
 				remained -= PACKED_BATCH_SIZE;
 				continue;
@@ -2173,7 +2139,6 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 		if (virtio_dev_tx_single_packed(dev, vq, mbuf_pool,
 						&pkts[pkt_idx]))
 			break;
-		vhost_flush_dequeue_packed(dev, vq);
 		pkt_idx++;
 		remained--;
 
@@ -2182,15 +2147,8 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
 
-		if (remained && !next_desc_is_avail(vq)) {
-			/*
-			 * The guest may be waiting to TX some buffers to
-			 * enqueue more to avoid bufferfloat, so we try to
-			 * reduce latency here.
-			 */
-			vhost_flush_dequeue_shadow_packed(dev, vq);
-			vhost_vring_call_packed(dev, vq);
-		}
+		vhost_flush_dequeue_shadow_packed(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return pkt_idx;
-- 
2.17.1


* Re: [dpdk-stable] [PATCH v2] vhost: fix shadow update
  2020-04-17  2:39 ` [dpdk-stable] [PATCH v2] vhost: fix shadow update Marvin Liu
@ 2020-04-17 13:29   ` Maxime Coquelin
  2020-04-17 17:08   ` Maxime Coquelin
  1 sibling, 0 replies; 3+ messages in thread
From: Maxime Coquelin @ 2020-04-17 13:29 UTC (permalink / raw)
  To: Marvin Liu, xiaolong.ye, zhihong.wang, eperezma; +Cc: dev, stable



On 4/17/20 4:39 AM, Marvin Liu wrote:
> Deferring the shadow ring update introduces a functional issue which
> has been described in Eugenio's fix patch.
> 
> The current implementation of vhost_net with a packed vring tries to
> fill the shadow vector before sending any actual changes to the
> guest. While this can be beneficial for throughput, it conflicts with
> bufferbloat mitigations such as the Linux kernel's NAPI, which stops
> transmitting packets if too many bytes/buffers are pending in the
> driver.
> 
> It also introduces a performance issue when the frontend runs much
> faster than the backend: the frontend may not be able to collect
> available descriptors while the shadow update is deferred, which
> harms RFC2544 throughput.
> 
> The appropriate choice is to remove the deferred shadow update
> method. Shadowed used descriptors are now flushed at the end of the
> dequeue function.
> 
> Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> Tested-by: Wang, Yinan <yinan.wang@intel.com>
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


* Re: [dpdk-stable] [PATCH v2] vhost: fix shadow update
  2020-04-17  2:39 ` [dpdk-stable] [PATCH v2] vhost: fix shadow update Marvin Liu
  2020-04-17 13:29   ` Maxime Coquelin
@ 2020-04-17 17:08   ` Maxime Coquelin
  1 sibling, 0 replies; 3+ messages in thread
From: Maxime Coquelin @ 2020-04-17 17:08 UTC (permalink / raw)
  To: Marvin Liu, xiaolong.ye, zhihong.wang, eperezma; +Cc: dev, stable



On 4/17/20 4:39 AM, Marvin Liu wrote:
> Deferring the shadow ring update introduces a functional issue which
> has been described in Eugenio's fix patch.
> 
> The current implementation of vhost_net with a packed vring tries to
> fill the shadow vector before sending any actual changes to the
> guest. While this can be beneficial for throughput, it conflicts with
> bufferbloat mitigations such as the Linux kernel's NAPI, which stops
> transmitting packets if too many bytes/buffers are pending in the
> driver.
> 
> It also introduces a performance issue when the frontend runs much
> faster than the backend: the frontend may not be able to collect
> available descriptors while the shadow update is deferred, which
> harms RFC2544 throughput.
> 
> The appropriate choice is to remove the deferred shadow update
> method. Shadowed used descriptors are now flushed at the end of the
> dequeue function.
> 
> Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Marvin Liu <yong.liu@intel.com>
> Tested-by: Wang, Yinan <yinan.wang@intel.com>

Applied to dpdk-next-virtio/master

Thanks,
Maxime


