DPDK patches and discussions
* [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures
@ 2020-04-28  9:52 Sivaprasad Tummala
  2020-04-29  8:43 ` Maxime Coquelin
  2020-05-04 17:11 ` [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure Sivaprasad Tummala
  0 siblings, 2 replies; 12+ messages in thread
From: Sivaprasad Tummala @ 2020-04-28  9:52 UTC (permalink / raw)
  To: Maxime Coquelin, Zhihong Wang, Xiaolong Ye; +Cc: dev

vhost buffer allocation is successful for packets that fit
into a linear buffer. If it fails, vhost library is expected
to drop the current buffer descriptor and skip to the next.

The patch fixes the error scenario by skipping to next descriptor.
Note: Drop counters are not currently supported.

Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
---
 lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 37c47c7dc..b0d3a85c2 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1688,6 +1688,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1751,8 +1752,19 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets with
+			 * linear buffer flag set. Drop this packet and
+			 * proceed with the next available descriptor to
+			 * avoid HOL blocking
+			 */
+			VHOST_LOG_DATA(WARNING,
+				"Failed to allocate memory for mbuf. Packet dropped!\n");
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
@@ -1796,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int
-- 
2.17.1
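
For context, the function touched above sits behind the public dequeue API
(virtio_dev_tx_split() is reached through rte_vhost_dequeue_burst()), so from
an application's point of view the fix only changes how short a burst can be.
A minimal, illustrative consumer loop (burst size and processing are
placeholders, not part of the patch) might look like:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

#define MAX_PKT_BURST 32

/* Illustrative only. With the fix, an mbuf allocation failure inside the
 * library drops the offending packet and recycles its descriptors, so the
 * call returns a shorter burst instead of leaving the ring head-of-line
 * blocked. Dropped packets are not counted anywhere. */
static void
drain_vhost_queue(int vid, uint16_t queue_id, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[MAX_PKT_BURST];
	uint16_t nb_rx, i;

	nb_rx = rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool,
					pkts, MAX_PKT_BURST);

	for (i = 0; i < nb_rx; i++) {
		/* forward or otherwise process pkts[i] here ... */
		rte_pktmbuf_free(pkts[i]);
	}
}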


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures
  2020-04-28  9:52 [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures Sivaprasad Tummala
@ 2020-04-29  8:43 ` Maxime Coquelin
  2020-04-29 17:35   ` Flavio Leitner
  2020-05-04 17:11 ` [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure Sivaprasad Tummala
  1 sibling, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2020-04-29  8:43 UTC (permalink / raw)
  To: Sivaprasad Tummala, Zhihong Wang, Xiaolong Ye; +Cc: dev, Flavio Leitner

Hi Sivaprasad,

On 4/28/20 11:52 AM, Sivaprasad Tummala wrote:
> vhost buffer allocation is successful for packets that fit
> into a linear buffer. If it fails, vhost library is expected
> to drop the current buffer descriptor and skip to the next.
> 
> The patch fixes the error scenario by skipping to next descriptor.
> Note: Drop counters are not currently supported.

Fixes tag is missing here, and stable@dpdk.org should be cc'ed if
necessary.

> Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 37c47c7dc..b0d3a85c2 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1688,6 +1688,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  {

You only fix split ring path, but not packed ring.

>  	uint16_t i;
>  	uint16_t free_entries;
> +	uint16_t dropped = 0;
>  
>  	if (unlikely(dev->dequeue_zero_copy)) {
>  		struct zcopy_mbuf *zmbuf, *next;
> @@ -1751,8 +1752,19 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			update_shadow_used_ring_split(vq, head_idx, 0);
>  
>  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> -		if (unlikely(pkts[i] == NULL))
> +		if (unlikely(pkts[i] == NULL)) {
> +			/*
> +			 * mbuf allocation fails for jumbo packets with
> +			 * linear buffer flag set. Drop this packet and
> +			 * proceed with the next available descriptor to
> +			 * avoid HOL blocking
> +			 */
> +			VHOST_LOG_DATA(WARNING,
> +				"Failed to allocate memory for mbuf. Packet dropped!\n");

I think we need better logging, otherwise it is going to flood the log
file quite rapidly if the issue happens. Either some rate-limited logging
or warn-once would be better.

The warning message could also be improved, because when using linear
buffers, one would expect that the size of the mbufs could handle a
jumbo frame. So it should differentiate the two cases: an exhausted mbuf
pool versus a packet too large to fit in a linear mbuf.
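
For illustration only, the two cases could be told apart at the allocation
site using the pool's configured data room size; the helper name below is
hypothetical and nothing here is part of the patch:

/*
 * Hypothetical helper, sketched only to show the distinction; buf_len is
 * the descriptor chain length already computed by the dequeue path.
 */
static inline void
vhost_log_alloc_failure(struct rte_mempool *mbuf_pool, uint32_t buf_len)
{
	if (rte_pktmbuf_data_room_size(mbuf_pool) <
			buf_len + RTE_PKTMBUF_HEADROOM)
		VHOST_LOG_DATA(WARNING,
			"packet of %u bytes does not fit in a linear mbuf\n",
			buf_len);
	else
		VHOST_LOG_DATA(WARNING,
			"mbuf pool %s is exhausted\n",
			mbuf_pool->name);
}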

> +			dropped += 1;
> +			i++;
>  			break;
> +		}
>  
>  		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
>  				mbuf_pool);
> @@ -1796,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		}
>  	}
>  
> -	return i;
> +	return (i - dropped);
>  }
>  
>  static __rte_always_inline int
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures
  2020-04-29  8:43 ` Maxime Coquelin
@ 2020-04-29 17:35   ` Flavio Leitner
  2020-04-30  7:13     ` Tummala, Sivaprasad
  0 siblings, 1 reply; 12+ messages in thread
From: Flavio Leitner @ 2020-04-29 17:35 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: Sivaprasad Tummala, Zhihong Wang, Xiaolong Ye, dev

On Wed, Apr 29, 2020 at 10:43:01AM +0200, Maxime Coquelin wrote:
> Hi Sivaprasad,
> 
> On 4/28/20 11:52 AM, Sivaprasad Tummala wrote:
> > vhost buffer allocation is successful for packets that fit
> > into a linear buffer. If it fails, vhost library is expected
> > to drop the current buffer descriptor and skip to the next.
> > 
> > The patch fixes the error scenario by skipping to next descriptor.
> > Note: Drop counters are not currently supported.

In that case shouldn't we continue to process the ring?

Also, don't we have the same issue with copy_desc_to_mbuf() 
and get_zmbuf()?

fbl

> Fixes tag is missing here, and stable@dpdk.org should be cc'ed if
> necessary.
> 
> > Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> > ---
> >  lib/librte_vhost/virtio_net.c | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 37c47c7dc..b0d3a85c2 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -1688,6 +1688,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  {
> 
> You only fix split ring path, but not packed ring.
> 
> >  	uint16_t i;
> >  	uint16_t free_entries;
> > +	uint16_t dropped = 0;
> >  
> >  	if (unlikely(dev->dequeue_zero_copy)) {
> >  		struct zcopy_mbuf *zmbuf, *next;
> > @@ -1751,8 +1752,19 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  			update_shadow_used_ring_split(vq, head_idx, 0);
> >  
> >  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> > -		if (unlikely(pkts[i] == NULL))
> > +		if (unlikely(pkts[i] == NULL)) {
> > +			/*
> > +			 * mbuf allocation fails for jumbo packets with
> > +			 * linear buffer flag set. Drop this packet and
> > +			 * proceed with the next available descriptor to
> > +			 * avoid HOL blocking
> > +			 */
> > +			VHOST_LOG_DATA(WARNING,
> > +				"Failed to allocate memory for mbuf. Packet dropped!\n");
> 
> I think we need better logging, otherwise it is going to flood the log
> file quite rapidly if the issue happens. Either some rate-limited logging
> or warn-once would be better.
> 
> The warning message could also be improved, because when using linear
> buffers, one would expect that the size of the mbufs could handle a
> jumbo frame. So it should differentiate the two cases: an exhausted mbuf
> pool versus a packet too large to fit in a linear mbuf.
> 
> > +			dropped += 1;
> > +			i++;
> >  			break;
> > +		}
> >  
> >  		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
> >  				mbuf_pool);
> > @@ -1796,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >  		}
> >  	}
> >  
> > -	return i;
> > +	return (i - dropped);
> >  }
> >  
> >  static __rte_always_inline int
> > 
> 

-- 
fbl


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures
  2020-04-29 17:35   ` Flavio Leitner
@ 2020-04-30  7:13     ` Tummala, Sivaprasad
  0 siblings, 0 replies; 12+ messages in thread
From: Tummala, Sivaprasad @ 2020-04-30  7:13 UTC (permalink / raw)
  To: Flavio Leitner, Maxime Coquelin; +Cc: Wang, Zhihong, Ye, Xiaolong, dev

Hi Flavio,

Thanks for your comments.

snipped

> > The patch fixes the error scenario by skipping to next descriptor.
> > Note: Drop counters are not currently supported.
>
> In that case shouldn't we continue to process the ring?

Yes, we are updating the loop index and following the required clean-up.

> Also, don't we have the same issue with copy_desc_to_mbuf()

Thank you. Will update in the V2 patch.

> and get_zmbuf()?

This patch is not targeted for zero-copy cases.

> fbl

snipped

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
  2020-04-28  9:52 [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures Sivaprasad Tummala
  2020-04-29  8:43 ` Maxime Coquelin
@ 2020-05-04 17:11 ` Sivaprasad Tummala
  2020-05-04 19:32   ` Flavio Leitner
  2020-05-08 11:17   ` [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures Sivaprasad Tummala
  1 sibling, 2 replies; 12+ messages in thread
From: Sivaprasad Tummala @ 2020-05-04 17:11 UTC (permalink / raw)
  To: Maxime Coquelin, Zhihong Wang, Xiaolong Ye; +Cc: dev, stable, fbl

vhost buffer allocation is successful for packets that fit
into a linear buffer. If it fails, vhost library is expected
to drop the current packet and skip to the next.

The patch fixes the error scenario by skipping to next packet.
Note: Drop counters are not currently supported.

Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
Cc: stable@dpdk.org
Cc: fbl@sysclose.org

---
v2:
 * fixed review comments - Maxime Coquelin
 * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
 * fixed mbuf copy errors - Flavio Leitner

Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
---
 lib/librte_vhost/virtio_net.c | 50 ++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1fc30c681..764c514fd 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1674,6 +1674,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1737,13 +1738,31 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets when external
+			 * buffer allocation is not allowed and linear buffer
+			 * is required. Drop this packet.
+			 */
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+			VHOST_LOG_DATA(ERR,
+				"Failed to allocate memory for mbuf. Packet dropped!\n");
+#endif
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+			VHOST_LOG_DATA(ERR,
+				"Failed to copy desc to mbuf. Packet dropped!\n");
+#endif
+			dropped += 1;
+			i++;
 			break;
 		}
 
@@ -1753,6 +1772,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			zmbuf = get_zmbuf(vq);
 			if (!zmbuf) {
 				rte_pktmbuf_free(pkts[i]);
+				dropped += 1;
+				i++;
 				break;
 			}
 			zmbuf->mbuf = pkts[i];
@@ -1782,7 +1803,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int
@@ -1946,21 +1967,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
 			    struct rte_mbuf **pkts)
 {
 
-	uint16_t buf_id, desc_count;
+	uint16_t buf_id, desc_count = 0;
+	int ret;
 
-	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
-					&desc_count))
-		return -1;
+	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
+					&desc_count);
 
-	if (virtio_net_is_inorder(dev))
-		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
-							   desc_count);
-	else
-		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+	if (likely(desc_count > 0)) {
+		if (virtio_net_is_inorder(dev))
+			vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
+								   desc_count);
+		else
+			vhost_shadow_dequeue_single_packed(vq, buf_id,
+					desc_count);
 
-	vq_inc_last_avail_packed(vq, desc_count);
+		vq_inc_last_avail_packed(vq, desc_count);
+	}
 
-	return 0;
+	return ret;
 }
 
 static __rte_always_inline int
-- 
2.17.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
  2020-05-04 17:11 ` [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure Sivaprasad Tummala
@ 2020-05-04 19:32   ` Flavio Leitner
  2020-05-05  5:48     ` Tummala, Sivaprasad
  2020-05-08 11:17   ` [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures Sivaprasad Tummala
  1 sibling, 1 reply; 12+ messages in thread
From: Flavio Leitner @ 2020-05-04 19:32 UTC (permalink / raw)
  To: Sivaprasad Tummala
  Cc: Maxime Coquelin, Zhihong Wang, Xiaolong Ye, dev, stable

On Mon, May 04, 2020 at 10:41:17PM +0530, Sivaprasad Tummala wrote:
> vhost buffer allocation is successful for packets that fit
> into a linear buffer. If it fails, vhost library is expected
> to drop the current packet and skip to the next.
> 
> The patch fixes the error scenario by skipping to next packet.
> Note: Drop counters are not currently supported.
> 
> Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
> Cc: stable@dpdk.org
> Cc: fbl@sysclose.org
> 
> ---
> v2:
>  * fixed review comments - Maxime Coquelin
>  * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
>  * fixed mbuf copy errors - Flavio Leitner
> 
> Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 50 ++++++++++++++++++++++++++---------
>  1 file changed, 37 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 1fc30c681..764c514fd 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1674,6 +1674,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  {
>  	uint16_t i;
>  	uint16_t free_entries;
> +	uint16_t dropped = 0;
>  
>  	if (unlikely(dev->dequeue_zero_copy)) {
>  		struct zcopy_mbuf *zmbuf, *next;
> @@ -1737,13 +1738,31 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			update_shadow_used_ring_split(vq, head_idx, 0);
>  
>  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> -		if (unlikely(pkts[i] == NULL))
> +		if (unlikely(pkts[i] == NULL)) {
> +			/*
> +			 * mbuf allocation fails for jumbo packets when external
> +			 * buffer allocation is not allowed and linear buffer
> +			 * is required. Drop this packet.
> +			 */
> +#ifdef RTE_LIBRTE_VHOST_DEBUG
> +			VHOST_LOG_DATA(ERR,
> +				"Failed to allocate memory for mbuf. Packet dropped!\n");
> +#endif

That message is useful to spot those missing packets that happen
once in a while, so we should be able to see it even in production
without debug enabled. However, we can't let it flood the log.

I am not sure if librte eal has this functionality, but if not you
could limit by using a static bool:

static bool allocerr_warned = false;

if (!allocerr_warned) {
    VHOST_LOG_DATA(ERR,
    "Failed to allocate memory for mbuf. Packet dropped!\n");
    allocerr_warned = true;
}



> +			dropped += 1;
> +			i++;
>  			break;
> +		}
>  
>  		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
>  				mbuf_pool);
>  		if (unlikely(err)) {
>  			rte_pktmbuf_free(pkts[i]);
> +#ifdef RTE_LIBRTE_VHOST_DEBUG
> +			VHOST_LOG_DATA(ERR,
> +				"Failed to copy desc to mbuf. Packet dropped!\n");
> +#endif

Same here.


> +			dropped += 1;
> +			i++;
>  			break;
>  		}
>  
> @@ -1753,6 +1772,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  			zmbuf = get_zmbuf(vq);
>  			if (!zmbuf) {
>  				rte_pktmbuf_free(pkts[i]);
> +				dropped += 1;
> +				i++;
>  				break;
>  			}
>  			zmbuf->mbuf = pkts[i];
> @@ -1782,7 +1803,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		}
>  	}
>  
> -	return i;
> +	return (i - dropped);
>  }
>  
>  static __rte_always_inline int
> @@ -1946,21 +1967,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
>  			    struct rte_mbuf **pkts)
>  {
>  
> -	uint16_t buf_id, desc_count;
> +	uint16_t buf_id, desc_count = 0;
> +	int ret;
>  
> -	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
> -					&desc_count))
> -		return -1;
> +	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
> +					&desc_count);
>  
> -	if (virtio_net_is_inorder(dev))
> -		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
> -							   desc_count);
> -	else
> -		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
> +	if (likely(desc_count > 0)) {


The vhost_dequeue_single_packed() could return -1 with desc_count > 0
and this change doesn't handle that.

Thanks,
fbl


> +		if (virtio_net_is_inorder(dev))
> +			vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
> +								   desc_count);
> +		else
> +			vhost_shadow_dequeue_single_packed(vq, buf_id,
> +					desc_count);
>  
> -	vq_inc_last_avail_packed(vq, desc_count);
> +		vq_inc_last_avail_packed(vq, desc_count);
> +	}
>  
> -	return 0;
> +	return ret;
>  }
>  
>  static __rte_always_inline int
> -- 
> 2.17.1
> 

-- 
fbl

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
  2020-05-04 19:32   ` Flavio Leitner
@ 2020-05-05  5:48     ` Tummala, Sivaprasad
  2020-05-05  8:20       ` Maxime Coquelin
  0 siblings, 1 reply; 12+ messages in thread
From: Tummala, Sivaprasad @ 2020-05-05  5:48 UTC (permalink / raw)
  To: Flavio Leitner; +Cc: Maxime Coquelin, Wang, Zhihong, Ye, Xiaolong, dev, stable

Hi Flavio,

Thanks for your comments.

SNIPPED

> >  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> > -		if (unlikely(pkts[i] == NULL))
> > +		if (unlikely(pkts[i] == NULL)) {
> > +			/*
> > +			 * mbuf allocation fails for jumbo packets when external
> > +			 * buffer allocation is not allowed and linear buffer
> > +			 * is required. Drop this packet.
> > +			 */
> > +#ifdef RTE_LIBRTE_VHOST_DEBUG
> > +			VHOST_LOG_DATA(ERR,
> > +				"Failed to allocate memory for mbuf. Packet dropped!\n");
> > +#endif
> 
> That message is useful to spot those missing packets that happen once
> in a while, so we should be able to see it even in production without
> debug enabled. However, we can't let it flood the log.

Agreed, but VHOST_LOG wrapper does not have rate limit functionality.

> I am not sure if librte eal has this functionality, but if not you
> could limit by using a static bool:
> 
> static bool allocerr_warned = false;
> 
> if (!allocerr_warned) {
>     VHOST_LOG_DATA(ERR,
>     "Failed to allocate memory for mbuf. Packet dropped!\n");
>     allocerr_warned = true;
> }

This is a good idea, but having a static variable gives it file scope,
applying it to all vhost devices. Hence, if the intention is to
implement a device-specific log rate limit, should we not resort to a
`dev->allocerr_warn` counter mechanism, which resets after n failures
(`#define LOG_ALLOCFAIL 32`)?
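
For illustration, such a per-device, counter-based rate limit might look
roughly like the sketch below; the allocerr_warn field and the LOG_ALLOCFAIL
macro are hypothetical and do not exist in the vhost library (Maxime argues
against growing struct virtio_net in the next message):

#define LOG_ALLOCFAIL 32

/* assumes a new uint16_t allocerr_warn field were added to struct virtio_net */
static inline void
vhost_log_alloc_failure_ratelimited(struct virtio_net *dev)
{
	/* log the first failure, then stay quiet for LOG_ALLOCFAIL drops */
	if (dev->allocerr_warn == 0)
		VHOST_LOG_DATA(ERR,
			"Failed to allocate memory for mbuf. Packet dropped!\n");

	if (++dev->allocerr_warn >= LOG_ALLOCFAIL)
		dev->allocerr_warn = 0;	/* re-arm the warning */
}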



SNIPPED



>  static __rte_always_inline int
> @@ -1946,21 +1967,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
>  			    struct rte_mbuf **pkts)
>  {
> 
> -	uint16_t buf_id, desc_count;
> +	uint16_t buf_id, desc_count = 0;
> +	int ret;
> 
> -	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
> -					&desc_count))
> -		return -1;
> +	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
> +					&desc_count);
> 
> -	if (virtio_net_is_inorder(dev))
> -		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
> -							   desc_count);
> -	else
> -		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
> +	if (likely(desc_count > 0)) {
> 
> The vhost_dequeue_single_packed() could return -1 with desc_count > 0
> and this change doesn't handle that.

Yes, as per my current understanding, in either success or failure we
need to flush the `desc_count` descriptors to handle this issue.
Is there an expectation, for a partial or incomplete packet where
`num_desc` is greater than 0, that we need to preserve it?


SNIPPED



Thanks & Regards,

Sivaprasad

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
  2020-05-05  5:48     ` Tummala, Sivaprasad
@ 2020-05-05  8:20       ` Maxime Coquelin
  2020-05-05 11:56         ` Tummala, Sivaprasad
  0 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2020-05-05  8:20 UTC (permalink / raw)
  To: Tummala, Sivaprasad, Flavio Leitner
  Cc: Wang, Zhihong, Ye, Xiaolong, dev, stable

(Please try to avoid HTML for the replies, it makes it hard to follow)

See my replies below:

On 5/5/20 7:48 AM, Tummala, Sivaprasad wrote:
> Hi Flavio,
> 
> Thanks for your comments.
> 
> SNIPPED
> 
> > >  		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
> > > -		if (unlikely(pkts[i] == NULL))
> > > +		if (unlikely(pkts[i] == NULL)) {
> > > +			/*
> > > +			 * mbuf allocation fails for jumbo packets when external
> > > +			 * buffer allocation is not allowed and linear buffer
> > > +			 * is required. Drop this packet.
> > > +			 */
> > > +#ifdef RTE_LIBRTE_VHOST_DEBUG
> > > +			VHOST_LOG_DATA(ERR,
> > > +				"Failed to allocate memory for mbuf. Packet dropped!\n");
> > > +#endif
> > 
> > That message is useful to spot those missing packets that happen once
> > in a while, so we should be able to see it even in production without
> > debug enabled. However, we can't let it flood the log.
> 
> Agreed, but VHOST_LOG wrapper does not have rate limit functionality.
> 
> > I am not sure if librte eal has this functionality, but if not you
> > could limit by using a static bool:
> > 
> > static bool allocerr_warned = false;
> > 
> > if (!allocerr_warned) {
> >     VHOST_LOG_DATA(ERR,
> >     "Failed to allocate memory for mbuf. Packet dropped!\n");
> >     allocerr_warned = true;
> > }
> 
> This is a good idea, but having a static variable gives it file scope,
> applying it to all vhost devices. Hence, if the intention is to
> implement a device-specific log rate limit, should we not resort to a
> `dev->allocerr_warn` counter mechanism, which resets after n failures
> (`#define LOG_ALLOCFAIL 32`)?

I prefer Flavio's proposal, it would have less performance impact than
increasing struct virtio_net size. As soon as we can see the error
popping once in the log message, it gives some clues on what to
investigate. Maybe providing more details on the failure could help,
like printing the pool name and the requested length.

Maxime


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
  2020-05-05  8:20       ` Maxime Coquelin
@ 2020-05-05 11:56         ` Tummala, Sivaprasad
  0 siblings, 0 replies; 12+ messages in thread
From: Tummala, Sivaprasad @ 2020-05-05 11:56 UTC (permalink / raw)
  To: Maxime Coquelin, Flavio Leitner; +Cc: Wang, Zhihong, Ye, Xiaolong, dev, stable

Hi Maxime, 

Thanks for your comments. 

SNIPPED

> > if (!allocerr_warned) {
> >     VHOST_LOG_DATA(ERR,
> >     "Failed to allocate memory for mbuf. Packet dropped!\n");
> >     allocerr_warned = true;
> > }
> >
> > This is a good idea, but having a static variable gives it file scope,
> > applying it to all vhost devices. Hence, if the intention is to
> > implement a device-specific log rate limit, should we not resort to a
> > `dev->allocerr_warn` counter mechanism, which resets after n failures
> > (`#define LOG_ALLOCFAIL 32`)?
>
> I prefer Flavio's proposal, it would have less performance impact than
> increasing struct virtio_net size. As soon as we can see the error
> popping once in the log message, it gives some clues on what to
> investigate. Maybe providing more details on the failure could help,
> like printing the pool name and the requested length.

Agreed. Will change in the next patch; sample format:
`VHOST_DATA : Failed mbuf alloc of size 2054 from mbuf_pool_socket_0 on /tmp/vhost1.`

SNIPPED

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures
  2020-05-04 17:11 ` [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure Sivaprasad Tummala
  2020-05-04 19:32   ` Flavio Leitner
@ 2020-05-08 11:17   ` Sivaprasad Tummala
  2020-05-15  7:29     ` Maxime Coquelin
  2020-05-15  8:36     ` Maxime Coquelin
  1 sibling, 2 replies; 12+ messages in thread
From: Sivaprasad Tummala @ 2020-05-08 11:17 UTC (permalink / raw)
  To: Maxime Coquelin, Zhihong Wang, Xiaolong Ye; +Cc: dev, fbl, stable

vhost buffer allocation is successful for packets that fit
into a linear buffer. If it fails, vhost library is expected
to drop the current packet and skip to the next.

The patch fixes the error scenario by skipping to next packet.
Note: Drop counters are not currently supported.

Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
Cc: fbl@sysclose.org
Cc: stable@dpdk.org

v3:
 * fixed review comments - Flavio Leitner

v2:
 * fixed review comments - Maxime Coquelin
 * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
 * fixed mbuf copy errors - Flavio Leitner

Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
---
 lib/librte_vhost/virtio_net.c | 70 +++++++++++++++++++++++++++--------
 1 file changed, 55 insertions(+), 15 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1fc30c681..a85d77897 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1674,6 +1674,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
+	static bool allocerr_warned;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1737,13 +1739,35 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets when external
+			 * buffer allocation is not allowed and linear buffer
+			 * is required. Drop this packet.
+			 */
+			if (!allocerr_warned) {
+				VHOST_LOG_DATA(ERR,
+					"Failed mbuf alloc of size %d from %s on %s.\n",
+					buf_len, mbuf_pool->name, dev->ifname);
+				allocerr_warned = true;
+			}
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
+			if (!allocerr_warned) {
+				VHOST_LOG_DATA(ERR,
+					"Failed to copy desc to mbuf on %s.\n",
+					dev->ifname);
+				allocerr_warned = true;
+			}
+			dropped += 1;
+			i++;
 			break;
 		}
 
@@ -1753,6 +1777,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			zmbuf = get_zmbuf(vq);
 			if (!zmbuf) {
 				rte_pktmbuf_free(pkts[i]);
+				dropped += 1;
+				i++;
 				break;
 			}
 			zmbuf->mbuf = pkts[i];
@@ -1782,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int
@@ -1914,6 +1940,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 	uint32_t buf_len;
 	uint16_t nr_vec = 0;
 	int err;
+	static bool allocerr_warned;
 
 	if (unlikely(fill_vec_buf_packed(dev, vq,
 					 vq->last_avail_idx, desc_count,
@@ -1924,14 +1951,24 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 
 	*pkts = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
 	if (unlikely(*pkts == NULL)) {
-		VHOST_LOG_DATA(ERR,
-			"Failed to allocate memory for mbuf.\n");
+		if (!allocerr_warned) {
+			VHOST_LOG_DATA(ERR,
+				"Failed mbuf alloc of size %d from %s on %s.\n",
+				buf_len, mbuf_pool->name, dev->ifname);
+			allocerr_warned = true;
+		}
 		return -1;
 	}
 
 	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, *pkts,
 				mbuf_pool);
 	if (unlikely(err)) {
+		if (!allocerr_warned) {
+			VHOST_LOG_DATA(ERR,
+				"Failed to copy desc to mbuf on %s.\n",
+				dev->ifname);
+			allocerr_warned = true;
+		}
 		rte_pktmbuf_free(*pkts);
 		return -1;
 	}
@@ -1946,21 +1983,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
 			    struct rte_mbuf **pkts)
 {
 
-	uint16_t buf_id, desc_count;
+	uint16_t buf_id, desc_count = 0;
+	int ret;
 
-	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
-					&desc_count))
-		return -1;
+	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
+					&desc_count);
 
-	if (virtio_net_is_inorder(dev))
-		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
-							   desc_count);
-	else
-		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+	if (likely(desc_count > 0)) {
+		if (virtio_net_is_inorder(dev))
+			vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
+								   desc_count);
+		else
+			vhost_shadow_dequeue_single_packed(vq, buf_id,
+					desc_count);
 
-	vq_inc_last_avail_packed(vq, desc_count);
+		vq_inc_last_avail_packed(vq, desc_count);
+	}
 
-	return 0;
+	return ret;
 }
 
 static __rte_always_inline int
-- 
2.17.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures
  2020-05-08 11:17   ` [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures Sivaprasad Tummala
@ 2020-05-15  7:29     ` Maxime Coquelin
  2020-05-15  8:36     ` Maxime Coquelin
  1 sibling, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2020-05-15  7:29 UTC (permalink / raw)
  To: Sivaprasad Tummala, Zhihong Wang, Xiaolong Ye; +Cc: dev, fbl, stable



On 5/8/20 1:17 PM, Sivaprasad Tummala wrote:
> vhost buffer allocation is successful for packets that fit
> into a linear buffer. If it fails, vhost library is expected
> to drop the current packet and skip to the next.
> 
> The patch fixes the error scenario by skipping to next packet.
> Note: Drop counters are not currently supported.
> 
> Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
> Cc: fbl@sysclose.org
> Cc: stable@dpdk.org
> 
> v3:
>  * fixed review comments - Flavio Leitner
> 
> v2:
>  * fixed review comments - Maxime Coquelin
>  * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
>  * fixed mbuf copy errors - Flavio Leitner
> 
> Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 70 +++++++++++++++++++++++++++--------
>  1 file changed, 55 insertions(+), 15 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures
  2020-05-08 11:17   ` [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures Sivaprasad Tummala
  2020-05-15  7:29     ` Maxime Coquelin
@ 2020-05-15  8:36     ` Maxime Coquelin
  1 sibling, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2020-05-15  8:36 UTC (permalink / raw)
  To: Sivaprasad Tummala, Zhihong Wang, Xiaolong Ye; +Cc: dev, fbl, stable



On 5/8/20 1:17 PM, Sivaprasad Tummala wrote:
> vhost buffer allocation is successful for packets that fit
> into a linear buffer. If it fails, vhost library is expected
> to drop the current packet and skip to the next.
> 
> The patch fixes the error scenario by skipping to next packet.
> Note: Drop counters are not currently supported.
> 
> Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
> Cc: fbl@sysclose.org
> Cc: stable@dpdk.org
> 
> v3:
>  * fixed review comments - Flavio Leitner
> 
> v2:
>  * fixed review comments - Maxime Coquelin
>  * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
>  * fixed mbuf copy errors - Flavio Leitner
> 
> Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 70 +++++++++++++++++++++++++++--------
>  1 file changed, 55 insertions(+), 15 deletions(-)

Applied to dpdk-next-virtio/master.

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-05-15  8:36 UTC | newest]

Thread overview: 12+ messages
2020-04-28  9:52 [dpdk-dev] [PATCH v1] vhost: fix mbuf allocation failures Sivaprasad Tummala
2020-04-29  8:43 ` Maxime Coquelin
2020-04-29 17:35   ` Flavio Leitner
2020-04-30  7:13     ` Tummala, Sivaprasad
2020-05-04 17:11 ` [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure Sivaprasad Tummala
2020-05-04 19:32   ` Flavio Leitner
2020-05-05  5:48     ` Tummala, Sivaprasad
2020-05-05  8:20       ` Maxime Coquelin
2020-05-05 11:56         ` Tummala, Sivaprasad
2020-05-08 11:17   ` [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures Sivaprasad Tummala
2020-05-15  7:29     ` Maxime Coquelin
2020-05-15  8:36     ` Maxime Coquelin
