DPDK patches and discussions
* [dpdk-dev] [PATCH] net/pcap: improve rxtx statistics
@ 2021-08-26  3:16 Qiming Chen
  2021-08-26  3:23 ` [dpdk-dev] [PATCH v2] " Qiming Chen
  0 siblings, 1 reply; 12+ messages in thread
From: Qiming Chen @ 2021-08-26  3:16 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, the
err_pkts counter is not incremented, which makes the problem hard to diagnose.
On the transmit side, if pcap_sendpacket() fails with errno set to EMSGSIZE,
the packet exceeds the provided buffer size, and the corresponding mbuf must
be freed; otherwise it leaks.
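A minimal plain-C sketch of the per-queue accounting this patch introduces; the struct and rx_one() below are illustrative stand-ins for the driver's queue_stat and Rx loop, not real DPDK code:

```c
#include <assert.h>

/* Illustrative mirror of the driver's per-queue Rx counters. */
struct rx_stat {
    unsigned long pkts;
    unsigned long err_pkts;
};

/*
 * Simulated receive step: alloc_ok stands in for rte_pktmbuf_alloc()
 * succeeding, copy_ok for the jumbo-frame copy path succeeding.
 */
static void rx_one(struct rx_stat *st, int alloc_ok, int copy_ok)
{
    if (!alloc_ok) {
        st->err_pkts++; /* counted instead of silently dropped */
        return;
    }
    if (!copy_ok) {
        st->err_pkts++; /* jumbo copy failed: also counted now */
        return;
    }
    st->pkts++;
}
```

With this, a burst in which one allocation and one jumbo copy fail leaves err_pkts at 2, so the drops are visible in the stats instead of silent.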

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
 drivers/net/pcap/pcap_ethdev.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..5b39dbfeb0 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -297,8 +297,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
-			break;
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.err_pkts++;
+			continue;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +313,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -490,8 +493,13 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		 */
 		ret = pcap_sendpacket(pcap,
 			rte_pktmbuf_read(mbuf, 0, len, temp_data), len);
-		if (unlikely(ret != 0))
+		if (unlikely(ret != 0)) {
+			if (errno == EMSGSIZE) {
+				rte_pktmbuf_free(mbuf);
+			}
 			break;
+		}
+
 		num_tx++;
 		tx_bytes += len;
 		rte_pktmbuf_free(mbuf);
@@ -784,6 +792,7 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [dpdk-dev] [PATCH v2] net/pcap: improve rxtx statistics
  2021-08-26  3:16 [dpdk-dev] [PATCH] net/pcap: improve rxtx statistics Qiming Chen
@ 2021-08-26  3:23 ` Qiming Chen
  2021-09-08 12:29   ` Ferruh Yigit
  2021-09-09  2:45   ` [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics Qiming Chen
  0 siblings, 2 replies; 12+ messages in thread
From: Qiming Chen @ 2021-08-26  3:23 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, the
err_pkts counter is not incremented, which makes the problem hard to diagnose.
On the transmit side, if pcap_sendpacket() fails with errno set to EMSGSIZE,
the packet exceeds the provided buffer size, and the corresponding mbuf must
be freed; otherwise it leaks.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
v2:
  Fix coding style issues.
---
 drivers/net/pcap/pcap_ethdev.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..4606f8ff60 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -297,8 +297,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
-			break;
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.err_pkts++;
+			continue;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +313,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -490,8 +493,12 @@ eth_pcap_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		 */
 		ret = pcap_sendpacket(pcap,
 			rte_pktmbuf_read(mbuf, 0, len, temp_data), len);
-		if (unlikely(ret != 0))
+		if (unlikely(ret != 0)) {
+			if (errno == EMSGSIZE)
+				rte_pktmbuf_free(mbuf);
 			break;
+		}
+
 		num_tx++;
 		tx_bytes += len;
 		rte_pktmbuf_free(mbuf);
@@ -784,6 +791,7 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1



* Re: [dpdk-dev] [PATCH v2] net/pcap: improve rxtx statistics
  2021-08-26  3:23 ` [dpdk-dev] [PATCH v2] " Qiming Chen
@ 2021-09-08 12:29   ` Ferruh Yigit
  2021-09-09  2:45   ` [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics Qiming Chen
  1 sibling, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-09-08 12:29 UTC (permalink / raw)
  To: Qiming Chen, dev

On 8/26/2021 4:23 AM, Qiming Chen wrote:
> On the receive side, if mbuf allocation or the jumbo-frame copy fails, the
> err_pkts counter is not incremented, which makes the problem hard to diagnose.

Ack, but the patch adds two places that update 'err_pkts'. One of them is where
mbuf allocation fails, and for that case there is a dedicated stats field,
'rx_nombuf'. Can you please update the patch to use this field?

> On the transmit side, if pcap_sendpacket() fails with errno set to EMSGSIZE,
> the packet exceeds the provided buffer size, and the corresponding mbuf must
> be freed; otherwise it leaks.
> 

A PMD's Tx path shouldn't free mbufs that were not sent; it is the application's
responsibility to free them (or resend them, or do whatever it wants).
Please drop this part of the patch.
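The ownership rule above can be sketched in plain C; tx_burst_stub() is a hypothetical stand-in for rte_eth_tx_burst(), which reports how many of the leading packets the PMD consumed:

```c
#include <assert.h>

/* Hypothetical stand-in for rte_eth_tx_burst(): the "PMD" consumes at
 * most cap of the nb_pkts packets and reports how many it took. */
static int tx_burst_stub(int nb_pkts, int cap)
{
    return nb_pkts < cap ? nb_pkts : cap;
}

/*
 * Application-side pattern: mbufs [0, nb_sent) now belong to the PMD;
 * everything from nb_sent onward is still owned by the application,
 * which must free (or retry) it. freed[i] = 1 marks an application
 * free, standing in for rte_pktmbuf_free(bufs[i]).
 */
static int tx_and_free_rest(int freed[], int nb_pkts, int cap)
{
    int nb_sent = tx_burst_stub(nb_pkts, cap);
    int i, nb_freed = 0;

    for (i = nb_sent; i < nb_pkts; i++) {
        freed[i] = 1;
        nb_freed++;
    }
    return nb_freed;
}
```

If the PMD freed unsent mbufs itself, this application-side loop would double-free them, which is why the Tx path must leave them alone.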

> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>

<...>


* [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics
  2021-08-26  3:23 ` [dpdk-dev] [PATCH v2] " Qiming Chen
  2021-09-08 12:29   ` Ferruh Yigit
@ 2021-09-09  2:45   ` Qiming Chen
  2021-09-09  3:29     ` Stephen Hemminger
  2021-09-09  4:16     ` [dpdk-dev] [PATCH v4] " Qiming Chen
  1 sibling, 2 replies; 12+ messages in thread
From: Qiming Chen @ 2021-09-09  2:45 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
error counter is incremented, which makes the problem hard to diagnose.
An mbuf allocation failure is counted in the rx_nombuf field.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
v2:
  Fix coding style issues.
v3:
  1) The Tx path no longer frees unsent mbufs.
  2) Count mbuf allocation failures in the rx_nombuf field.
---
 drivers/net/pcap/pcap_ethdev.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..64b0dbf0e4 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -297,8 +297,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
-			break;
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.err_pkts++;
+			continue;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +313,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -742,7 +745,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	unsigned int i;
 	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
-	unsigned long rx_missed_total = 0;
+	unsigned long rx_missed_total = 0, rx_nombuf = 0;
 	unsigned long tx_packets_total = 0, tx_bytes_total = 0;
 	unsigned long tx_packets_err_total = 0;
 	const struct pmd_internals *internal = dev->data->dev_private;
@@ -751,6 +754,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			i < dev->data->nb_rx_queues; i++) {
 		stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts;
 		stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes;
+		rx_nombuf += internal->rx_queue[i].rx_stat.err_pkts;
 		rx_packets_total += stats->q_ipackets[i];
 		rx_bytes_total += stats->q_ibytes[i];
 		rx_missed_total += queue_missed_stat_get(dev, i);
@@ -771,6 +775,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->opackets = tx_packets_total;
 	stats->obytes = tx_bytes_total;
 	stats->oerrors = tx_packets_err_total;
+	stats->rx_nombuf = rx_nombuf;
 
 	return 0;
 }
@@ -784,6 +789,7 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1



* Re: [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics
  2021-09-09  2:45   ` [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics Qiming Chen
@ 2021-09-09  3:29     ` Stephen Hemminger
  2021-09-09 10:20       ` Ferruh Yigit
  2021-09-09  4:16     ` [dpdk-dev] [PATCH v4] " Qiming Chen
  1 sibling, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2021-09-09  3:29 UTC (permalink / raw)
  To: Qiming Chen; +Cc: dev, ferruh.yigit

On Thu,  9 Sep 2021 10:45:31 +0800
Qiming Chen <chenqiming_huawei@163.com> wrote:

> On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
> error counter is incremented, which makes the problem hard to diagnose.
> An mbuf allocation failure is counted in the rx_nombuf field.
> 
> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
> ---
> v2:
>   Fix coding style issues.
> v3:
>   1) The Tx path no longer frees unsent mbufs.
>   2) Count mbuf allocation failures in the rx_nombuf field.

Looks good, but the field "err_pkts" is a confusing name to me.

On Tx it means packets dropped because pcap_sendpacket() returned an error.
Looking inside libpcap, that means send() failed. On Linux this is
a send on a PF_PACKET socket, and it appears to be a blocking socket.
So these errors are not transient conditions.

On Rx it means packets dropped because we ran out of mbufs.

Perhaps a comment, or renaming the field, would help.
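The split between the two causes can be sketched as a plain-C mirror of the aggregation the later patch versions perform in eth_stats_get() (types and names below are illustrative, not the driver's):

```c
#include <assert.h>

/* Per-queue counters: err_pkts for pcap Rx/jumbo errors, rx_nombuf
 * for mbuf allocation failures. */
struct q_stat {
    unsigned long err_pkts;
    unsigned long rx_nombuf;
};

/* Device-level view, mirroring the rte_eth_stats fields involved. */
struct dev_stats {
    unsigned long ierrors;
    unsigned long rx_nombuf;
};

/* Roll the per-queue counters up into the device-level stats. */
static void stats_roll_up(const struct q_stat *q, int nb_q,
                          struct dev_stats *s)
{
    int i;

    s->ierrors = 0;
    s->rx_nombuf = 0;
    for (i = 0; i < nb_q; i++) {
        s->ierrors += q[i].err_pkts;
        s->rx_nombuf += q[i].rx_nombuf;
    }
}
```

Keeping the two causes in separate fields gives each counter a single meaning at the device level, which addresses the naming ambiguity.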


* [dpdk-dev] [PATCH v4] net/pcap: improve rx statistics
  2021-09-09  2:45   ` [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics Qiming Chen
  2021-09-09  3:29     ` Stephen Hemminger
@ 2021-09-09  4:16     ` Qiming Chen
  2021-09-09  8:05       ` Ferruh Yigit
  2021-09-09  8:23       ` [dpdk-dev] [PATCH v5] " Qiming Chen
  1 sibling, 2 replies; 12+ messages in thread
From: Qiming Chen @ 2021-09-09  4:16 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
error counter is incremented, which makes the problem hard to diagnose.
An mbuf allocation failure is counted in the rx_nombuf field.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
v2:
  Fix coding style issues.
v3:
  1) The Tx path no longer frees unsent mbufs.
  2) Count mbuf allocation failures in the rx_nombuf field.
v4:
  Add the rx_nombuf field.
---
 drivers/net/pcap/pcap_ethdev.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..47db6b9b33 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -51,6 +51,7 @@ struct queue_stat {
 	volatile unsigned long pkts;
 	volatile unsigned long bytes;
 	volatile unsigned long err_pkts;
+	volatile unsigned long rx_nombuf;
 };
 
 struct queue_missed_stat {
@@ -297,8 +298,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
-			break;
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.rx_nombuf++;
+			continue;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +314,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.rx_nombuf++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -742,7 +746,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	unsigned int i;
 	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
-	unsigned long rx_missed_total = 0;
+	unsigned long rx_missed_total = 0, rx_nombuf = 0;
 	unsigned long tx_packets_total = 0, tx_bytes_total = 0;
 	unsigned long tx_packets_err_total = 0;
 	const struct pmd_internals *internal = dev->data->dev_private;
@@ -751,6 +755,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			i < dev->data->nb_rx_queues; i++) {
 		stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts;
 		stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes;
+		rx_nombuf += internal->rx_queue[i].rx_stat.rx_nombuf;
 		rx_packets_total += stats->q_ipackets[i];
 		rx_bytes_total += stats->q_ibytes[i];
 		rx_missed_total += queue_missed_stat_get(dev, i);
@@ -771,6 +776,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->opackets = tx_packets_total;
 	stats->obytes = tx_bytes_total;
 	stats->oerrors = tx_packets_err_total;
+	stats->rx_nombuf = rx_nombuf;
 
 	return 0;
 }
@@ -784,6 +790,8 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
+		internal->rx_queue[i].rx_stat.rx_nombuf = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1



* Re: [dpdk-dev] [PATCH v4] net/pcap: improve rx statistics
  2021-09-09  4:16     ` [dpdk-dev] [PATCH v4] " Qiming Chen
@ 2021-09-09  8:05       ` Ferruh Yigit
  2021-09-09  8:23       ` [dpdk-dev] [PATCH v5] " Qiming Chen
  1 sibling, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-09-09  8:05 UTC (permalink / raw)
  To: Qiming Chen, dev

On 9/9/2021 5:16 AM, Qiming Chen wrote:
> On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
> error counter is incremented, which makes the problem hard to diagnose.
> An mbuf allocation failure is counted in the rx_nombuf field.
> 

Please fix './devtools/check-git-log.sh' warnings.

> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
> ---
> v2:
>   Fix coding style issues.
> v3:
>   1) The Tx path no longer frees unsent mbufs.
>   2) Count mbuf allocation failures in the rx_nombuf field.
> v4:
>   Add the rx_nombuf field.

<...>

> @@ -297,8 +298,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  			break;
>  
>  		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
> -		if (unlikely(mbuf == NULL))
> -			break;
> +		if (unlikely(mbuf == NULL)) {
> +			pcap_q->rx_stat.rx_nombuf++;
> +			continue;

Not sure about changing to 'continue' here. I guess both work, but if
allocating an mbuf failed, continuing the loop may cause more allocation
failures; 'break' may give more time for mbufs to become available.

Also, this patch is about adding stats, so let's not change the behavior
in it.

> +		}
>  
>  		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
>  			/* pcap packet will fit in the mbuf, can copy it */
> @@ -311,6 +314,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  						       mbuf,
>  						       packet,
>  						       header.caplen) == -1)) {
> +				pcap_q->rx_stat.rx_nombuf++;

This one should update 'err_pkts'.


* [dpdk-dev] [PATCH v5] net/pcap: improve rx statistics
  2021-09-09  4:16     ` [dpdk-dev] [PATCH v4] " Qiming Chen
  2021-09-09  8:05       ` Ferruh Yigit
@ 2021-09-09  8:23       ` Qiming Chen
  2021-09-09 10:20         ` Ferruh Yigit
  2021-09-09 12:28         ` [dpdk-dev] [PATCH v6] " Qiming Chen
  1 sibling, 2 replies; 12+ messages in thread
From: Qiming Chen @ 2021-09-09  8:23 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
error counter is incremented, which makes the problem hard to diagnose.
An mbuf allocation failure is counted in the rx_nombuf field.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
 drivers/net/pcap/pcap_ethdev.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..766e239f02 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -51,6 +51,7 @@ struct queue_stat {
 	volatile unsigned long pkts;
 	volatile unsigned long bytes;
 	volatile unsigned long err_pkts;
+	volatile unsigned long rx_nombuf;
 };
 
 struct queue_missed_stat {
@@ -297,8 +298,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.rx_nombuf++;
 			break;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +314,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -742,7 +746,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	unsigned int i;
 	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
-	unsigned long rx_missed_total = 0;
+	unsigned long rx_missed_total = 0, rx_nombuf = 0, rx_err_total = 0;
 	unsigned long tx_packets_total = 0, tx_bytes_total = 0;
 	unsigned long tx_packets_err_total = 0;
 	const struct pmd_internals *internal = dev->data->dev_private;
@@ -751,6 +755,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			i < dev->data->nb_rx_queues; i++) {
 		stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts;
 		stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes;
+		rx_nombuf += internal->rx_queue[i].rx_stat.rx_nombuf;
+		rx_err_total += internal->rx_queue[i].rx_stat.err_pkts;
 		rx_packets_total += stats->q_ipackets[i];
 		rx_bytes_total += stats->q_ibytes[i];
 		rx_missed_total += queue_missed_stat_get(dev, i);
@@ -768,6 +774,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->ipackets = rx_packets_total;
 	stats->ibytes = rx_bytes_total;
 	stats->imissed = rx_missed_total;
+	stats->ierrors = rx_err_total;
+	stats->rx_nombuf = rx_nombuf;
 	stats->opackets = tx_packets_total;
 	stats->obytes = tx_bytes_total;
 	stats->oerrors = tx_packets_err_total;
@@ -784,6 +792,8 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
+		internal->rx_queue[i].rx_stat.rx_nombuf = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1



* Re: [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics
  2021-09-09  3:29     ` Stephen Hemminger
@ 2021-09-09 10:20       ` Ferruh Yigit
  0 siblings, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-09-09 10:20 UTC (permalink / raw)
  To: Stephen Hemminger, Qiming Chen; +Cc: dev

On 9/9/2021 4:29 AM, Stephen Hemminger wrote:
> On Thu,  9 Sep 2021 10:45:31 +0800
> Qiming Chen <chenqiming_huawei@163.com> wrote:
> 
>> On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
>> error counter is incremented, which makes the problem hard to diagnose.
>> An mbuf allocation failure is counted in the rx_nombuf field.
>>
>> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
>> ---
>> v2:
>>   Fix coding style issues.
>> v3:
>>   1) The Tx path no longer frees unsent mbufs.
>>   2) Count mbuf allocation failures in the rx_nombuf field.
> 
> Looks good, but the field "err_pkts" is a confusing name to me.
> 
> On Tx it means packets dropped because pcap_sendpacket() returned an error.
> Looking inside libpcap, that means send() failed. On Linux this is
> a send on a PF_PACKET socket, and it appears to be a blocking socket.
> So these errors are not transient conditions.
> 
> On Rx it means packets dropped because we ran out of mbufs.
> 

In later versions of the patch, running out of mbufs updates the 'rx_nombuf'
value, and a pcap Rx API error updates 'err_pkts'.

> Perhaps a comment, or renaming the field, would help.
> 



* Re: [dpdk-dev] [PATCH v5] net/pcap: improve rx statistics
  2021-09-09  8:23       ` [dpdk-dev] [PATCH v5] " Qiming Chen
@ 2021-09-09 10:20         ` Ferruh Yigit
  2021-09-09 12:28         ` [dpdk-dev] [PATCH v6] " Qiming Chen
  1 sibling, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-09-09 10:20 UTC (permalink / raw)
  To: Qiming Chen, dev

On 9/9/2021 9:23 AM, Qiming Chen wrote:
> On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
> error counter is incremented, which makes the problem hard to diagnose.
> An mbuf allocation failure is counted in the rx_nombuf field.
> 
> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>

<...>

> @@ -742,7 +746,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>  {
>  	unsigned int i;
>  	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
> -	unsigned long rx_missed_total = 0;
> +	unsigned long rx_missed_total = 0, rx_nombuf = 0, rx_err_total = 0;

Since all the other variables with the same purpose have a '_total' suffix,
let's add it to 'rx_nombuf' too.

And can you please put the two new variables on a new line:
unsigned long rx_nombuf_total = 0, rx_err_total = 0;


* [dpdk-dev] [PATCH v6] net/pcap: improve rx statistics
  2021-09-09  8:23       ` [dpdk-dev] [PATCH v5] " Qiming Chen
  2021-09-09 10:20         ` Ferruh Yigit
@ 2021-09-09 12:28         ` Qiming Chen
  2021-09-09 14:45           ` Ferruh Yigit
  1 sibling, 1 reply; 12+ messages in thread
From: Qiming Chen @ 2021-09-09 12:28 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Qiming Chen

On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
error counter is incremented, which makes the problem hard to diagnose.
An mbuf allocation failure is counted in the rx_nombuf field.

Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>
---
 drivers/net/pcap/pcap_ethdev.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a43..497d58819b 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -51,6 +51,7 @@ struct queue_stat {
 	volatile unsigned long pkts;
 	volatile unsigned long bytes;
 	volatile unsigned long err_pkts;
+	volatile unsigned long rx_nombuf;
 };
 
 struct queue_missed_stat {
@@ -297,8 +298,10 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			break;
 
 		mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
-		if (unlikely(mbuf == NULL))
+		if (unlikely(mbuf == NULL)) {
+			pcap_q->rx_stat.rx_nombuf++;
 			break;
+		}
 
 		if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
 			/* pcap packet will fit in the mbuf, can copy it */
@@ -311,6 +314,7 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 						       mbuf,
 						       packet,
 						       header.caplen) == -1)) {
+				pcap_q->rx_stat.err_pkts++;
 				rte_pktmbuf_free(mbuf);
 				break;
 			}
@@ -743,6 +747,7 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	unsigned int i;
 	unsigned long rx_packets_total = 0, rx_bytes_total = 0;
 	unsigned long rx_missed_total = 0;
+	unsigned long rx_nombuf_total = 0, rx_err_total = 0;
 	unsigned long tx_packets_total = 0, tx_bytes_total = 0;
 	unsigned long tx_packets_err_total = 0;
 	const struct pmd_internals *internal = dev->data->dev_private;
@@ -751,6 +756,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			i < dev->data->nb_rx_queues; i++) {
 		stats->q_ipackets[i] = internal->rx_queue[i].rx_stat.pkts;
 		stats->q_ibytes[i] = internal->rx_queue[i].rx_stat.bytes;
+		rx_nombuf_total += internal->rx_queue[i].rx_stat.rx_nombuf;
+		rx_err_total += internal->rx_queue[i].rx_stat.err_pkts;
 		rx_packets_total += stats->q_ipackets[i];
 		rx_bytes_total += stats->q_ibytes[i];
 		rx_missed_total += queue_missed_stat_get(dev, i);
@@ -768,6 +775,8 @@ eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->ipackets = rx_packets_total;
 	stats->ibytes = rx_bytes_total;
 	stats->imissed = rx_missed_total;
+	stats->ierrors = rx_err_total;
+	stats->rx_nombuf = rx_nombuf_total;
 	stats->opackets = tx_packets_total;
 	stats->obytes = tx_bytes_total;
 	stats->oerrors = tx_packets_err_total;
@@ -784,6 +793,8 @@ eth_stats_reset(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		internal->rx_queue[i].rx_stat.pkts = 0;
 		internal->rx_queue[i].rx_stat.bytes = 0;
+		internal->rx_queue[i].rx_stat.err_pkts = 0;
+		internal->rx_queue[i].rx_stat.rx_nombuf = 0;
 		queue_missed_stat_reset(dev, i);
 	}
 
-- 
2.30.1.windows.1



* Re: [dpdk-dev] [PATCH v6] net/pcap: improve rx statistics
  2021-09-09 12:28         ` [dpdk-dev] [PATCH v6] " Qiming Chen
@ 2021-09-09 14:45           ` Ferruh Yigit
  0 siblings, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2021-09-09 14:45 UTC (permalink / raw)
  To: Qiming Chen, dev

On 9/9/2021 1:28 PM, Qiming Chen wrote:
> On the receive side, if mbuf allocation or the jumbo-frame copy fails, no
> error counter is incremented, which makes the problem hard to diagnose.
> An mbuf allocation failure is counted in the rx_nombuf field.
> 
> Signed-off-by: Qiming Chen <chenqiming_huawei@163.com>

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>

Applied to dpdk-next-net/main, thanks.



end of thread, other threads:[~2021-09-09 14:45 UTC | newest]

Thread overview: 12+ messages
2021-08-26  3:16 [dpdk-dev] [PATCH] net/pcap: improve rxtx statistics Qiming Chen
2021-08-26  3:23 ` [dpdk-dev] [PATCH v2] " Qiming Chen
2021-09-08 12:29   ` Ferruh Yigit
2021-09-09  2:45   ` [dpdk-dev] [PATCH v3] net/pcap: improve rx statistics Qiming Chen
2021-09-09  3:29     ` Stephen Hemminger
2021-09-09 10:20       ` Ferruh Yigit
2021-09-09  4:16     ` [dpdk-dev] [PATCH v4] " Qiming Chen
2021-09-09  8:05       ` Ferruh Yigit
2021-09-09  8:23       ` [dpdk-dev] [PATCH v5] " Qiming Chen
2021-09-09 10:20         ` Ferruh Yigit
2021-09-09 12:28         ` [dpdk-dev] [PATCH v6] " Qiming Chen
2021-09-09 14:45           ` Ferruh Yigit
