* [PATCH] net/gve: fix Rx no mbufs stats counter update @ 2023-02-19 0:30 Levend Sayar 2023-02-19 17:35 ` Stephen Hemminger 2023-02-20 15:19 ` [PATCH v2] " Levend Sayar 0 siblings, 2 replies; 26+ messages in thread From: Levend Sayar @ 2023-02-19 0:30 UTC (permalink / raw) To: junfeng.guo; +Cc: dev, Levend Sayar rx no_mbufs stats counter update is added for another error case. Fixes: 4f6b1dd8240c ("net/gve: support basic statistics") Cc: junfeng.guo@intel.com Signed-off-by: Levend Sayar <levendsayar@gmail.com> --- drivers/net/gve/gve_rx.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 66fbcf3930..b0427731f8 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -24,6 +24,7 @@ gve_rx_refill(struct gve_rx_queue *rxq) nmb = rte_pktmbuf_alloc(rxq->mpool); if (!nmb) break; + rxq->sw_ring[idx + i] = nmb; } if (i != nb_alloc) { @@ -59,9 +60,13 @@ gve_rx_refill(struct gve_rx_queue *rxq) nmb = rte_pktmbuf_alloc(rxq->mpool); if (!nmb) break; + rxq->sw_ring[idx + i] = nmb; } - nb_alloc = i; + if (i != nb_alloc) { + rxq->no_mbufs += nb_alloc - i; + nb_alloc = i; + } } rxq->nb_avail -= nb_alloc; next_avail += nb_alloc; -- 2.37.1 (Apple Git-137.1) ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH] net/gve: fix Rx no mbufs stats counter update 2023-02-19 0:30 [PATCH] net/gve: fix Rx no mbufs stats counter update Levend Sayar @ 2023-02-19 17:35 ` Stephen Hemminger 2023-02-19 20:43 ` Levend Sayar 2023-02-20 15:19 ` [PATCH v2] " Levend Sayar 1 sibling, 1 reply; 26+ messages in thread From: Stephen Hemminger @ 2023-02-19 17:35 UTC (permalink / raw) To: Levend Sayar; +Cc: junfeng.guo, dev On Sun, 19 Feb 2023 03:30:59 +0300 Levend Sayar <levendsayar@gmail.com> wrote: > rx no_mbufs stats counter update is added for another error case. > > Fixes: 4f6b1dd8240c ("net/gve: support basic statistics") > Cc: junfeng.guo@intel.com > > Signed-off-by: Levend Sayar <levendsayar@gmail.com> > --- > drivers/net/gve/gve_rx.c | 7 ++++++- > 1 file changed, 6 insertions(+), 1 deletion(-) > > diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c > index 66fbcf3930..b0427731f8 100644 > --- a/drivers/net/gve/gve_rx.c > +++ b/drivers/net/gve/gve_rx.c > @@ -24,6 +24,7 @@ gve_rx_refill(struct gve_rx_queue *rxq) > nmb = rte_pktmbuf_alloc(rxq->mpool); > if (!nmb) > break; > + > rxq->sw_ring[idx + i] = nmb; > } > if (i != nb_alloc) { Looks like accidental whitespace change included in this patch. > @@ -59,9 +60,13 @@ gve_rx_refill(struct gve_rx_queue *rxq) > nmb = rte_pktmbuf_alloc(rxq->mpool); > if (!nmb) > break; > + > rxq->sw_ring[idx + i] = nmb; > } > - nb_alloc = i; > + if (i != nb_alloc) { > + rxq->no_mbufs += nb_alloc - i; > + nb_alloc = i; > + } Would be better to add unlikely() here like: if (unlikely(i < nb_alloc)) { rxq->no_mbufs += nb_alloc - i; nb_alloc = i; } Or eliminate conditional branch in hot path completely. rxq->no_mbufs += nb_alloc - i; nb_alloc = i; Or better yet refactor code here to use rte_pktmbuf_alloc_bulk() which does single ring operation. > } > rxq->nb_avail -= nb_alloc; > next_avail += nb_alloc; ^ permalink raw reply [flat|nested] 26+ messages in thread
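The second option Stephen mentions, dropping the conditional entirely, works because the unconditional form is a no-op when every allocation succeeded. A standalone sketch of that accounting (plain C, no DPDK dependencies; the function and variable names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Running counter of failed mbuf allocations, as in rxq->no_mbufs. */
static uint64_t no_mbufs;

/* 'tried' allocations were attempted, 'got' of them succeeded.
 * Charging the shortfall unconditionally adds zero when got == tried,
 * so no branch is needed in the hot path. */
static uint16_t account_refill(uint16_t tried, uint16_t got)
{
	no_mbufs += tried - got;
	return got; /* becomes the new nb_alloc */
}
```

When got == tried the added term is zero, matching the guarded version's behavior while avoiding a conditional branch in the hot path.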
* Re: [PATCH] net/gve: fix Rx no mbufs stats counter update 2023-02-19 17:35 ` Stephen Hemminger @ 2023-02-19 20:43 ` Levend Sayar 2023-02-19 22:59 ` Stephen Hemminger 0 siblings, 1 reply; 26+ messages in thread From: Levend Sayar @ 2023-02-19 20:43 UTC (permalink / raw) To: Stephen Hemminger; +Cc: junfeng.guo, dev [-- Attachment #1: Type: text/plain, Size: 2078 bytes --] > On 19 Feb 2023, at 20:35, Stephen Hemminger <stephen@networkplumber.org> wrote: > > On Sun, 19 Feb 2023 03:30:59 +0300 > Levend Sayar <levendsayar@gmail.com <mailto:levendsayar@gmail.com>> wrote: > >> rx no_mbufs stats counter update is added for another error case. >> >> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics") >> Cc: junfeng.guo@intel.com >> >> Signed-off-by: Levend Sayar <levendsayar@gmail.com> >> --- >> drivers/net/gve/gve_rx.c | 7 ++++++- >> 1 file changed, 6 insertions(+), 1 deletion(-) >> >> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c >> index 66fbcf3930..b0427731f8 100644 >> --- a/drivers/net/gve/gve_rx.c >> +++ b/drivers/net/gve/gve_rx.c >> @@ -24,6 +24,7 @@ gve_rx_refill(struct gve_rx_queue *rxq) >> nmb = rte_pktmbuf_alloc(rxq->mpool); >> if (!nmb) >> break; >> + >> rxq->sw_ring[idx + i] = nmb; >> } >> if (i != nb_alloc) { > > Looks like accidental whitespace change included in this patch. LS: Right. Let me correct. >> @@ -59,9 +60,13 @@ gve_rx_refill(struct gve_rx_queue *rxq) >> nmb = rte_pktmbuf_alloc(rxq->mpool); >> if (!nmb) >> break; >> + >> rxq->sw_ring[idx + i] = nmb; >> } >> - nb_alloc = i; >> + if (i != nb_alloc) { >> + rxq->no_mbufs += nb_alloc - i; >> + nb_alloc = i; >> + } > > Would be better to add unlikely() here like: > if (unlikely(i < nb_alloc)) { > rxq->no_mbufs += nb_alloc - i; > nb_alloc = i; > } > > Or eliminate conditional branch in hot path completely. > rxq->no_mbufs += nb_alloc - i; > nb_alloc = i; > > Or better yet refactor code here to use rte_pktmbuf_alloc_bulk() which > does single ring operation. 
> >> } >> rxq->nb_avail -= nb_alloc; >> next_avail += nb_alloc; LS: “unlikely” can be added. You’re right. Code already tries to make a bulk allocation first. If that bulk allocation does not work, it tries to allocate one by one. I will supersede this one and create v2. Thanks Stephen. Best, Levend
* Re: [PATCH] net/gve: fix Rx no mbufs stats counter update 2023-02-19 20:43 ` Levend Sayar @ 2023-02-19 22:59 ` Stephen Hemminger 0 siblings, 0 replies; 26+ messages in thread From: Stephen Hemminger @ 2023-02-19 22:59 UTC (permalink / raw) To: Levend Sayar; +Cc: junfeng.guo, dev On Sun, 19 Feb 2023 23:43:08 +0300 Levend Sayar <levendsayar@gmail.com> wrote: > LS: “unlikely” can be added. You’re right. Code already tries to make a bulk allocation first. > If that bulk allocation does not work, it tries to allocate one by one. That seems like an unnecessary step and unlikely to help, unless the user abuses the driver by giving it a very small mbuf pool.
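The bulk-then-singles strategy being debated can be modeled standalone (plain C; 'toy_pool' and all names below are made-up stand-ins — the real driver allocates from an rte_mempool, and rte_pktmbuf_alloc_bulk() is likewise all-or-nothing):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the refill strategy: a bulk grab that is all-or-nothing,
 * with a one-by-one fallback when the pool runs low. 'avail' is just a
 * counter of free buffers. */
struct toy_pool { unsigned avail; };

static int bulk_alloc(struct toy_pool *p, unsigned n)
{
	if (p->avail < n)
		return -1; /* all-or-nothing, like the real bulk API */
	p->avail -= n;
	return 0;
}

static unsigned refill(struct toy_pool *p, unsigned want, uint64_t *no_mbufs)
{
	unsigned i;

	if (bulk_alloc(p, want) == 0)
		return want;

	/* fallback: take whatever singles are left */
	for (i = 0; i < want && p->avail > 0; i++)
		p->avail--;
	*no_mbufs += want - i; /* charge only the shortfall */
	return i;
}
```

Stephen's point is that the fallback path only matters when the pool is nearly exhausted; with a sanely sized pool the bulk grab succeeds and the singles loop never runs.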
* [PATCH v2] net/gve: fix Rx no mbufs stats counter update 2023-02-19 0:30 [PATCH] net/gve: fix Rx no mbufs stats counter update Levend Sayar 2023-02-19 17:35 ` Stephen Hemminger @ 2023-02-20 15:19 ` Levend Sayar 2023-02-20 21:11 ` [PATCH v3 1/2] " Levend Sayar 1 sibling, 1 reply; 26+ messages in thread From: Levend Sayar @ 2023-02-20 15:19 UTC (permalink / raw) To: junfeng.guo; +Cc: dev, Levend Sayar rx no_mbufs stats counter update is added for another error case. Fixes: 4f6b1dd8240c ("net/gve: support basic statistics") Cc: junfeng.guo@intel.com Signed-off-by: Levend Sayar <levendsayar@gmail.com> --- drivers/net/gve/gve_rx.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 66fbcf3930..d346efa57c 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq) break; rxq->sw_ring[idx + i] = nmb; } - nb_alloc = i; + if (i != nb_alloc) { + rxq->no_mbufs += nb_alloc - i; + nb_alloc = i; + } } rxq->nb_avail -= nb_alloc; next_avail += nb_alloc; -- 2.37.1 (Apple Git-137.1) ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update 2023-02-20 15:19 ` [PATCH v2] " Levend Sayar @ 2023-02-20 21:11 ` Levend Sayar 2023-02-20 21:11 ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar ` (2 more replies) 0 siblings, 3 replies; 26+ messages in thread From: Levend Sayar @ 2023-02-20 21:11 UTC (permalink / raw) To: junfeng.guo; +Cc: dev, Levend Sayar rx no_mbufs stats counter update is added for another error case. Fixes: 4f6b1dd8240c ("net/gve: support basic statistics") Cc: junfeng.guo@intel.com Signed-off-by: Levend Sayar <levendsayar@gmail.com> --- drivers/net/gve/gve_rx.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index 66fbcf3930..d346efa57c 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq) break; rxq->sw_ring[idx + i] = nmb; } - nb_alloc = i; + if (i != nb_alloc) { + rxq->no_mbufs += nb_alloc - i; + nb_alloc = i; + } } rxq->nb_avail -= nb_alloc; next_avail += nb_alloc; -- 2.37.1 (Apple Git-137.1) ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats 2023-02-20 21:11 ` [PATCH v3 1/2] " Levend Sayar @ 2023-02-20 21:11 ` Levend Sayar 2023-02-20 22:57 ` Ferruh Yigit 2023-02-21 14:13 ` [PATCH v4] " Levend Sayar 2023-02-20 22:57 ` [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update Ferruh Yigit 2023-02-21 15:58 ` Ferruh Yigit 2 siblings, 2 replies; 26+ messages in thread From: Levend Sayar @ 2023-02-20 21:11 UTC (permalink / raw) To: junfeng.guo; +Cc: dev, Levend Sayar Google Virtual NIC rx/tx queue stats are added as extended stats. Signed-off-by: Levend Sayar <levendsayar@gmail.com> --- drivers/net/gve/gve_ethdev.c | 143 +++++++++++++++++++++++++++++++---- drivers/net/gve/gve_ethdev.h | 27 +++++-- drivers/net/gve/gve_rx.c | 10 +-- drivers/net/gve/gve_tx.c | 11 +-- 4 files changed, 160 insertions(+), 31 deletions(-) diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c index fef2458a16..8b6e4b1322 100644 --- a/drivers/net/gve/gve_ethdev.c +++ b/drivers/net/gve/gve_ethdev.c @@ -9,6 +9,18 @@ const char gve_version_str[] = GVE_VERSION; static const char gve_version_prefix[] = GVE_VERSION_PREFIX; +static const struct gve_xstats_name_offset tx_xstats_name_offset[] = { + { "packets", offsetof(struct gve_tx_stats, packets) }, + { "bytes", offsetof(struct gve_tx_stats, bytes) }, + { "errors", offsetof(struct gve_tx_stats, errors) }, +}; + +static const struct gve_xstats_name_offset rx_xstats_name_offset[] = { + { "packets", offsetof(struct gve_rx_stats, packets) }, + { "bytes", offsetof(struct gve_rx_stats, bytes) }, + { "errors", offsetof(struct gve_rx_stats, errors) }, +}; + static void gve_write_version(uint8_t *driver_version_register) { @@ -328,9 +340,9 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) if (txq == NULL) continue; - stats->opackets += txq->packets; - stats->obytes += txq->bytes; - stats->oerrors += txq->errors; + stats->opackets += txq->stats.packets; + stats->obytes += 
txq->stats.bytes; + stats->oerrors += txq->stats.errors; } for (i = 0; i < dev->data->nb_rx_queues; i++) { @@ -338,10 +350,10 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) if (rxq == NULL) continue; - stats->ipackets += rxq->packets; - stats->ibytes += rxq->bytes; - stats->ierrors += rxq->errors; - stats->rx_nombuf += rxq->no_mbufs; + stats->ipackets += rxq->stats.packets; + stats->ibytes += rxq->stats.bytes; + stats->ierrors += rxq->stats.errors; + stats->rx_nombuf += rxq->stats.no_mbufs; } return 0; @@ -357,9 +369,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev) if (txq == NULL) continue; - txq->packets = 0; - txq->bytes = 0; - txq->errors = 0; + memset(&txq->stats, 0, sizeof(txq->stats)); } for (i = 0; i < dev->data->nb_rx_queues; i++) { @@ -367,10 +377,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev) if (rxq == NULL) continue; - rxq->packets = 0; - rxq->bytes = 0; - rxq->errors = 0; - rxq->no_mbufs = 0; + memset(&rxq->stats, 0, sizeof(rxq->stats)); } return 0; @@ -403,6 +410,112 @@ gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return 0; } +static int +gve_xstats_count(struct rte_eth_dev *dev) +{ + uint16_t i, count = 0; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + if (dev->data->tx_queues[i]) + count += RTE_DIM(tx_xstats_name_offset); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + if (dev->data->rx_queues[i]) + count += RTE_DIM(rx_xstats_name_offset); + } + + return count; +} + +static int +gve_xstats_get(struct rte_eth_dev *dev, + struct rte_eth_xstat *xstats, + unsigned int size) +{ + uint16_t i, count = 0; + + if (!xstats) + return (size == 0) ? 
gve_xstats_count(dev) : -EINVAL; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + const struct gve_tx_queue *txq = dev->data->tx_queues[i]; + if (!txq) + continue; + + if (count >= size) + break; + + uint16_t j = 0; + const char *stats = (const char *)&txq->stats; + for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) { + xstats[count].id = count; + xstats[count].value = *(const uint64_t *) + (stats + tx_xstats_name_offset[j].offset); + } + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + const struct gve_rx_queue *rxq = dev->data->rx_queues[i]; + if (!rxq) + continue; + + if (count >= size) + break; + + uint16_t j = 0; + const char *stats = (const char *)&rxq->stats; + for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) { + xstats[count].id = count; + xstats[count].value = *(const uint64_t *) + (stats + rx_xstats_name_offset[j].offset); + } + } + + return count; +} + +static int +gve_xstats_get_names(struct rte_eth_dev *dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int size) +{ + uint16_t i, count = 0; + + if (!xstats_names) + return (size == 0) ? 
gve_xstats_count(dev) : -EINVAL; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + if (!dev->data->tx_queues[i]) + continue; + + if (count >= size) + break; + + uint16_t j = 0; + for (; j < RTE_DIM(tx_xstats_name_offset); j++) + snprintf(xstats_names[count++].name, + RTE_ETH_XSTATS_NAME_SIZE, + "tx_q%u_%s", i, tx_xstats_name_offset[j].name); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + if (!dev->data->rx_queues[i]) + continue; + + if (count >= size) + break; + + uint16_t j = 0; + for (; j < RTE_DIM(rx_xstats_name_offset); j++) + snprintf(xstats_names[count++].name, + RTE_ETH_XSTATS_NAME_SIZE, + "rx_q%u_%s", i, rx_xstats_name_offset[j].name); + } + + return count; +} + static const struct eth_dev_ops gve_eth_dev_ops = { .dev_configure = gve_dev_configure, .dev_start = gve_dev_start, @@ -417,6 +530,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = { .stats_get = gve_dev_stats_get, .stats_reset = gve_dev_stats_reset, .mtu_set = gve_dev_mtu_set, + .xstats_get = gve_xstats_get, + .xstats_get_names = gve_xstats_get_names, }; static void diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h index 64e571bcae..bc351d808b 100644 --- a/drivers/net/gve/gve_ethdev.h +++ b/drivers/net/gve/gve_ethdev.h @@ -67,6 +67,24 @@ struct gve_tx_iovec { uint32_t iov_len; }; +struct gve_tx_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; +}; + +struct gve_rx_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t no_mbufs; +}; + +struct gve_xstats_name_offset { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + struct gve_tx_queue { volatile union gve_tx_desc *tx_desc_ring; const struct rte_memzone *mz; @@ -93,9 +111,7 @@ struct gve_tx_queue { struct gve_tx_iovec *iov_ring; /* stats items */ - uint64_t packets; - uint64_t bytes; - uint64_t errors; + struct gve_tx_stats stats; uint16_t port_id; uint16_t queue_id; @@ -136,10 +152,7 @@ struct gve_rx_queue { struct gve_queue_page_list *qpl; /* stats 
items */ - uint64_t packets; - uint64_t bytes; - uint64_t errors; - uint64_t no_mbufs; + struct gve_rx_stats stats; struct gve_priv *hw; const struct rte_memzone *qres_mz; diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c index d346efa57c..b52f924689 100644 --- a/drivers/net/gve/gve_rx.c +++ b/drivers/net/gve/gve_rx.c @@ -27,7 +27,7 @@ gve_rx_refill(struct gve_rx_queue *rxq) rxq->sw_ring[idx + i] = nmb; } if (i != nb_alloc) { - rxq->no_mbufs += nb_alloc - i; + rxq->stats.no_mbufs += nb_alloc - i; nb_alloc = i; } } @@ -62,7 +62,7 @@ gve_rx_refill(struct gve_rx_queue *rxq) rxq->sw_ring[idx + i] = nmb; } if (i != nb_alloc) { - rxq->no_mbufs += nb_alloc - i; + rxq->stats.no_mbufs += nb_alloc - i; nb_alloc = i; } } @@ -106,7 +106,7 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) break; if (rxd->flags_seq & GVE_RXF_ERR) { - rxq->errors++; + rxq->stats.errors++; continue; } @@ -154,8 +154,8 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) gve_rx_refill(rxq); if (nb_rx) { - rxq->packets += nb_rx; - rxq->bytes += bytes; + rxq->stats.packets += nb_rx; + rxq->stats.bytes += bytes; } return nb_rx; diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c index 9b41c59358..fee3b939c7 100644 --- a/drivers/net/gve/gve_tx.c +++ b/drivers/net/gve/gve_tx.c @@ -366,9 +366,9 @@ gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) txq->tx_tail = tx_tail; txq->sw_tail = sw_id; - txq->packets += nb_tx; - txq->bytes += bytes; - txq->errors += nb_pkts - nb_tx; + txq->stats.packets += nb_tx; + txq->stats.bytes += bytes; + txq->stats.errors += nb_pkts - nb_tx; } return nb_tx; @@ -455,8 +455,9 @@ gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail); txq->tx_tail = tx_tail; - txq->packets += nb_tx; - txq->bytes += bytes; + txq->stats.packets += nb_tx; + txq->stats.bytes += bytes; + txq->stats.errors += nb_pkts - 
nb_tx; } return nb_tx; -- 2.37.1 (Apple Git-137.1) ^ permalink raw reply [flat|nested] 26+ messages in thread
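The name/offset table technique this patch uses for xstats can be illustrated on its own (plain C; the structs below are simplified stand-ins for gve_rx_stats and gve_xstats_name_offset, not the driver's actual types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Each table entry maps a display name to the byte offset of a
 * uint64_t field; values are then read generically via the offset. */
struct demo_stats { uint64_t packets; uint64_t bytes; uint64_t errors; };

struct name_offset { const char *name; unsigned int offset; };

static const struct name_offset demo_offsets[] = {
	{ "packets", offsetof(struct demo_stats, packets) },
	{ "bytes",   offsetof(struct demo_stats, bytes)   },
	{ "errors",  offsetof(struct demo_stats, errors)  },
};

/* Read the i-th stat without naming its field explicitly. */
static uint64_t read_stat(const struct demo_stats *s, size_t i)
{
	const char *base = (const char *)s;
	return *(const uint64_t *)(base + demo_offsets[i].offset);
}
```

This keeps the get() loop fully generic: adding a counter means adding one struct field and one table row, with no change to the copy loop.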
* Re: [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats 2023-02-20 21:11 ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar @ 2023-02-20 22:57 ` Ferruh Yigit 2023-02-21 11:11 ` Levend Sayar 2023-02-21 14:13 ` [PATCH v4] " Levend Sayar 1 sibling, 1 reply; 26+ messages in thread From: Ferruh Yigit @ 2023-02-20 22:57 UTC (permalink / raw) To: Levend Sayar, junfeng.guo; +Cc: dev On 2/20/2023 9:11 PM, Levend Sayar wrote: > Google Virtual NIC rx/tx queue stats are added as extended stats. > > Signed-off-by: Levend Sayar <levendsayar@gmail.com> Thank you for the update, mainly looks good to me, there are a few minor questions/comments below. <...> > +static const struct gve_xstats_name_offset tx_xstats_name_offset[] = { > + { "packets", offsetof(struct gve_tx_stats, packets) }, > + { "bytes", offsetof(struct gve_tx_stats, bytes) }, > + { "errors", offsetof(struct gve_tx_stats, errors) }, > +}; > + It is possible to create macros to get offsets to prevent any mistakes but not quite sure if it is needed with above limited number of items, so up to you, I mean something like: #define RX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_rx_stats, x) #define TX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_tx_stats, x) > +static const struct gve_xstats_name_offset rx_xstats_name_offset[] = { > + { "packets", offsetof(struct gve_rx_stats, packets) }, > + { "bytes", offsetof(struct gve_rx_stats, bytes) }, > + { "errors", offsetof(struct gve_rx_stats, errors) }, > +}; > + Is 'no_mbufs' omitted intentionally? <...> > > +static int > +gve_xstats_count(struct rte_eth_dev *dev) > +{ > + uint16_t i, count = 0; > + > + for (i = 0; i < dev->data->nb_tx_queues; i++) { > + if (dev->data->tx_queues[i]) Normally 'dev->data->tx_queues[i]' shouldn't be NULL, but I can see the driver checks this in a few locations. Is there a case that 'dev->data->tx_queues[i]' can be NULL where "0 <= i < dev->data->nb_tx_queues"?
> + count += RTE_DIM(tx_xstats_name_offset); > + } > + > + for (i = 0; i < dev->data->nb_rx_queues; i++) { > + if (dev->data->rx_queues[i]) > + count += RTE_DIM(rx_xstats_name_offset); > + } > + > + return count; > +} > + > +static int > +gve_xstats_get(struct rte_eth_dev *dev, > + struct rte_eth_xstat *xstats, > + unsigned int size) > +{ > + uint16_t i, count = 0; > + > + if (!xstats) > + return (size == 0) ? gve_xstats_count(dev) : -EINVAL; Error case (xstats == NULL && size != 0) should be handled by ethdev, so driver can only check "xstats == NULL" case. btw, although we have mixed usage and not very strict on it, coding convention requires testing against NULL [1] instead of '!'. [1] https://doc.dpdk.org/guides/contributing/coding_style.html#null-pointers > + > + for (i = 0; i < dev->data->nb_tx_queues; i++) { > + const struct gve_tx_queue *txq = dev->data->tx_queues[i]; > + if (!txq) > + continue; > + similar to above comment, I expect 'txq' not to be NULL, isn't this correct for the driver? > + if (count >= size) > + break; > + Instead of above check in a loop, it is possible to check once at the beginning of the function like count = gve_xstats_count(dev) if (size < count) return count; Because when there is not enough room in the xstats array, API doesn't expect existing elements of array to be correct, returning a value bigger than 'size' indicates error case and content of xstats is invalid anyway. > + uint16_t j = 0; > + const char *stats = (const char *)&txq->stats; Can you please move variable declarations at the beginning of the scope, for above variables it is just after for statement, according coding convention. 
> + for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) { > + xstats[count].id = count; > + xstats[count].value = *(const uint64_t *) > + (stats + tx_xstats_name_offset[j].offset); > + } > + } > + > + for (i = 0; i < dev->data->nb_rx_queues; i++) { > + const struct gve_rx_queue *rxq = dev->data->rx_queues[i]; > + if (!rxq) > + continue; > + > + if (count >= size) > + break; > + > + uint16_t j = 0; > + const char *stats = (const char *)&rxq->stats; > + for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) { > + xstats[count].id = count; > + xstats[count].value = *(const uint64_t *) > + (stats + rx_xstats_name_offset[j].offset); > + } > + } > + > + return count; > +} > + > +static int > +gve_xstats_get_names(struct rte_eth_dev *dev, > + struct rte_eth_xstat_name *xstats_names, > + unsigned int size) > +{ > + uint16_t i, count = 0; > + > + if (!xstats_names) > + return (size == 0) ? gve_xstats_count(dev) : -EINVAL; > + > + for (i = 0; i < dev->data->nb_tx_queues; i++) { > + if (!dev->data->tx_queues[i]) > + continue; > + > + if (count >= size) > + break; > + > + uint16_t j = 0; > + for (; j < RTE_DIM(tx_xstats_name_offset); j++) > + snprintf(xstats_names[count++].name, > + RTE_ETH_XSTATS_NAME_SIZE, > + "tx_q%u_%s", i, tx_xstats_name_offset[j].name); > + } > + > + for (i = 0; i < dev->data->nb_rx_queues; i++) { > + if (!dev->data->rx_queues[i]) > + continue; > + > + if (count >= size) > + break; > + > + uint16_t j = 0; > + for (; j < RTE_DIM(rx_xstats_name_offset); j++) > + snprintf(xstats_names[count++].name, > + RTE_ETH_XSTATS_NAME_SIZE, > + "rx_q%u_%s", i, rx_xstats_name_offset[j].name); > + } > + > + return count; > +} > + comments on 'gve_xstats_get()' valid here too. ^ permalink raw reply [flat|nested] 26+ messages in thread
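The sizing contract Ferruh describes — return the required count when the caller's array is too small, without filling it — can be sketched with a toy getter (plain C; 'demo_xstats_get' and NUM_STATS are hypothetical, not the real ethdev API):

```c
#include <assert.h>
#include <stddef.h>

#define NUM_STATS 5 /* pretend the device exposes five stats */

/* Returns the number of stats available. If the caller's buffer is
 * missing or too small, return the required count without writing
 * anything; the caller is expected to re-query with a larger array. */
static int demo_xstats_get(long *vals, unsigned int size)
{
	unsigned int i;

	if (vals == NULL || size < NUM_STATS)
		return NUM_STATS; /* tell the caller what it needs */
	for (i = 0; i < NUM_STATS; i++)
		vals[i] = (long)i * 10; /* stand-in stat values */
	return NUM_STATS;
}
```

An application then queries twice: once with a NULL array to learn the count, then again with an array of at least that size — the usual pattern for rte_eth_xstats_get() callers.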
* Re: [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats 2023-02-20 22:57 ` Ferruh Yigit @ 2023-02-21 11:11 ` Levend Sayar 0 siblings, 0 replies; 26+ messages in thread From: Levend Sayar @ 2023-02-21 11:11 UTC (permalink / raw) To: Ferruh Yigit; +Cc: Guo, Junfeng, dev Thanks Ferruh for the review. My comments are inlined. > On 21 Feb 2023, at 01:57, Ferruh Yigit <ferruh.yigit@amd.com> wrote: > > On 2/20/2023 9:11 PM, Levend Sayar wrote: >> Google Virtual NIC rx/tx queue stats are added as extended stats. >> >> Signed-off-by: Levend Sayar <levendsayar@gmail.com> > > Thank you for the update, mainly looks good to me, there are a few > minor questions/comments below. > > <...> >> +static const struct gve_xstats_name_offset tx_xstats_name_offset[] = { >> + { "packets", offsetof(struct gve_tx_stats, packets) }, >> + { "bytes", offsetof(struct gve_tx_stats, bytes) }, >> + { "errors", offsetof(struct gve_tx_stats, errors) }, >> +}; >> + > > It is possible to create macros to get offsets to prevent any mistakes > but not quite sure if it is needed with above limited number of items, > so up to you, I mean something like: > > #define RX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_rx_stats, x) > #define TX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_tx_stats, x) > LS: I took the dpdk code as reference and mimicked the usage in rte_ethdev.c here. Your proposal is surely applicable. >> +static const struct gve_xstats_name_offset rx_xstats_name_offset[] = { >> + { "packets", offsetof(struct gve_rx_stats, packets) }, >> + { "bytes", offsetof(struct gve_rx_stats, bytes) }, >> + { "errors", offsetof(struct gve_rx_stats, errors) }, >> +}; >> + > > Is 'no_mbufs' omitted intentionally? > > <...> LS: no_mbufs is accumulated as rx_mbuf_allocation_errors in the basic stats. But yes, it can be queue based also.
>> >> +static int >> +gve_xstats_count(struct rte_eth_dev *dev) >> +{ >> + uint16_t i, count = 0; >> + >> + for (i = 0; i < dev->data->nb_tx_queues; i++) { >> + if (dev->data->tx_queues[i]) > > Normally 'dev->data->tx_queues[i]' shouldn't be NULL, but I can see > driver checks this in a few locations. > > Is there a case that 'dev->data->tx_queues[i]' can be NULL where "0 <= i > < dev->data->nb_tx_queues" ? LS: You’re right. In my previous patches I erased those checks, but those parts were reviewed as noise, so I am aligned with the rest of the code. In fact, there is no gap like that. In dev_start, queues are created, and whenever a queue cannot be created, an error is returned at that point, so dev_start will return an error at the end. So if the device started successfully, all rx/tx queues must have been created successfully; therefore there is no need to check. > >> + count += RTE_DIM(tx_xstats_name_offset); >> + } >> + >> + for (i = 0; i < dev->data->nb_rx_queues; i++) { >> + if (dev->data->rx_queues[i]) >> + count += RTE_DIM(rx_xstats_name_offset); >> + } >> + >> + return count; >> +} >> + >> +static int >> +gve_xstats_get(struct rte_eth_dev *dev, >> + struct rte_eth_xstat *xstats, >> + unsigned int size) >> +{ >> + uint16_t i, count = 0; >> + >> + if (!xstats) >> + return (size == 0) ? gve_xstats_count(dev) : -EINVAL; > > Error case (xstats == NULL && size != 0) should be handled by ethdev, so > the driver can only check the "xstats == NULL" case. > > btw, although we have mixed usage and are not very strict on it, coding > convention requires testing against NULL [1] instead of '!'. > > [1] > https://doc.dpdk.org/guides/contributing/coding_style.html#null-pointers LS: You’re right. rte_eth_xstats_get checks for (xstats == NULL && size != 0). Let me correct that part and use NULL.
> >> + >> + for (i = 0; i < dev->data->nb_tx_queues; i++) { >> + const struct gve_tx_queue *txq = dev->data->tx_queues[i]; >> + if (!txq) >> + continue; >> + > > similar to above comment, I expect 'txq' not to be NULL, isn't this > correct for the driver? > >> + if (count >= size) >> + break; >> + > > Instead of above check in a loop, it is possible to check once at the > beginning of the function like > > count = gve_xstats_count(dev) > if (size < count) > return count; > > Because when there is not enough room in the xstats array, the API doesn't > expect existing elements of the array to be correct; returning a value > bigger than 'size' indicates the error case, and the content of xstats is invalid > anyway. LS: Let’s say the NIC has 20 xstats. If your application allocates xstats memory enough to hold 30 and requests 30 xstats, the NIC will return 20 xstats. That’s ok. But if the application allocates memory for 10 xstats and requests 10, it’s taken as an error although the NIC could fill the first 10 xstats. So size must be higher than or equal to the number of xstats. Right? > >> + uint16_t j = 0; >> + const char *stats = (const char *)&txq->stats; > > Can you please move variable declarations to the beginning of the scope? > For the above variables it is just after the for statement, according to the coding > convention. LS: Sure, I can do that. I supposed minimizing scope was a better approach, but not here I guess. What is the rationale behind not declaring a variable in a for loop like for (int i =0 ...? That was quite a surprise for me when I got an error from checkpatches.sh.
> >> + for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) { >> + xstats[count].id = count; >> + xstats[count].value = *(const uint64_t *) >> + (stats + tx_xstats_name_offset[j].offset); >> + } >> + } >> + >> + for (i = 0; i < dev->data->nb_rx_queues; i++) { >> + const struct gve_rx_queue *rxq = dev->data->rx_queues[i]; >> + if (!rxq) >> + continue; >> + >> + if (count >= size) >> + break; >> + >> + uint16_t j = 0; >> + const char *stats = (const char *)&rxq->stats; >> + for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) { >> + xstats[count].id = count; >> + xstats[count].value = *(const uint64_t *) >> + (stats + rx_xstats_name_offset[j].offset); >> + } >> + } >> + >> + return count; >> +} >> + >> +static int >> +gve_xstats_get_names(struct rte_eth_dev *dev, >> + struct rte_eth_xstat_name *xstats_names, >> + unsigned int size) >> +{ >> + uint16_t i, count = 0; >> + >> + if (!xstats_names) >> + return (size == 0) ? gve_xstats_count(dev) : -EINVAL; >> + >> + for (i = 0; i < dev->data->nb_tx_queues; i++) { >> + if (!dev->data->tx_queues[i]) >> + continue; >> + >> + if (count >= size) >> + break; >> + >> + uint16_t j = 0; >> + for (; j < RTE_DIM(tx_xstats_name_offset); j++) >> + snprintf(xstats_names[count++].name, >> + RTE_ETH_XSTATS_NAME_SIZE, >> + "tx_q%u_%s", i, tx_xstats_name_offset[j].name); >> + } >> + >> + for (i = 0; i < dev->data->nb_rx_queues; i++) { >> + if (!dev->data->rx_queues[i]) >> + continue; >> + >> + if (count >= size) >> + break; >> + >> + uint16_t j = 0; >> + for (; j < RTE_DIM(rx_xstats_name_offset); j++) >> + snprintf(xstats_names[count++].name, >> + RTE_ETH_XSTATS_NAME_SIZE, >> + "rx_q%u_%s", i, rx_xstats_name_offset[j].name); >> + } >> + >> + return count; >> +} >> + > > comments on 'gve_xstats_get()' valid here too. > ^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH v4] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-20 21:11 ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar
  2023-02-20 22:57   ` Ferruh Yigit
@ 2023-02-21 14:13   ` Levend Sayar
  2023-02-21 14:18     ` [PATCH v5] " Levend Sayar
  1 sibling, 1 reply; 26+ messages in thread
From: Levend Sayar @ 2023-02-21 14:13 UTC (permalink / raw)
To: junfeng.guo; +Cc: dev, Levend Sayar

Google Virtual NIC rx/tx queue stats are added as extended stats.

Signed-off-by: Levend Sayar <levendsayar@gmail.com>
---
 drivers/net/gve/gve_ethdev.c | 137 +++++++++++++++++++++++++++++++----
 drivers/net/gve/gve_ethdev.h |  28 +++++--
 drivers/net/gve/gve_rx.c     |  12 +--
 drivers/net/gve/gve_tx.c     |  11 +--
 4 files changed, 157 insertions(+), 31 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index fef2458a16..a95bb041ea 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -6,9 +6,26 @@
 #include "base/gve_adminq.h"
 #include "base/gve_register.h"
 
+#define TX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_tx_stats, x)
+#define RX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_rx_stats, x)
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
 
+static const struct gve_xstats_name_offset tx_xstats_name_offset[] = {
+	{ "packets", TX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes", TX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors", TX_QUEUE_STATS_OFFSET(errors) },
+};
+
+static const struct gve_xstats_name_offset rx_xstats_name_offset[] = {
+	{ "packets", RX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes", RX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors", RX_QUEUE_STATS_OFFSET(errors) },
+	{ "mbuf_alloc_errors", RX_QUEUE_STATS_OFFSET(no_mbufs) },
+	{ "mbuf_alloc_errors_bulk", RX_QUEUE_STATS_OFFSET(no_mbufs_bulk) },
+};
+
 static void
 gve_write_version(uint8_t *driver_version_register)
 {
@@ -328,9 +345,9 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (txq == NULL)
 			continue;
 
-		stats->opackets += txq->packets;
-		stats->obytes += txq->bytes;
-		stats->oerrors += txq->errors;
+		stats->opackets += txq->stats.packets;
+		stats->obytes += txq->stats.bytes;
+		stats->oerrors += txq->stats.errors;
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -338,10 +355,10 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (rxq == NULL)
 			continue;
 
-		stats->ipackets += rxq->packets;
-		stats->ibytes += rxq->bytes;
-		stats->ierrors += rxq->errors;
-		stats->rx_nombuf += rxq->no_mbufs;
+		stats->ipackets += rxq->stats.packets;
+		stats->ibytes += rxq->stats.bytes;
+		stats->ierrors += rxq->stats.errors;
+		stats->rx_nombuf += rxq->stats.no_mbufs;
 	}
 
 	return 0;
@@ -357,9 +374,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (txq == NULL)
 			continue;
 
-		txq->packets = 0;
-		txq->bytes = 0;
-		txq->errors = 0;
+		memset(&txq->stats, 0, sizeof(txq->stats));
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -367,10 +382,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 
-		rxq->packets = 0;
-		rxq->bytes = 0;
-		rxq->errors = 0;
-		rxq->no_mbufs = 0;
+		memset(&rxq->stats, 0, sizeof(rxq->stats));
 	}
 
 	return 0;
@@ -403,6 +415,101 @@ gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	return 0;
 }
 
+static int
+gve_xstats_count(struct rte_eth_dev *dev)
+{
+	uint16_t i, count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i])
+			count += RTE_DIM(tx_xstats_name_offset);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i])
+			count += RTE_DIM(rx_xstats_name_offset);
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get(struct rte_eth_dev *dev,
+		struct rte_eth_xstat *xstats,
+		unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+	const char *stats;
+
+	if ((xstats == NULL) || (size < count))
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		stats = (const char *)&txq->stats;
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + tx_xstats_name_offset[j].offset);
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		const struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		stats = (const char *)&rxq->stats;
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + rx_xstats_name_offset[j].offset);
+		}
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get_names(struct rte_eth_dev *dev,
+		struct rte_eth_xstat_name *xstats_names,
+		unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+
+	if ((xstats_names == NULL) || (size < count))
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				RTE_ETH_XSTATS_NAME_SIZE,
+				"tx_q%u_%s", i, tx_xstats_name_offset[j].name);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				RTE_ETH_XSTATS_NAME_SIZE,
+				"rx_q%u_%s", i, rx_xstats_name_offset[j].name);
+	}
+
+	return count;
+}
+
 static const struct eth_dev_ops gve_eth_dev_ops = {
 	.dev_configure = gve_dev_configure,
 	.dev_start = gve_dev_start,
@@ -417,6 +524,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = {
 	.stats_get = gve_dev_stats_get,
 	.stats_reset = gve_dev_stats_reset,
 	.mtu_set = gve_dev_mtu_set,
+	.xstats_get = gve_xstats_get,
+	.xstats_get_names = gve_xstats_get_names,
 };
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 64e571bcae..42a02cf5d4 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -67,6 +67,25 @@ struct gve_tx_iovec {
 	uint32_t iov_len;
 };
 
+struct gve_tx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+struct gve_rx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+	uint64_t no_mbufs;
+	uint64_t no_mbufs_bulk;
+};
+
+struct gve_xstats_name_offset {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
 struct gve_tx_queue {
 	volatile union gve_tx_desc *tx_desc_ring;
 	const struct rte_memzone *mz;
@@ -93,9 +112,7 @@ struct gve_tx_queue {
 	struct gve_tx_iovec *iov_ring;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
+	struct gve_tx_stats stats;
 
 	uint16_t port_id;
 	uint16_t queue_id;
@@ -136,10 +153,7 @@ struct gve_rx_queue {
 	struct gve_queue_page_list *qpl;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
-	uint64_t no_mbufs;
+	struct gve_rx_stats stats;
 
 	struct gve_priv *hw;
 	const struct rte_memzone *qres_mz;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index d346efa57c..8d8f94efff 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 	if (nb_alloc <= rxq->nb_avail) {
 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 		if (diag < 0) {
+			rxq->stats.no_mbufs_bulk++;
 			for (i = 0; i < nb_alloc; i++) {
 				nmb = rte_pktmbuf_alloc(rxq->mpool);
 				if (!nmb)
@@ -27,7 +28,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 				rxq->sw_ring[idx + i] = nmb;
 			}
 			if (i != nb_alloc) {
-				rxq->no_mbufs += nb_alloc - i;
+				rxq->stats.no_mbufs += nb_alloc - i;
 				nb_alloc = i;
 			}
 		}
@@ -55,6 +56,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 			nb_alloc = rxq->nb_rx_desc - idx;
 			diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 			if (diag < 0) {
+				rxq->stats.no_mbufs_bulk++;
 				for (i = 0; i < nb_alloc; i++) {
 					nmb = rte_pktmbuf_alloc(rxq->mpool);
 					if (!nmb)
@@ -62,7 +64,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 					rxq->sw_ring[idx + i] = nmb;
 				}
 				if (i != nb_alloc) {
-					rxq->no_mbufs += nb_alloc - i;
+					rxq->stats.no_mbufs += nb_alloc - i;
 					nb_alloc = i;
 				}
 			}
@@ -106,7 +108,7 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 
 		if (rxd->flags_seq & GVE_RXF_ERR) {
-			rxq->errors++;
+			rxq->stats.errors++;
 			continue;
 		}
@@ -154,8 +156,8 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	gve_rx_refill(rxq);
 
 	if (nb_rx) {
-		rxq->packets += nb_rx;
-		rxq->bytes += bytes;
+		rxq->stats.packets += nb_rx;
+		rxq->stats.bytes += bytes;
 	}
 
 	return nb_rx;
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 9b41c59358..fee3b939c7 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -366,9 +366,9 @@ gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txq->tx_tail = tx_tail;
 		txq->sw_tail = sw_id;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
-		txq->errors += nb_pkts - nb_tx;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
@@ -455,8 +455,9 @@ gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail);
 		txq->tx_tail = tx_tail;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
-- 
2.37.1 (Apple Git-137.1)

^ permalink raw reply	[flat|nested] 26+ messages in thread
* [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-21 14:13 ` [PATCH v4] " Levend Sayar
@ 2023-02-21 14:18   ` Levend Sayar
  2023-02-21 15:58     ` Ferruh Yigit
  0 siblings, 1 reply; 26+ messages in thread
From: Levend Sayar @ 2023-02-21 14:18 UTC (permalink / raw)
To: junfeng.guo; +Cc: dev, Levend Sayar

Google Virtual NIC rx/tx queue stats are added as extended stats.

Signed-off-by: Levend Sayar <levendsayar@gmail.com>
---
 drivers/net/gve/gve_ethdev.c | 137 +++++++++++++++++++++++++++++++----
 drivers/net/gve/gve_ethdev.h |  28 +++++--
 drivers/net/gve/gve_rx.c     |  12 +--
 drivers/net/gve/gve_tx.c     |  11 +--
 4 files changed, 157 insertions(+), 31 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index fef2458a16..21f0c0fca2 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -6,9 +6,26 @@
 #include "base/gve_adminq.h"
 #include "base/gve_register.h"
 
+#define TX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_tx_stats, x)
+#define RX_QUEUE_STATS_OFFSET(x) offsetof(struct gve_rx_stats, x)
+
 const char gve_version_str[] = GVE_VERSION;
 static const char gve_version_prefix[] = GVE_VERSION_PREFIX;
 
+static const struct gve_xstats_name_offset tx_xstats_name_offset[] = {
+	{ "packets", TX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes", TX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors", TX_QUEUE_STATS_OFFSET(errors) },
+};
+
+static const struct gve_xstats_name_offset rx_xstats_name_offset[] = {
+	{ "packets", RX_QUEUE_STATS_OFFSET(packets) },
+	{ "bytes", RX_QUEUE_STATS_OFFSET(bytes) },
+	{ "errors", RX_QUEUE_STATS_OFFSET(errors) },
+	{ "mbuf_alloc_errors", RX_QUEUE_STATS_OFFSET(no_mbufs) },
+	{ "mbuf_alloc_errors_bulk", RX_QUEUE_STATS_OFFSET(no_mbufs_bulk) },
+};
+
 static void
 gve_write_version(uint8_t *driver_version_register)
 {
@@ -328,9 +345,9 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (txq == NULL)
 			continue;
 
-		stats->opackets += txq->packets;
-		stats->obytes += txq->bytes;
-		stats->oerrors += txq->errors;
+		stats->opackets += txq->stats.packets;
+		stats->obytes += txq->stats.bytes;
+		stats->oerrors += txq->stats.errors;
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -338,10 +355,10 @@ gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		if (rxq == NULL)
 			continue;
 
-		stats->ipackets += rxq->packets;
-		stats->ibytes += rxq->bytes;
-		stats->ierrors += rxq->errors;
-		stats->rx_nombuf += rxq->no_mbufs;
+		stats->ipackets += rxq->stats.packets;
+		stats->ibytes += rxq->stats.bytes;
+		stats->ierrors += rxq->stats.errors;
+		stats->rx_nombuf += rxq->stats.no_mbufs;
 	}
 
 	return 0;
@@ -357,9 +374,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (txq == NULL)
 			continue;
 
-		txq->packets = 0;
-		txq->bytes = 0;
-		txq->errors = 0;
+		memset(&txq->stats, 0, sizeof(txq->stats));
 	}
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -367,10 +382,7 @@ gve_dev_stats_reset(struct rte_eth_dev *dev)
 		if (rxq == NULL)
 			continue;
 
-		rxq->packets = 0;
-		rxq->bytes = 0;
-		rxq->errors = 0;
-		rxq->no_mbufs = 0;
+		memset(&rxq->stats, 0, sizeof(rxq->stats));
 	}
 
 	return 0;
@@ -403,6 +415,101 @@ gve_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	return 0;
 }
 
+static int
+gve_xstats_count(struct rte_eth_dev *dev)
+{
+	uint16_t i, count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i])
+			count += RTE_DIM(tx_xstats_name_offset);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i])
+			count += RTE_DIM(rx_xstats_name_offset);
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get(struct rte_eth_dev *dev,
+		struct rte_eth_xstat *xstats,
+		unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+	const char *stats;
+
+	if (xstats == NULL || size < count)
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		const struct gve_tx_queue *txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		stats = (const char *)&txq->stats;
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + tx_xstats_name_offset[j].offset);
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		const struct gve_rx_queue *rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		stats = (const char *)&rxq->stats;
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++, count++) {
+			xstats[count].id = count;
+			xstats[count].value = *(const uint64_t *)
+				(stats + rx_xstats_name_offset[j].offset);
+		}
+	}
+
+	return count;
+}
+
+static int
+gve_xstats_get_names(struct rte_eth_dev *dev,
+		struct rte_eth_xstat_name *xstats_names,
+		unsigned int size)
+{
+	uint16_t i, j, count = gve_xstats_count(dev);
+
+	if (xstats_names == NULL || size < count)
+		return count;
+
+	count = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		if (dev->data->tx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(tx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				RTE_ETH_XSTATS_NAME_SIZE,
+				"tx_q%u_%s", i, tx_xstats_name_offset[j].name);
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (dev->data->rx_queues[i] == NULL)
+			continue;
+
+		for (j = 0; j < RTE_DIM(rx_xstats_name_offset); j++)
+			snprintf(xstats_names[count++].name,
+				RTE_ETH_XSTATS_NAME_SIZE,
+				"rx_q%u_%s", i, rx_xstats_name_offset[j].name);
+	}
+
+	return count;
+}
+
 static const struct eth_dev_ops gve_eth_dev_ops = {
 	.dev_configure = gve_dev_configure,
 	.dev_start = gve_dev_start,
@@ -417,6 +524,8 @@ static const struct eth_dev_ops gve_eth_dev_ops = {
 	.stats_get = gve_dev_stats_get,
 	.stats_reset = gve_dev_stats_reset,
 	.mtu_set = gve_dev_mtu_set,
+	.xstats_get = gve_xstats_get,
+	.xstats_get_names = gve_xstats_get_names,
 };
 
 static void
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 64e571bcae..42a02cf5d4 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -67,6 +67,25 @@ struct gve_tx_iovec {
 	uint32_t iov_len;
 };
 
+struct gve_tx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+};
+
+struct gve_rx_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t errors;
+	uint64_t no_mbufs;
+	uint64_t no_mbufs_bulk;
+};
+
+struct gve_xstats_name_offset {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
 struct gve_tx_queue {
 	volatile union gve_tx_desc *tx_desc_ring;
 	const struct rte_memzone *mz;
@@ -93,9 +112,7 @@ struct gve_tx_queue {
 	struct gve_tx_iovec *iov_ring;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
+	struct gve_tx_stats stats;
 
 	uint16_t port_id;
 	uint16_t queue_id;
@@ -136,10 +153,7 @@ struct gve_rx_queue {
 	struct gve_queue_page_list *qpl;
 
 	/* stats items */
-	uint64_t packets;
-	uint64_t bytes;
-	uint64_t errors;
-	uint64_t no_mbufs;
+	struct gve_rx_stats stats;
 
 	struct gve_priv *hw;
 	const struct rte_memzone *qres_mz;
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index d346efa57c..8d8f94efff 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 	if (nb_alloc <= rxq->nb_avail) {
 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 		if (diag < 0) {
+			rxq->stats.no_mbufs_bulk++;
 			for (i = 0; i < nb_alloc; i++) {
 				nmb = rte_pktmbuf_alloc(rxq->mpool);
 				if (!nmb)
@@ -27,7 +28,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 				rxq->sw_ring[idx + i] = nmb;
 			}
 			if (i != nb_alloc) {
-				rxq->no_mbufs += nb_alloc - i;
+				rxq->stats.no_mbufs += nb_alloc - i;
 				nb_alloc = i;
 			}
 		}
@@ -55,6 +56,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 			nb_alloc = rxq->nb_rx_desc - idx;
 			diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
 			if (diag < 0) {
+				rxq->stats.no_mbufs_bulk++;
 				for (i = 0; i < nb_alloc; i++) {
 					nmb = rte_pktmbuf_alloc(rxq->mpool);
 					if (!nmb)
@@ -62,7 +64,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
 					rxq->sw_ring[idx + i] = nmb;
 				}
 				if (i != nb_alloc) {
-					rxq->no_mbufs += nb_alloc - i;
+					rxq->stats.no_mbufs += nb_alloc - i;
 					nb_alloc = i;
 				}
 			}
@@ -106,7 +108,7 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			break;
 
 		if (rxd->flags_seq & GVE_RXF_ERR) {
-			rxq->errors++;
+			rxq->stats.errors++;
 			continue;
 		}
@@ -154,8 +156,8 @@ gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	gve_rx_refill(rxq);
 
 	if (nb_rx) {
-		rxq->packets += nb_rx;
-		rxq->bytes += bytes;
+		rxq->stats.packets += nb_rx;
+		rxq->stats.bytes += bytes;
 	}
 
 	return nb_rx;
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 9b41c59358..fee3b939c7 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -366,9 +366,9 @@ gve_tx_burst_qpl(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txq->tx_tail = tx_tail;
 		txq->sw_tail = sw_id;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
-		txq->errors += nb_pkts - nb_tx;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
@@ -455,8 +455,9 @@ gve_tx_burst_ra(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		rte_write32(rte_cpu_to_be_32(tx_tail), txq->qtx_tail);
 		txq->tx_tail = tx_tail;
 
-		txq->packets += nb_tx;
-		txq->bytes += bytes;
+		txq->stats.packets += nb_tx;
+		txq->stats.bytes += bytes;
+		txq->stats.errors += nb_pkts - nb_tx;
 	}
 
 	return nb_tx;
-- 
2.37.1 (Apple Git-137.1)

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-21 14:18 ` [PATCH v5] " Levend Sayar
@ 2023-02-21 15:58   ` Ferruh Yigit
  2023-02-21 16:44     ` Levend Sayar
  0 siblings, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-21 15:58 UTC (permalink / raw)
To: Levend Sayar, junfeng.guo; +Cc: dev

On 2/21/2023 2:18 PM, Levend Sayar wrote:
> Google Virtual NIC rx/tx queue stats are added as extended stats.
>
> Signed-off-by: Levend Sayar <levendsayar@gmail.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

<...>

> @@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
> 	if (nb_alloc <= rxq->nb_avail) {
> 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
> 		if (diag < 0) {
> +			rxq->stats.no_mbufs_bulk++;

It is not common to record bulk alloc failures, but as 'no_mbufs'
already recorded conventionally, I guess it is OK to keep this extra
stat if it is helpful.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-21 15:58 ` Ferruh Yigit
@ 2023-02-21 16:44   ` Levend Sayar
  2023-02-23  2:49     ` Guo, Junfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Levend Sayar @ 2023-02-21 16:44 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Guo, Junfeng, dev

Thanks Ferruh for the review.

> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/21/2023 2:18 PM, Levend Sayar wrote:
>> Google Virtual NIC rx/tx queue stats are added as extended stats.
>>
>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>
> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>
> <...>
>
>> @@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>> 	if (nb_alloc <= rxq->nb_avail) {
>> 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
>> 		if (diag < 0) {
>> +			rxq->stats.no_mbufs_bulk++;
>
> It is not common to record bulk alloc failures, but as 'no_mbufs'
> already recorded conventionally, I guess it is OK to keep this extra
> stat if it is helpful.
>

^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-21 16:44 ` Levend Sayar
@ 2023-02-23  2:49   ` Guo, Junfeng
  2023-02-23  6:28     ` Levend Sayar
  2023-02-23 11:09     ` Ferruh Yigit
  1 sibling, 2 replies; 26+ messages in thread
From: Guo, Junfeng @ 2023-02-23  2:49 UTC (permalink / raw)
To: Levend Sayar, Ferruh Yigit; +Cc: dev

Thanks!

> -----Original Message-----
> From: Levend Sayar <levendsayar@gmail.com>
> Sent: Wednesday, February 22, 2023 00:44
> To: Ferruh Yigit <ferruh.yigit@amd.com>
> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
>
> Thanks Ferruh for the review.
>
>> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>
>> On 2/21/2023 2:18 PM, Levend Sayar wrote:
>>> Google Virtual NIC rx/tx queue stats are added as extended stats.
>>>
>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>
>> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>

Acked-by: Junfeng Guo <junfeng.guo@intel.com>

>> <...>
>>
>>> @@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>>> 	if (nb_alloc <= rxq->nb_avail) {
>>> 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
>>> 		if (diag < 0) {
>>> +			rxq->stats.no_mbufs_bulk++;
>>
>> It is not common to record bulk alloc failures, but as 'no_mbufs'
>> already recorded conventionally, I guess it is OK to keep this extra
>> stat if it is helpful.
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-23  2:49 ` Guo, Junfeng
@ 2023-02-23  6:28   ` Levend Sayar
  0 siblings, 0 replies; 26+ messages in thread
From: Levend Sayar @ 2023-02-23  6:28 UTC (permalink / raw)
To: Guo, Junfeng; +Cc: Ferruh Yigit, dev

Thanks Junfeng for acknowledging.

> On 23 Feb 2023, at 05:49, Guo, Junfeng <junfeng.guo@intel.com> wrote:
>
> Thanks!
>
>> -----Original Message-----
>> From: Levend Sayar <levendsayar@gmail.com>
>> Sent: Wednesday, February 22, 2023 00:44
>> To: Ferruh Yigit <ferruh.yigit@amd.com>
>> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org
>> Subject: Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
>>
>> Thanks Ferruh for the review.
>>
>>> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>>
>>> On 2/21/2023 2:18 PM, Levend Sayar wrote:
>>>> Google Virtual NIC rx/tx queue stats are added as extended stats.
>>>>
>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>
>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>>
>
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>
>>> <...>
>>>
>>>> @@ -20,6 +20,7 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>>>> 	if (nb_alloc <= rxq->nb_avail) {
>>>> 		diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[idx], nb_alloc);
>>>> 		if (diag < 0) {
>>>> +			rxq->stats.no_mbufs_bulk++;
>>>
>>> It is not common to record bulk alloc failures, but as 'no_mbufs'
>>> already recorded conventionally, I guess it is OK to keep this extra
>>> stat if it is helpful.
>>>

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-23  2:49 ` Guo, Junfeng
  2023-02-23  6:28   ` Levend Sayar
@ 2023-02-23 11:09   ` Ferruh Yigit
  2023-02-23 12:30     ` Levend Sayar
  1 sibling, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-23 11:09 UTC (permalink / raw)
To: Guo, Junfeng, Levend Sayar; +Cc: dev

On 2/23/2023 2:49 AM, Guo, Junfeng wrote:

>> -----Original Message-----
>> From: Levend Sayar <levendsayar@gmail.com>
>> Sent: Wednesday, February 22, 2023 00:44
>> To: Ferruh Yigit <ferruh.yigit@amd.com>
>> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org
>> Subject: Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
>>
>> Thanks Ferruh for the review.
>>
>>> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>>
>>> On 2/21/2023 2:18 PM, Levend Sayar wrote:
>>>> Google Virtual NIC rx/tx queue stats are added as extended stats.
>>>>
>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>
>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>>
>
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>

while merging 'gve_xstats_name_offset' structs moved down to group them
with xstats dev_ops.

Applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
  2023-02-23 11:09 ` Ferruh Yigit
@ 2023-02-23 12:30   ` Levend Sayar
  0 siblings, 0 replies; 26+ messages in thread
From: Levend Sayar @ 2023-02-23 12:30 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Guo, Junfeng, dev

Thanks Ferruh for applying.

> On 23 Feb 2023, at 14:09, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/23/2023 2:49 AM, Guo, Junfeng wrote:
>
>>> -----Original Message-----
>>> From: Levend Sayar <levendsayar@gmail.com>
>>> Sent: Wednesday, February 22, 2023 00:44
>>> To: Ferruh Yigit <ferruh.yigit@amd.com>
>>> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org
>>> Subject: Re: [PATCH v5] net/gve: add Rx/Tx queue stats as extended stats
>>>
>>> Thanks Ferruh for the review.
>>>
>>>> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>>>
>>>> On 2/21/2023 2:18 PM, Levend Sayar wrote:
>>>>> Google Virtual NIC rx/tx queue stats are added as extended stats.
>>>>>
>>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>>
>>>> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>>>
>>
>> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>>
>
> while merging 'gve_xstats_name_offset' structs moved down to group them
> with xstats dev_ops.
>
> Applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-20 21:11 ` [PATCH v3 1/2] " Levend Sayar
  2023-02-20 21:11   ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar
@ 2023-02-20 22:57   ` Ferruh Yigit
  2023-02-21 10:07     ` Levend Sayar
  2023-02-21 15:58   ` Ferruh Yigit
  2 siblings, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-20 22:57 UTC (permalink / raw)
To: Levend Sayar, junfeng.guo; +Cc: dev, Stephen Hemminger

On 2/20/2023 9:11 PM, Levend Sayar wrote:
> rx no_mbufs stats counter update is added for another error case.
>
> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
> Cc: junfeng.guo@intel.com
>
> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
> ---
>  drivers/net/gve/gve_rx.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
> index 66fbcf3930..d346efa57c 100644
> --- a/drivers/net/gve/gve_rx.c
> +++ b/drivers/net/gve/gve_rx.c
> @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq)
> 					break;
> 				rxq->sw_ring[idx + i] = nmb;
> 			}
> -			nb_alloc = i;
> +			if (i != nb_alloc) {
> +				rxq->no_mbufs += nb_alloc - i;
> +				nb_alloc = i;
> +			}
> 		}
> 		rxq->nb_avail -= nb_alloc;
> 		next_avail += nb_alloc;

Looks good to me,
there was a comment from Stephen to add 'unlikely()', is that issue
resolved?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-20 22:57 ` [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update Ferruh Yigit
@ 2023-02-21 10:07   ` Levend Sayar
  2023-02-21 10:30     ` Ferruh Yigit
  0 siblings, 1 reply; 26+ messages in thread
From: Levend Sayar @ 2023-02-21 10:07 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Guo, Junfeng, dev, Stephen Hemminger

Not only this if, there can be many places to add such branch prediction helpers
on the gve pmd code.

I preferred to patch only the bug here and not used unlikely to minimize noise.

Imho, adding likely/unlikely to all gve pmd code can be topic of another patch maybe.

Levend

> On 21 Feb 2023, at 01:57, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>> rx no_mbufs stats counter update is added for another error case.
>>
>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>> Cc: junfeng.guo@intel.com
>>
>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>> ---
>> drivers/net/gve/gve_rx.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
>> index 66fbcf3930..d346efa57c 100644
>> --- a/drivers/net/gve/gve_rx.c
>> +++ b/drivers/net/gve/gve_rx.c
>> @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>> 					break;
>> 				rxq->sw_ring[idx + i] = nmb;
>> 			}
>> -			nb_alloc = i;
>> +			if (i != nb_alloc) {
>> +				rxq->no_mbufs += nb_alloc - i;
>> +				nb_alloc = i;
>> +			}
>> 		}
>> 		rxq->nb_avail -= nb_alloc;
>> 		next_avail += nb_alloc;
>
> Looks good to me,
> there was a comment from Stephen to add 'unlikely()', is that issue
> resolved?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-21 10:07 ` Levend Sayar
@ 2023-02-21 10:30   ` Ferruh Yigit
  2023-02-23  4:34     ` Guo, Junfeng
  0 siblings, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-21 10:30 UTC (permalink / raw)
To: Levend Sayar; +Cc: Guo, Junfeng, dev, Stephen Hemminger

On 2/21/2023 10:07 AM, Levend Sayar wrote:
> Not only this if, there can be many places to add such branch prediction helpers
> on the gve pmd code.
>
> I preferred to patch only the bug here and not used unlikely to minimize noise.
>
> Imho, adding likely/unlikely to all gve pmd code can be topic of another patch maybe.
>

ack, sounds reasonable to me

> Levend
>
>> On 21 Feb 2023, at 01:57, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>
>> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>>> rx no_mbufs stats counter update is added for another error case.
>>>
>>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>>> Cc: junfeng.guo@intel.com
>>>
>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>> ---
>>> drivers/net/gve/gve_rx.c | 5 ++++-
>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
>>> index 66fbcf3930..d346efa57c 100644
>>> --- a/drivers/net/gve/gve_rx.c
>>> +++ b/drivers/net/gve/gve_rx.c
>>> @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>>> 					break;
>>> 				rxq->sw_ring[idx + i] = nmb;
>>> 			}
>>> -			nb_alloc = i;
>>> +			if (i != nb_alloc) {
>>> +				rxq->no_mbufs += nb_alloc - i;
>>> +				nb_alloc = i;
>>> +			}
>>> 		}
>>> 		rxq->nb_avail -= nb_alloc;
>>> 		next_avail += nb_alloc;
>>
>> Looks good to me,
>> there was a comment from Stephen to add 'unlikely()', is that issue
>> resolved?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* RE: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-21 10:30 ` Ferruh Yigit
@ 2023-02-23  4:34   ` Guo, Junfeng
  2023-02-23  6:29     ` Levend Sayar
  2023-02-23 11:10     ` Ferruh Yigit
  1 sibling, 2 replies; 26+ messages in thread
From: Guo, Junfeng @ 2023-02-23  4:34 UTC (permalink / raw)
To: Ferruh Yigit, Levend Sayar; +Cc: dev, Stephen Hemminger

Acked-by: Junfeng Guo <junfeng.guo@intel.com>

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 21, 2023 18:31
> To: Levend Sayar <levendsayar@gmail.com>
> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org; Stephen
> Hemminger <stephen@networkplumber.org>
> Subject: Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
>
> On 2/21/2023 10:07 AM, Levend Sayar wrote:
>> Not only this if, there can be many places to add such branch prediction helpers
>> on the gve pmd code.
>>
>> I preferred to patch only the bug here and not used unlikely to minimize noise.
>>
>> Imho, adding likely/unlikely to all gve pmd code can be topic of another patch maybe.

Agreed.
Adding likely/unlikely is more related to the performance with compiler.
This can be an optimization for performance. Thanks!

>>
>
> ack, sounds reasonable to me
>
>> Levend
>>
>>> On 21 Feb 2023, at 01:57, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>>
>>> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>>>> rx no_mbufs stats counter update is added for another error case.
>>>>
>>>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>>>> Cc: junfeng.guo@intel.com
>>>>
>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>> ---
>>>> drivers/net/gve/gve_rx.c | 5 ++++-
>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
>>>> index 66fbcf3930..d346efa57c 100644
>>>> --- a/drivers/net/gve/gve_rx.c
>>>> +++ b/drivers/net/gve/gve_rx.c
>>>> @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>>>> 					break;
>>>> 				rxq->sw_ring[idx + i] = nmb;
>>>> 			}
>>>> -			nb_alloc = i;
>>>> +			if (i != nb_alloc) {
>>>> +				rxq->no_mbufs += nb_alloc - i;
>>>> +				nb_alloc = i;
>>>> +			}
>>>> 		}
>>>> 		rxq->nb_avail -= nb_alloc;
>>>> 		next_avail += nb_alloc;
>>>
>>> Looks good to me,
>>> there was a comment from Stephen to add 'unlikely()', is that issue
>>> resolved?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-23  4:34 ` Guo, Junfeng
@ 2023-02-23  6:29   ` Levend Sayar
  0 siblings, 0 replies; 26+ messages in thread
From: Levend Sayar @ 2023-02-23  6:29 UTC (permalink / raw)
To: Guo, Junfeng; +Cc: Ferruh Yigit, dev, Stephen Hemminger

Thanks Junfeng for acknowledging.

> On 23 Feb 2023, at 07:34, Guo, Junfeng <junfeng.guo@intel.com> wrote:
>
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 21, 2023 18:31
>> To: Levend Sayar <levendsayar@gmail.com>
>> Cc: Guo, Junfeng <junfeng.guo@intel.com>; dev@dpdk.org; Stephen
>> Hemminger <stephen@networkplumber.org>
>> Subject: Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
>>
>> On 2/21/2023 10:07 AM, Levend Sayar wrote:
>>> Not only this if, there can be many places to add such branch prediction helpers
>>> on the gve pmd code.
>>>
>>> I preferred to patch only the bug here and not used unlikely to minimize noise.
>>>
>>> Imho, adding likely/unlikely to all gve pmd code can be topic of another patch maybe.
>
> Agreed.
> Adding likely/unlikely is more related to the performance with compiler.
> This can be an optimization for performance. Thanks!
>
>>>
>>
>> ack, sounds reasonable to me
>>
>>> Levend
>>>
>>>> On 21 Feb 2023, at 01:57, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>>>>
>>>> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>>>>> rx no_mbufs stats counter update is added for another error case.
>>>>>
>>>>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>>>>> Cc: junfeng.guo@intel.com
>>>>>
>>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>>> ---
>>>>> drivers/net/gve/gve_rx.c | 5 ++++-
>>>>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
>>>>> index 66fbcf3930..d346efa57c 100644
>>>>> --- a/drivers/net/gve/gve_rx.c
>>>>> +++ b/drivers/net/gve/gve_rx.c
>>>>> @@ -61,7 +61,10 @@ gve_rx_refill(struct gve_rx_queue *rxq)
>>>>> 					break;
>>>>> 				rxq->sw_ring[idx + i] = nmb;
>>>>> 			}
>>>>> -			nb_alloc = i;
>>>>> +			if (i != nb_alloc) {
>>>>> +				rxq->no_mbufs += nb_alloc - i;
>>>>> +				nb_alloc = i;
>>>>> +			}
>>>>> 		}
>>>>> 		rxq->nb_avail -= nb_alloc;
>>>>> 		next_avail += nb_alloc;
>>>>
>>>> Looks good to me,
>>>> there was a comment from Stephen to add 'unlikely()', is that issue
>>>> resolved?

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-23  4:34       ` Guo, Junfeng
  2023-02-23  6:29         ` Levend Sayar
@ 2023-02-23 11:10         ` Ferruh Yigit
  2023-02-23 12:29           ` Levend Sayar
  1 sibling, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-23 11:10 UTC (permalink / raw)
To: Guo, Junfeng, Levend Sayar; +Cc: dev, Stephen Hemminger

On 2/23/2023 4:34 AM, Guo, Junfeng wrote:
>>>> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>>>>> rx no_mbufs stats counter update is added for another error case.
>>>>>
>>>>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>>>>> Cc: junfeng.guo@intel.com
>>>>>
>>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>>> ---
>
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

Applied to dpdk-next-net/main, thanks.
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-23 11:10         ` Ferruh Yigit
@ 2023-02-23 12:29           ` Levend Sayar
  0 siblings, 0 replies; 26+ messages in thread
From: Levend Sayar @ 2023-02-23 12:29 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Guo, Junfeng, dev, Stephen Hemminger

Thanks Ferruh for applying.

> On 23 Feb 2023, at 14:10, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/23/2023 4:34 AM, Guo, Junfeng wrote:
>>>>>> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>>>>>>> rx no_mbufs stats counter update is added for another error case.
>>>>>>>
>>>>>>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>>>>>>> Cc: junfeng.guo@intel.com
>>>>>>>
>>>>>>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>>>>>>> ---
>>
>> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
>>
>> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>
> Applied to dpdk-next-net/main, thanks.
>
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-20 21:11   ` [PATCH v3 1/2] " Levend Sayar
  2023-02-20 21:11     ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar
  2023-02-20 22:57     ` [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update Ferruh Yigit
@ 2023-02-21 15:58     ` Ferruh Yigit
  2023-02-21 16:42       ` Levend Sayar
  2 siblings, 1 reply; 26+ messages in thread
From: Ferruh Yigit @ 2023-02-21 15:58 UTC (permalink / raw)
To: Levend Sayar, junfeng.guo; +Cc: dev

On 2/20/2023 9:11 PM, Levend Sayar wrote:
> rx no_mbufs stats counter update is added for another error case.
>
> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
> Cc: junfeng.guo@intel.com
>
> Signed-off-by: Levend Sayar <levendsayar@gmail.com>

Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>

(No problem for this time, but for future contributions it is expected to send new versions as a whole patchset; otherwise it becomes too difficult for maintainers to cherry-pick various versions of individual patches in a set, especially at the peak time of a release. It is also harder to backtrace from patchwork/the list if someone wants to check the history.)
* Re: [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update
  2023-02-21 15:58     ` Ferruh Yigit
@ 2023-02-21 16:42       ` Levend Sayar
  0 siblings, 0 replies; 26+ messages in thread
From: Levend Sayar @ 2023-02-21 16:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Guo, Junfeng, dev

Thanks Ferruh for the heads-up.

It was a little confusing for me to decide what to do either. I will do as expected in the future. I did not want to overwrite any commit that was already acked, so I pushed new versions only for the ones needing recommits.

Levend

> On 21 Feb 2023, at 18:58, Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> On 2/20/2023 9:11 PM, Levend Sayar wrote:
>> rx no_mbufs stats counter update is added for another error case.
>>
>> Fixes: 4f6b1dd8240c ("net/gve: support basic statistics")
>> Cc: junfeng.guo@intel.com
>>
>> Signed-off-by: Levend Sayar <levendsayar@gmail.com>
>
> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
>
> (No problem for this time, but for future contributions, it is expected
> to have new versions as a whole patchset, otherwise that becomes too
> difficult to manage for maintainers to cherry pick various versions of
> individual patches in a set, specially on peak time of release.
> Also harder to backtrace from patchwork/list if someone wants the check
> the history.)
end of thread, other threads:[~2023-02-23 12:30 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
2023-02-19  0:30 [PATCH] net/gve: fix Rx no mbufs stats counter update Levend Sayar
2023-02-19 17:35 ` Stephen Hemminger
2023-02-19 20:43 ` Levend Sayar
2023-02-19 22:59 ` Stephen Hemminger
2023-02-20 15:19 ` [PATCH v2] " Levend Sayar
2023-02-20 21:11 ` [PATCH v3 1/2] " Levend Sayar
2023-02-20 21:11 ` [PATCH v3 2/2] net/gve: add Rx/Tx queue stats as extended stats Levend Sayar
2023-02-20 22:57 ` Ferruh Yigit
2023-02-21 11:11 ` Levend Sayar
2023-02-21 14:13 ` [PATCH v4] " Levend Sayar
2023-02-21 14:18 ` [PATCH v5] " Levend Sayar
2023-02-21 15:58 ` Ferruh Yigit
2023-02-21 16:44 ` Levend Sayar
2023-02-23  2:49 ` Guo, Junfeng
2023-02-23  6:28 ` Levend Sayar
2023-02-23 11:09 ` Ferruh Yigit
2023-02-23 12:30 ` Levend Sayar
2023-02-20 22:57 ` [PATCH v3 1/2] net/gve: fix Rx no mbufs stats counter update Ferruh Yigit
2023-02-21 10:07 ` Levend Sayar
2023-02-21 10:30 ` Ferruh Yigit
2023-02-23  4:34 ` Guo, Junfeng
2023-02-23  6:29 ` Levend Sayar
2023-02-23 11:10 ` Ferruh Yigit
2023-02-23 12:29 ` Levend Sayar
2023-02-21 15:58 ` Ferruh Yigit
2023-02-21 16:42 ` Levend Sayar