Hello,

As it turns out, this error actually propagates to the "total" stats as
well, which I assume are just calculated by adding TX-packets and
TX-dropped. Here are the full stats from the example that Rushil
mentioned:

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 2453802        RX-dropped: 0             RX-total: 2453802
  TX-packets: 34266881       TX-dropped: 447034        TX-total: 34713915
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 34713915       RX-dropped: 0             RX-total: 34713915
  TX-packets: 2453802        TX-dropped: 0             TX-total: 2453802
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 37167717       RX-dropped: 0             RX-total: 37167717
  TX-packets: 36720683       TX-dropped: 807630        TX-total: 37528313
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

It can be seen that the stats for the individual ports are internally
consistent, but the accumulated TX-dropped and TX-total do not match the
per-port stats: the per-port TX-dropped values sum to 447034, yet the
accumulated line shows 807630, and since TX-total is TX-packets plus
TX-dropped, the error carries into TX-total as well. I believe the
accumulated TX-total and RX-total should be equal.

On Mon, Dec 19, 2022 at 11:17 AM Rushil Gupta wrote:
> Hi all
>
> Josh just found out some inconsistencies in the Tx/Rx statistics sum
> for all ports. Not sure if we can screenshot here, but it goes like
> this:
>
> Tx-dropped for port0: 447034
> Tx-dropped for port1: 0
> Accumulated forward statistics for all ports: 807630
>
> Please note that this issue is only with Tx-dropped (not
> Tx-packets/Tx-total).
>
>
> On Wed, Dec 7, 2022 at 8:39 AM Stephen Hemminger wrote:
> >
> > On Wed, 7 Dec 2022 15:09:08 +0000
> > Ferruh Yigit wrote:
> >
> > > On 11/24/2022 7:33 AM, Junfeng Guo wrote:
> > > > Add support for dev_ops of stats_get and stats_reset.
> > > >
> > > > Queue stats update will be moved into xstats [1], but the basic stats
> > > > items may still be required. So we just keep the remaining ones and
> > > > will implement the queue stats via xstats in the coming release.
> > > >
> > > > [1]
> > > > https://elixir.bootlin.com/dpdk/v22.07/ \
> > > >     source/doc/guides/rel_notes/deprecation.rst#L118
> > > >
> > > > Signed-off-by: Xiaoyun Li
> > > > Signed-off-by: Junfeng Guo
> > >
> > > <...>
> > >
> > > > +static int
> > > > +gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > > > +{
> > > > +	uint16_t i;
> > > > +
> > > > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > > > +		struct gve_tx_queue *txq = dev->data->tx_queues[i];
> > > > +		if (txq == NULL)
> > > > +			continue;
> > > > +
> > > > +		stats->opackets += txq->packets;
> > > > +		stats->obytes += txq->bytes;
> > > > +		stats->oerrors += txq->errors;
> > >
> > > Hi Junfeng, Qi, Jingjing, Beilei,
> > >
> > > The above logic looks wrong to me, did you test it?
> > >
> > > If 'gve_dev_stats_get()' is called multiple times (without a stats reset
> > > in between), the same values will keep being added to the stats.
> > > Some hw based implementations do this, because reading the stats
> > > registers automatically resets them, but that shouldn't be the case
> > > for this driver.
> > >
> > > I expect it to be something like:
> > >
> > >   local_stats = 0
> > >   foreach queue
> > >       local_stats += queue->stats
> > >   stats = local_stats
> >
> > The zeroing of local_stats is unnecessary.
> > The only caller of the PMD stats_get is rte_eth_stats_get,
> > and it zeroes the stats structure before calling the PMD.
> >
> > int
> > rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats)
> > {
> > 	struct rte_eth_dev *dev;
> >
> > 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > 	dev = &rte_eth_devices[port_id];
> >
> > 	memset(stats, 0, sizeof(*stats));
> > 	...
> > 	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> > 	return eth_err(port_id, (*dev->dev_ops->stats_get)(dev, stats));
> >
> > If any PMD has an extra memset in its stats_get, that could be removed.

--
Joshua Washington | Software Engineer | joshwash@google.com | (414) 366-4423
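For reference, below is a minimal sketch of what the stats_get pattern
discussed above could look like in the gve driver. It relies on
rte_eth_stats_get() having already zeroed *stats before calling into the
PMD (Stephen's point), so no local accumulator or extra memset is needed.
The TX counter names are taken from the patch; the RX-side names
(gve_rx_queue, rxq->packets, rxq->bytes, rxq->errors) are assumptions
mirroring the TX side, not confirmed by this thread.

/*
 * Sketch only: sum per-queue counters into the caller-provided struct,
 * which rte_eth_stats_get() has already memset() to zero.  The queue
 * counters themselves are not reset here, so repeated calls return
 * consistent totals.
 */
static int
gve_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
	uint16_t i;

	for (i = 0; i < dev->data->nb_tx_queues; i++) {
		struct gve_tx_queue *txq = dev->data->tx_queues[i];
		if (txq == NULL)
			continue;

		stats->opackets += txq->packets;
		stats->obytes += txq->bytes;
		stats->oerrors += txq->errors;
	}

	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		struct gve_rx_queue *rxq = dev->data->rx_queues[i];
		if (rxq == NULL)
			continue;

		stats->ipackets += rxq->packets;	/* assumed field name */
		stats->ibytes += rxq->bytes;		/* assumed field name */
		stats->ierrors += rxq->errors;		/* assumed field name */
	}

	return 0;
}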