From: Ajit Khaparde <ajit.khaparde@broadcom.com>
To: Somnath Kotur <somnath.kotur@broadcom.com>
Cc: dpdk-dev <dev@dpdk.org>, Ferruh Yigit <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH 2/2] net/bnxt: workaround for spurious zero counter values in Thor
Date: Tue, 8 Jun 2021 10:55:21 -0700 [thread overview]
Message-ID: <CACZ4nhvO_jnEdgcHBgq3jeQejEt5if3_47Vtpq3mPN-FgCr=cw@mail.gmail.com> (raw)
In-Reply-To: <CACZ4nhsvZrA-a5__xDv9q58Zr4ws2MDVeYHuwsun_EnmhmVmhw@mail.gmail.com>
On Tue, Jun 8, 2021 at 10:47 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Wed, May 26, 2021 at 11:21 PM Somnath Kotur
> <somnath.kotur@broadcom.com> wrote:
> >
> > There is a HW bug that can result in certain stats being reported as
> > zero. Work around this by ignoring a stat that reads as zero,
> > substituting the previously stored snapshot of the same stat.
> > This bug mainly manifests in the output of func_qstats, as FW
> > aggregates each ring's stat value to produce the per-function stat;
> > if one of them is zero, the per-function stat value ends up lower
> > than the previous snapshot, which shows up as a zero PPS value in
> > testpmd.
> > Eliminate the invocation of func_qstats and instead aggregate the
> > per-ring stat values in the driver itself to derive the func_qstats
> > output, after accounting for the spurious zero stat value.
> >
> > Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> > Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>
> Updated the commit log with the Fixes tag and
> applied to the dpdk-next-net-brcm for-next-net branch.
Reverting. I had missed the v2.
>
> > ---
> > drivers/net/bnxt/bnxt.h | 45 +++++++++++
> > drivers/net/bnxt/bnxt_ethdev.c | 14 ++++
> > drivers/net/bnxt/bnxt_hwrm.c | 106 +++++++++++++++++++++----
> > drivers/net/bnxt/bnxt_hwrm.h | 2 +
> > drivers/net/bnxt/bnxt_stats.c | 137 +++++++++++++++++++++++++++++++--
> > 5 files changed, 280 insertions(+), 24 deletions(-)
> >
> > diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
> > index db67bff127..f3108b0c0d 100644
> > --- a/drivers/net/bnxt/bnxt.h
> > +++ b/drivers/net/bnxt/bnxt.h
> > @@ -609,6 +609,49 @@ struct bnxt_flow_stat_info {
> > struct bnxt_ctx_mem_buf_info tx_fc_out_tbl;
> > };
> >
> > +struct bnxt_ring_stats {
> > + /* Number of transmitted unicast packets */
> > + uint64_t tx_ucast_pkts;
> > + /* Number of transmitted multicast packets */
> > + uint64_t tx_mcast_pkts;
> > + /* Number of transmitted broadcast packets */
> > + uint64_t tx_bcast_pkts;
> > + /* Number of packets discarded in transmit path */
> > + uint64_t tx_discard_pkts;
> > + /* Number of packets in transmit path with error */
> > + uint64_t tx_error_pkts;
> > + /* Number of transmitted bytes for unicast traffic */
> > + uint64_t tx_ucast_bytes;
> > + /* Number of transmitted bytes for multicast traffic */
> > + uint64_t tx_mcast_bytes;
> > + /* Number of transmitted bytes for broadcast traffic */
> > + uint64_t tx_bcast_bytes;
> > + /* Number of received unicast packets */
> > + uint64_t rx_ucast_pkts;
> > + /* Number of received multicast packets */
> > + uint64_t rx_mcast_pkts;
> > + /* Number of received broadcast packets */
> > + uint64_t rx_bcast_pkts;
> > + /* Number of packets discarded in receive path */
> > + uint64_t rx_discard_pkts;
> > + /* Number of packets in receive path with errors */
> > + uint64_t rx_error_pkts;
> > + /* Number of received bytes for unicast traffic */
> > + uint64_t rx_ucast_bytes;
> > + /* Number of received bytes for multicast traffic */
> > + uint64_t rx_mcast_bytes;
> > + /* Number of received bytes for broadcast traffic */
> > + uint64_t rx_bcast_bytes;
> > + /* Number of aggregated unicast packets */
> > + uint64_t rx_agg_pkts;
> > + /* Number of aggregated unicast bytes */
> > + uint64_t rx_agg_bytes;
> > + /* Number of aggregation events */
> > + uint64_t rx_agg_events;
> > + /* Number of aborted aggregations */
> > + uint64_t rx_agg_aborts;
> > +};
> > +
> > struct bnxt {
> > void *bar0;
> >
> > @@ -832,6 +875,7 @@ struct bnxt {
> > struct bnxt_flow_stat_info *flow_stat;
> > uint16_t max_num_kflows;
> > uint16_t tx_cfa_action;
> > + struct bnxt_ring_stats *prev_ring_stats[RTE_ETHDEV_QUEUE_STAT_CNTRS];
> > };
> >
> > static
> > @@ -999,5 +1043,6 @@ int bnxt_flow_stats_cnt(struct bnxt *bp);
> > uint32_t bnxt_get_speed_capabilities(struct bnxt *bp);
> > int bnxt_flow_ops_get_op(struct rte_eth_dev *dev,
> > const struct rte_flow_ops **ops);
> > +void bnxt_update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr);
> >
> > #endif
> > diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> > index 3778e28cca..c66c9b0d75 100644
> > --- a/drivers/net/bnxt/bnxt_ethdev.c
> > +++ b/drivers/net/bnxt/bnxt_ethdev.c
> > @@ -1557,12 +1557,17 @@ bnxt_uninit_locks(struct bnxt *bp)
> >
> > static void bnxt_drv_uninit(struct bnxt *bp)
> > {
> > + int i = 0;
> > +
> > bnxt_free_leds_info(bp);
> > bnxt_free_cos_queues(bp);
> > bnxt_free_link_info(bp);
> > bnxt_free_parent_info(bp);
> > bnxt_uninit_locks(bp);
> >
> > + for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++)
> > + rte_free(bp->prev_ring_stats[i]);
> > +
> > rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
> > bp->tx_mem_zone = NULL;
> > rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
> > @@ -4644,6 +4649,7 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
> > const struct rte_memzone *mz = NULL;
> > uint32_t total_alloc_len;
> > rte_iova_t mz_phys_addr;
> > + int i = 0;
> >
> > if (pci_dev->id.device_id == BROADCOM_DEV_ID_NS2)
> > return 0;
> > @@ -4724,6 +4730,14 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
> > bp->flags |= BNXT_FLAG_EXT_TX_PORT_STATS;
> > }
> >
> > + for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
> > + bp->prev_ring_stats[i] = rte_zmalloc("bnxt_prev_stats",
> > + sizeof(struct bnxt_ring_stats),
> > + 0);
> > + if (bp->prev_ring_stats[i] == NULL)
> > + return -ENOMEM;
> > + }
> > +
> > return 0;
> > }
> >
> > diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
> > index 3d04b93d88..b495cf1dd7 100644
> > --- a/drivers/net/bnxt/bnxt_hwrm.c
> > +++ b/drivers/net/bnxt/bnxt_hwrm.c
> > @@ -4305,12 +4305,25 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
> > return rc;
> > }
> >
> > -int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
> > - struct rte_eth_stats *stats, uint8_t rx)
> > +void bnxt_update_prev_stat(uint64_t *cntr, uint64_t *prev_cntr)
> > +{
> > + /* One of the HW stat values that make up this counter was zero as
> > + * returned by HW in this iteration, so use the previous
> > + * iteration's counter value
> > + */
> > + if (*prev_cntr && *cntr == 0)
> > + *cntr = *prev_cntr;
> > + else
> > + *prev_cntr = *cntr;
> > +}
> > +
> > +int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx,
> > + struct bnxt_ring_stats *ring_stats, bool rx)
> > {
> > int rc = 0;
> > struct hwrm_stat_ctx_query_input req = {.req_type = 0};
> > struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
> > + struct bnxt_ring_stats *prev_stats = bp->prev_ring_stats[idx];
> >
> > HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
> >
> > @@ -4321,21 +4334,82 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
> > HWRM_CHECK_RESULT();
> >
> > if (rx) {
> > - stats->q_ipackets[idx] = rte_le_to_cpu_64(resp->rx_ucast_pkts);
> > - stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_mcast_pkts);
> > - stats->q_ipackets[idx] += rte_le_to_cpu_64(resp->rx_bcast_pkts);
> > - stats->q_ibytes[idx] = rte_le_to_cpu_64(resp->rx_ucast_bytes);
> > - stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_mcast_bytes);
> > - stats->q_ibytes[idx] += rte_le_to_cpu_64(resp->rx_bcast_bytes);
> > - stats->q_errors[idx] = rte_le_to_cpu_64(resp->rx_discard_pkts);
> > - stats->q_errors[idx] += rte_le_to_cpu_64(resp->rx_error_pkts);
> > + ring_stats->rx_ucast_pkts = rte_le_to_cpu_64(resp->rx_ucast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_ucast_pkts,
> > + &prev_stats->rx_ucast_pkts);
> > +
> > + ring_stats->rx_mcast_pkts = rte_le_to_cpu_64(resp->rx_mcast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_mcast_pkts,
> > + &prev_stats->rx_mcast_pkts);
> > +
> > + ring_stats->rx_bcast_pkts = rte_le_to_cpu_64(resp->rx_bcast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_bcast_pkts,
> > + &prev_stats->rx_bcast_pkts);
> > +
> > + ring_stats->rx_ucast_bytes = rte_le_to_cpu_64(resp->rx_ucast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->rx_ucast_bytes,
> > + &prev_stats->rx_ucast_bytes);
> > +
> > + ring_stats->rx_mcast_bytes = rte_le_to_cpu_64(resp->rx_mcast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->rx_mcast_bytes,
> > + &prev_stats->rx_mcast_bytes);
> > +
> > + ring_stats->rx_bcast_bytes = rte_le_to_cpu_64(resp->rx_bcast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->rx_bcast_bytes,
> > + &prev_stats->rx_bcast_bytes);
> > +
> > + ring_stats->rx_discard_pkts = rte_le_to_cpu_64(resp->rx_discard_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_discard_pkts,
> > + &prev_stats->rx_discard_pkts);
> > +
> > + ring_stats->rx_error_pkts = rte_le_to_cpu_64(resp->rx_error_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_error_pkts,
> > + &prev_stats->rx_error_pkts);
> > +
> > + ring_stats->rx_agg_pkts = rte_le_to_cpu_64(resp->rx_agg_pkts);
> > + bnxt_update_prev_stat(&ring_stats->rx_agg_pkts,
> > + &prev_stats->rx_agg_pkts);
> > +
> > + ring_stats->rx_agg_bytes = rte_le_to_cpu_64(resp->rx_agg_bytes);
> > + bnxt_update_prev_stat(&ring_stats->rx_agg_bytes,
> > + &prev_stats->rx_agg_bytes);
> > +
> > + ring_stats->rx_agg_events = rte_le_to_cpu_64(resp->rx_agg_events);
> > + bnxt_update_prev_stat(&ring_stats->rx_agg_events,
> > + &prev_stats->rx_agg_events);
> > +
> > + ring_stats->rx_agg_aborts = rte_le_to_cpu_64(resp->rx_agg_aborts);
> > + bnxt_update_prev_stat(&ring_stats->rx_agg_aborts,
> > + &prev_stats->rx_agg_aborts);
> > } else {
> > - stats->q_opackets[idx] = rte_le_to_cpu_64(resp->tx_ucast_pkts);
> > - stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_mcast_pkts);
> > - stats->q_opackets[idx] += rte_le_to_cpu_64(resp->tx_bcast_pkts);
> > - stats->q_obytes[idx] = rte_le_to_cpu_64(resp->tx_ucast_bytes);
> > - stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_mcast_bytes);
> > - stats->q_obytes[idx] += rte_le_to_cpu_64(resp->tx_bcast_bytes);
> > + /* Tx */
> > + ring_stats->tx_ucast_pkts = rte_le_to_cpu_64(resp->tx_ucast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->tx_ucast_pkts,
> > + &prev_stats->tx_ucast_pkts);
> > +
> > + ring_stats->tx_mcast_pkts = rte_le_to_cpu_64(resp->tx_mcast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->tx_mcast_pkts,
> > + &prev_stats->tx_mcast_pkts);
> > +
> > + ring_stats->tx_bcast_pkts = rte_le_to_cpu_64(resp->tx_bcast_pkts);
> > + bnxt_update_prev_stat(&ring_stats->tx_bcast_pkts,
> > + &prev_stats->tx_bcast_pkts);
> > +
> > + ring_stats->tx_ucast_bytes = rte_le_to_cpu_64(resp->tx_ucast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->tx_ucast_bytes,
> > + &prev_stats->tx_ucast_bytes);
> > +
> > + ring_stats->tx_mcast_bytes = rte_le_to_cpu_64(resp->tx_mcast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->tx_mcast_bytes,
> > + &prev_stats->tx_mcast_bytes);
> > +
> > + ring_stats->tx_bcast_bytes = rte_le_to_cpu_64(resp->tx_bcast_bytes);
> > + bnxt_update_prev_stat(&ring_stats->tx_bcast_bytes,
> > + &prev_stats->tx_bcast_bytes);
> > +
> > + ring_stats->tx_discard_pkts = rte_le_to_cpu_64(resp->tx_discard_pkts);
> > + bnxt_update_prev_stat(&ring_stats->tx_discard_pkts,
> > + &prev_stats->tx_discard_pkts);
> > }
> >
> > HWRM_UNLOCK();
> > diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
> > index a7271b7876..8fe0602689 100644
> > --- a/drivers/net/bnxt/bnxt_hwrm.h
> > +++ b/drivers/net/bnxt/bnxt_hwrm.h
> > @@ -302,4 +302,6 @@ int bnxt_hwrm_fw_echo_reply(struct bnxt *bp, uint32_t echo_req_data1,
> > uint32_t echo_req_data2);
> > int bnxt_hwrm_poll_ver_get(struct bnxt *bp);
> > int bnxt_hwrm_rx_ring_reset(struct bnxt *bp, int queue_index);
> > +int bnxt_hwrm_ring_stats(struct bnxt *bp, uint32_t cid, int idx,
> > + struct bnxt_ring_stats *stats, bool rx);
> > #endif
> > diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
> > index 11767e06d0..0b49861894 100644
> > --- a/drivers/net/bnxt/bnxt_stats.c
> > +++ b/drivers/net/bnxt/bnxt_stats.c
> > @@ -506,8 +506,51 @@ void bnxt_free_stats(struct bnxt *bp)
> > }
> > }
> >
> > +static void bnxt_fill_rte_eth_stats(struct rte_eth_stats *stats,
> > + struct bnxt_ring_stats *ring_stats,
> > + unsigned int i, bool rx)
> > +{
> > + if (rx) {
> > + stats->q_ipackets[i] = ring_stats->rx_ucast_pkts;
> > + stats->q_ipackets[i] += ring_stats->rx_mcast_pkts;
> > + stats->q_ipackets[i] += ring_stats->rx_bcast_pkts;
> > +
> > + stats->ipackets += stats->q_ipackets[i];
> > +
> > + stats->q_ibytes[i] = ring_stats->rx_ucast_bytes;
> > + stats->q_ibytes[i] += ring_stats->rx_mcast_bytes;
> > + stats->q_ibytes[i] += ring_stats->rx_bcast_bytes;
> > +
> > + stats->ibytes += stats->q_ibytes[i];
> > +
> > + stats->q_errors[i] = ring_stats->rx_discard_pkts;
> > + stats->q_errors[i] += ring_stats->rx_error_pkts;
> > +
> > + /**< Total of RX packets dropped by the HW */
> > + stats->imissed += stats->q_errors[i];
> > + /**< Total number of erroneous received packets. */
> > + /* stats->ierrors = TBD - rx_error_pkts are also dropped,
> > + * so need to see what to do here
> > + */
> > + } else {
> > + stats->q_opackets[i] = ring_stats->tx_ucast_pkts;
> > + stats->q_opackets[i] += ring_stats->tx_mcast_pkts;
> > + stats->q_opackets[i] += ring_stats->tx_bcast_pkts;
> > +
> > + stats->opackets += stats->q_opackets[i];
> > +
> > + stats->q_obytes[i] = ring_stats->tx_ucast_bytes;
> > + stats->q_obytes[i] += ring_stats->tx_mcast_bytes;
> > + stats->q_obytes[i] += ring_stats->tx_bcast_bytes;
> > +
> > + stats->obytes += stats->q_obytes[i];
> > +
> > + stats->oerrors += ring_stats->tx_discard_pkts;
> > + }
> > +}
> > +
> > int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
> > - struct rte_eth_stats *bnxt_stats)
> > + struct rte_eth_stats *bnxt_stats)
> > {
> > int rc = 0;
> > unsigned int i;
> > @@ -527,13 +570,17 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
> > for (i = 0; i < num_q_stats; i++) {
> > struct bnxt_rx_queue *rxq = bp->rx_queues[i];
> > struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
> > + struct bnxt_ring_stats ring_stats = {0};
> >
> > if (!rxq->rx_started)
> > continue;
> > - rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
> > - bnxt_stats, 1);
> > +
> > + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
> > + &ring_stats, true);
> > if (unlikely(rc))
> > return rc;
> > +
> > + bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, true);
> > bnxt_stats->rx_nombuf +=
> > rte_atomic64_read(&rxq->rx_mbuf_alloc_fail);
> > }
> > @@ -544,16 +591,19 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
> > for (i = 0; i < num_q_stats; i++) {
> > struct bnxt_tx_queue *txq = bp->tx_queues[i];
> > struct bnxt_cp_ring_info *cpr = txq->cp_ring;
> > + struct bnxt_ring_stats ring_stats = {0};
> >
> > if (!txq->tx_started)
> > continue;
> > - rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
> > - bnxt_stats, 0);
> > +
> > + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
> > + &ring_stats, false);
> > if (unlikely(rc))
> > return rc;
> > +
> > + bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, false);
> > }
> >
> > - rc = bnxt_hwrm_func_qstats(bp, 0xffff, bnxt_stats, NULL);
> > return rc;
> > }
> >
> > @@ -582,6 +632,40 @@ int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev)
> > return ret;
> > }
> >
> > +static void bnxt_fill_func_qstats(struct hwrm_func_qstats_output *func_qstats,
> > + struct bnxt_ring_stats *ring_stats,
> > + bool rx)
> > +{
> > + if (rx) {
> > + func_qstats->rx_ucast_pkts += ring_stats->rx_ucast_pkts;
> > + func_qstats->rx_mcast_pkts += ring_stats->rx_mcast_pkts;
> > + func_qstats->rx_bcast_pkts += ring_stats->rx_bcast_pkts;
> > +
> > + func_qstats->rx_ucast_bytes += ring_stats->rx_ucast_bytes;
> > + func_qstats->rx_mcast_bytes += ring_stats->rx_mcast_bytes;
> > + func_qstats->rx_bcast_bytes += ring_stats->rx_bcast_bytes;
> > +
> > + func_qstats->rx_discard_pkts += ring_stats->rx_discard_pkts;
> > + func_qstats->rx_drop_pkts += ring_stats->rx_error_pkts;
> > +
> > + func_qstats->rx_agg_pkts += ring_stats->rx_agg_pkts;
> > + func_qstats->rx_agg_bytes += ring_stats->rx_agg_bytes;
> > + func_qstats->rx_agg_events += ring_stats->rx_agg_events;
> > + func_qstats->rx_agg_aborts += ring_stats->rx_agg_aborts;
> > + } else {
> > + func_qstats->tx_ucast_pkts += ring_stats->tx_ucast_pkts;
> > + func_qstats->tx_mcast_pkts += ring_stats->tx_mcast_pkts;
> > + func_qstats->tx_bcast_pkts += ring_stats->tx_bcast_pkts;
> > +
> > + func_qstats->tx_ucast_bytes += ring_stats->tx_ucast_bytes;
> > + func_qstats->tx_mcast_bytes += ring_stats->tx_mcast_bytes;
> > + func_qstats->tx_bcast_bytes += ring_stats->tx_bcast_bytes;
> > +
> > + func_qstats->tx_drop_pkts += ring_stats->tx_error_pkts;
> > + func_qstats->tx_discard_pkts += ring_stats->tx_discard_pkts;
> > + }
> > +}
> > +
> > int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
> > struct rte_eth_xstat *xstats, unsigned int n)
> > {
> > @@ -591,7 +675,7 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
> > unsigned int tx_port_stats_ext_cnt;
> > unsigned int stat_size = sizeof(uint64_t);
> > struct hwrm_func_qstats_output func_qstats = {0};
> > - unsigned int stat_count;
> > + unsigned int stat_count, num_q_stats;
> > int rc;
> >
> > rc = is_bnxt_in_error(bp);
> > @@ -608,7 +692,44 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
> > if (n < stat_count || xstats == NULL)
> > return stat_count;
> >
> > - bnxt_hwrm_func_qstats(bp, 0xffff, NULL, &func_qstats);
> > + num_q_stats = RTE_MIN(bp->rx_cp_nr_rings,
> > + (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
> > +
> > + for (i = 0; i < num_q_stats; i++) {
> > + struct bnxt_rx_queue *rxq = bp->rx_queues[i];
> > + struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
> > + struct bnxt_ring_stats ring_stats = {0};
> > +
> > + if (!rxq->rx_started)
> > + continue;
> > +
> > + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
> > + &ring_stats, true);
> > + if (unlikely(rc))
> > + return rc;
> > +
> > + bnxt_fill_func_qstats(&func_qstats, &ring_stats, true);
> > + }
> > +
> > + num_q_stats = RTE_MIN(bp->tx_cp_nr_rings,
> > + (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
> > +
> > + for (i = 0; i < num_q_stats; i++) {
> > + struct bnxt_tx_queue *txq = bp->tx_queues[i];
> > + struct bnxt_cp_ring_info *cpr = txq->cp_ring;
> > + struct bnxt_ring_stats ring_stats = {0};
> > +
> > + if (!txq->tx_started)
> > + continue;
> > +
> > + rc = bnxt_hwrm_ring_stats(bp, cpr->hw_stats_ctx_id, i,
> > + &ring_stats, false);
> > + if (unlikely(rc))
> > + return rc;
> > +
> > + bnxt_fill_func_qstats(&func_qstats, &ring_stats, false);
> > + }
> > +
> > bnxt_hwrm_port_qstats(bp);
> > bnxt_hwrm_ext_port_qstats(bp);
> > rx_port_stats_ext_cnt = RTE_MIN(RTE_DIM(bnxt_rx_ext_stats_strings),
> > --
> > 2.28.0.497.g54e85e7
> >
Thread overview: 7+ messages
2021-05-27 6:19 Somnath Kotur
2021-05-31 5:53 ` [dpdk-dev] [PATCH v2 " Somnath Kotur
2021-06-08 21:08 ` Ajit Khaparde
2021-06-08 17:47 ` [dpdk-dev] [PATCH " Ajit Khaparde
2021-06-08 17:55 ` Ajit Khaparde [this message]
2021-05-27 12:48 Owen Hilyard
2021-05-27 14:31 ` Somnath Kotur