From: "Liu, Mingxia" <mingxia.liu@intel.com>
To: "Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Wu,  Jingjing" <jingjing.wu@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>
Subject: RE: [PATCH v6 1/6] common/idpf: add hw statistics
Date: Wed, 8 Feb 2023 08:28:50 +0000	[thread overview]
Message-ID: <PH0PR11MB5877EC887E5F34EE2F6A61EEECD89@PH0PR11MB5877.namprd11.prod.outlook.com> (raw)
In-Reply-To: <DM4PR11MB59946DF079C4AC6ED6FDD8F9D7D89@DM4PR11MB5994.namprd11.prod.outlook.com>

Thanks, I will update the module name.
Also, for the record, ./devtools/check-git-log.sh reports no warning for this patch.
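
For anyone following the thread, here is a minimal sketch (not part of the patch) of how the
new counters surface to applications through the standard ethdev calls that end up in
idpf_dev_stats_get()/idpf_dev_stats_reset(). The helper name and port id are illustrative only,
and the port is assumed to be an already configured and started idpf port:

	#include <inttypes.h>
	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Read the per-port counters backed by the new stats_get op,
	 * then clear them via the new stats_reset op. */
	static void
	show_and_clear_stats(uint16_t port_id)
	{
		struct rte_eth_stats stats;

		if (rte_eth_stats_get(port_id, &stats) == 0) {
			printf("rx: %" PRIu64 " pkts, %" PRIu64 " bytes\n",
			       stats.ipackets, stats.ibytes);
			printf("tx: %" PRIu64 " pkts, %" PRIu64 " bytes\n",
			       stats.opackets, stats.obytes);
			printf("imissed: %" PRIu64 ", rx_nombuf: %" PRIu64 "\n",
			       stats.imissed, stats.rx_nombuf);
		}

		/* Re-baselines the vport counters and clears the per-queue
		 * mbuf_alloc_failed counters in the PMD. */
		rte_eth_stats_reset(port_id);
	}

Note that, per the CRC adjustment in idpf_dev_stats_get(), ibytes excludes the Ethernet CRC
unless RTE_ETH_RX_OFFLOAD_KEEP_CRC is configured.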

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Wednesday, February 8, 2023 10:00 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v6 1/6] common/idpf: add hw statistics
> 
> 
> 
> > -----Original Message-----
> > From: Liu, Mingxia <mingxia.liu@intel.com>
> > Sent: Tuesday, February 7, 2023 6:17 PM
> > To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: Liu, Mingxia <mingxia.liu@intel.com>
> > Subject: [PATCH v6 1/6] common/idpf: add hw statistics
> 
> I suggest running ./devtools/check-git-log.sh to catch and fix any title warnings if possible.
> Also, since the main purpose of this patch is to support the stats_get/stats_reset API,
> the "net/idpf" prefix is more appropriate than "common/idpf".
> 
> Please apply the same fix to any other patches with a similar issue.
> 
> >
> > This patch adds hardware packet/byte statistics.
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  drivers/common/idpf/idpf_common_device.c   | 17 +++++
> >  drivers/common/idpf/idpf_common_device.h   |  4 +
> >  drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
> >  drivers/common/idpf/idpf_common_virtchnl.h |  3 +
> >  drivers/common/idpf/version.map            |  2 +
> >  drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
> >  6 files changed, 139 insertions(+)
> >
> > diff --git a/drivers/common/idpf/idpf_common_device.c
> > b/drivers/common/idpf/idpf_common_device.c
> > index 48b3e3c0dd..5475a3e52c 100644
> > --- a/drivers/common/idpf/idpf_common_device.c
> > +++ b/drivers/common/idpf/idpf_common_device.c
> > @@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
> >  	return 0;
> >  }
> >
> > +void
> > +idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
> > +{
> > +	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
> > +	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
> > +	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
> > +	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
> > +	nes->rx_errors = nes->rx_errors - oes->rx_errors;
> > +	nes->rx_discards = nes->rx_discards - oes->rx_discards;
> > +	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
> > +	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
> > +	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
> > +	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
> > +	nes->tx_errors = nes->tx_errors - oes->tx_errors;
> > +	nes->tx_discards = nes->tx_discards - oes->tx_discards;
> > +}
> > +
> >  RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
> > diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
> > index 545117df79..1d8e7d405a 100644
> > --- a/drivers/common/idpf/idpf_common_device.h
> > +++ b/drivers/common/idpf/idpf_common_device.h
> > @@ -115,6 +115,8 @@ struct idpf_vport {
> >  	bool tx_vec_allowed;
> >  	bool rx_use_avx512;
> >  	bool tx_use_avx512;
> > +
> > +	struct virtchnl2_vport_stats eth_stats_offset;
> >  };
> >
> >  /* Message type read in virtual channel from PF */
> > @@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
> >  __rte_internal
> >  int idpf_vport_info_init(struct idpf_vport *vport,
> >  			 struct virtchnl2_create_vport *vport_info);
> > +__rte_internal
> > +void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes,
> > +			     struct virtchnl2_vport_stats *nes);
> >
> >  #endif /* _IDPF_COMMON_DEVICE_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > index 31fadefbd3..40cff34c09 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.c
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> > @@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
> >  	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> >  	case VIRTCHNL2_OP_ALLOC_VECTORS:
> >  	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> > +	case VIRTCHNL2_OP_GET_STATS:
> >  		/* for init virtchnl ops, need to poll the response */
> >  		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size,
> >  					   args->out_buffer);
> >  		clear_cmd(adapter);
> > @@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
> >  	return err;
> >  }
> >
> > +int
> > +idpf_vc_stats_query(struct idpf_vport *vport,
> > +		struct virtchnl2_vport_stats **pstats)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport_stats vport_stats;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vport_stats.vport_id = vport->vport_id;
> > +	args.ops = VIRTCHNL2_OP_GET_STATS;
> > +	args.in_args = (u8 *)&vport_stats;
> > +	args.in_args_size = sizeof(vport_stats);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_vc_cmd_execute(adapter, &args);
> > +	if (err) {
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
> > +		*pstats = NULL;
> > +		return err;
> > +	}
> > +	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
> > +	return 0;
> > +}
> > +
> >  #define IDPF_RX_BUF_STRIDE		64
> >  int
> >  idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
> > index c105f02836..6b94fd5b8f 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.h
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> > @@ -49,4 +49,7 @@ __rte_internal
> >  int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
> >  __rte_internal
> >  int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
> > +__rte_internal
> > +int idpf_vc_stats_query(struct idpf_vport *vport,
> > +			struct virtchnl2_vport_stats **pstats);
> >  #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> > diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> > index 8b33130bd6..e6a02828ba 100644
> > --- a/drivers/common/idpf/version.map
> > +++ b/drivers/common/idpf/version.map
> > @@ -46,6 +46,7 @@ INTERNAL {
> >  	idpf_vc_rss_key_set;
> >  	idpf_vc_rss_lut_set;
> >  	idpf_vc_rxq_config;
> > +	idpf_vc_stats_query;
> >  	idpf_vc_txq_config;
> >  	idpf_vc_vectors_alloc;
> >  	idpf_vc_vectors_dealloc;
> > @@ -59,6 +60,7 @@ INTERNAL {
> >  	idpf_vport_irq_map_config;
> >  	idpf_vport_irq_unmap_config;
> >  	idpf_vport_rss_config;
> > +	idpf_vport_stats_update;
> >
> >  	local: *;
> >  };
> > diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> > index 33f5e90743..02ddb0330a 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
> >  	return ptypes;
> >  }
> >
> > +static uint64_t
> > +idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
> > +{
> > +	uint64_t mbuf_alloc_failed = 0;
> > +	struct idpf_rx_queue *rxq;
> > +	int i = 0;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		rxq = dev->data->rx_queues[i];
> > +		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
> > +						     __ATOMIC_RELAXED);
> > +	}
> > +
> > +	return mbuf_alloc_failed;
> > +}
> > +
> > +static int
> > +idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > +{
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	int ret;
> > +
> > +	ret = idpf_vc_stats_query(vport, &pstats);
> > +	if (ret == 0) {
> > +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> > +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> > +					 RTE_ETHER_CRC_LEN;
> > +
> > +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> > +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> > +				pstats->rx_broadcast - pstats->rx_discards;
> > +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> > +						pstats->tx_unicast;
> > +		stats->imissed = pstats->rx_discards;
> > +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> > +		stats->ibytes = pstats->rx_bytes;
> > +		stats->ibytes -= stats->ipackets * crc_stats_len;
> > +		stats->obytes = pstats->tx_bytes;
> > +
> > +		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
> > +		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> > +	} else {
> > +		PMD_DRV_LOG(ERR, "Get statistics failed");
> > +	}
> > +	return ret;
> > +}
> > +
> > +static void
> > +idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
> > +{
> > +	struct idpf_rx_queue *rxq;
> > +	int i;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		rxq = dev->data->rx_queues[i];
> > +		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0,
> > +				 __ATOMIC_RELAXED);
> > +	}
> > +}
> > +
> > +static int
> > +idpf_dev_stats_reset(struct rte_eth_dev *dev)
> > +{
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	int ret;
> > +
> > +	ret = idpf_vc_stats_query(vport, &pstats);
> > +	if (ret != 0)
> > +		return ret;
> > +
> > +	/* set stats offset based on current values */
> > +	vport->eth_stats_offset = *pstats;
> > +
> > +	idpf_reset_mbuf_alloc_failed_stats(dev);
> > +
> > +	return 0;
> > +}
> > +
> >  static int
> >  idpf_init_rss(struct idpf_vport *vport)
> >  {
> > @@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
> >  		goto err_vport;
> >  	}
> >
> > +	if (idpf_dev_stats_reset(dev))
> > +		PMD_DRV_LOG(ERR, "Failed to reset stats");
> > +
> >  	vport->stopped = 0;
> >
> >  	return 0;
> > @@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
> >  	.tx_queue_release		= idpf_dev_tx_queue_release,
> >  	.mtu_set			= idpf_dev_mtu_set,
> >  	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
> > +	.stats_get			= idpf_dev_stats_get,
> > +	.stats_reset			= idpf_dev_stats_reset,
> >  };
> >
> >  static uint16_t
> > --
> > 2.25.1
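
One note on the reset scheme above, in case it helps review: stats_reset does not clear
anything in hardware; it snapshots the absolute vport counters into eth_stats_offset, and the
next stats_get reports the difference via idpf_vport_stats_update(). A small standalone
illustration of that idea (names here are illustrative only, not the driver's):

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative only: absolute, ever-increasing device counters. */
	struct counters {
		uint64_t rx_bytes;
		uint64_t tx_bytes;
	};

	/* "reset": remember the current absolute values as the baseline. */
	static void
	counters_reset(struct counters *offset, const struct counters *now)
	{
		*offset = *now;
	}

	/* "get": report values relative to the last baseline. */
	static struct counters
	counters_get(const struct counters *offset, const struct counters *now)
	{
		struct counters delta = {
			.rx_bytes = now->rx_bytes - offset->rx_bytes,
			.tx_bytes = now->tx_bytes - offset->tx_bytes,
		};
		return delta;
	}

	int
	main(void)
	{
		struct counters hw = { .rx_bytes = 1000, .tx_bytes = 500 };
		struct counters base = { 0, 0 };

		counters_reset(&base, &hw);	/* user calls stats_reset */
		hw.rx_bytes = 1600;		/* traffic keeps flowing */
		hw.tx_bytes = 900;

		struct counters d = counters_get(&base, &hw);
		printf("rx since reset: %" PRIu64 ", tx since reset: %" PRIu64 "\n",
		       d.rx_bytes, d.tx_bytes);
		return 0;
	}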


Thread overview: 63+ messages
2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-01  8:48       ` Wu, Jingjing
2023-02-01 12:34         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-02  3:28       ` Wu, Jingjing
2023-02-07  3:10         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-02  3:45       ` Wu, Jingjing
2023-02-02  7:19         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-02  4:23       ` Wu, Jingjing
2023-02-02  7:39         ` Liu, Mingxia
2023-02-02  8:46           ` Wu, Jingjing
2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-08  2:00               ` Zhang, Qi Z
2023-02-08  8:28                 ` Liu, Mingxia [this message]
2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu
