DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics
@ 2020-04-02 11:26 Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
                   ` (4 more replies)
  0 siblings, 5 replies; 32+ messages in thread
From: Matan Azrad @ 2020-04-02 11:26 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

The vDPA device offloads the entire datapath of the vhost device to the HW device.

In order to expose traffic information to the user, this series introduces a new API to get traffic statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW device.

See RFC https://patchwork.dpdk.org/patch/66716/

Support for it is also added to the vdpa/mlx5 driver and to the vdpa example application.
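
For reference, a minimal usage sketch of the new API (hypothetical
application code, not part of this series; it assumes a vDPA device is
already registered and configured):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_vdpa.h>

    /* Dump the statistics of queue `qid` of vDPA device `did`. */
    static int
    dump_queue_stats(int did, uint16_t qid)
    {
            struct rte_vdpa_queue_stats stats;
            int ret = rte_vdpa_get_stats(did, qid, &stats);

            if (ret != 0)
                    return ret; /* -ENODEV, -ENOTSUP or a driver error. */
            printf("vq %u: %" PRIu64 " received, %" PRIu64 " completed, %"
                   PRIu32 " errors\n", (unsigned int)qid,
                   stats.received_desc, stats.completed_desc, stats.errors);
            return 0;
    }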

Matan Azrad (4):
  vhost: introduce operation to get vDPA queue stats
  common/mlx5: support DevX virtq stats operations
  vdpa/mlx5: support virtio queue statistics get
  examples/vdpa: add statistics show command

 doc/guides/rel_notes/release_20_05.rst          |  4 ++
 doc/guides/sample_app_ug/vdpa.rst               |  3 +-
 doc/guides/vdpadevs/features/default.ini        |  1 +
 doc/guides/vdpadevs/features/mlx5.ini           |  1 +
 doc/guides/vdpadevs/features_overview.rst       |  3 +
 drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            | 43 ++++++++++++++
 drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |  7 +++
 drivers/vdpa/mlx5/mlx5_vdpa.c                   | 28 +++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h                   | 16 +++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             | 43 ++++++++++++++
 examples/vdpa/main.c                            | 78 +++++++++++++++++++++++++
 lib/librte_vhost/rte_vdpa.h                     | 45 +++++++++++++-
 lib/librte_vhost/rte_vhost_version.map          |  1 +
 lib/librte_vhost/vdpa.c                         | 14 +++++
 16 files changed, 383 insertions(+), 3 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
@ 2020-04-02 11:26 ` Matan Azrad
  2020-04-15 14:36   ` Maxime Coquelin
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-04-02 11:26 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

The vDPA device offloads the entire datapath of the vhost device to the HW
device.

In order to expose traffic information to the user, this patch introduces a
new API to get traffic statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW
device.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 doc/guides/rel_notes/release_20_05.rst    |  4 +++
 doc/guides/vdpadevs/features/default.ini  |  1 +
 doc/guides/vdpadevs/features_overview.rst |  3 +++
 lib/librte_vhost/rte_vdpa.h               | 45 ++++++++++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map    |  1 +
 lib/librte_vhost/vdpa.c                   | 14 ++++++++++
 6 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index c960fd2..812d19b 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -56,6 +56,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added vDPA device API to query virtio queue statistics.**
+
+  A new API has been added to query virtio queue statistics from a vDPA device.
+
 * **Updated Mellanox mlx5 driver.**
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
diff --git a/doc/guides/vdpadevs/features/default.ini b/doc/guides/vdpadevs/features/default.ini
index 518e4f1..2c122a3 100644
--- a/doc/guides/vdpadevs/features/default.ini
+++ b/doc/guides/vdpadevs/features/default.ini
@@ -37,6 +37,7 @@ proto rarp           =
 proto reply ack      =
 proto host notifier  =
 proto pagefault      =
+queue statistics     =
 BSD nic_uio          =
 Linux VFIO           =
 Other kdrv           =
diff --git a/doc/guides/vdpadevs/features_overview.rst b/doc/guides/vdpadevs/features_overview.rst
index eb7eb3b..930bc87 100644
--- a/doc/guides/vdpadevs/features_overview.rst
+++ b/doc/guides/vdpadevs/features_overview.rst
@@ -96,6 +96,9 @@ proto host notifier
 proto pagefault
   Slave expose page-fault FD for migration process.
 
+queue statistics
+  Support virtio queue statistics query.
+
 BSD nic_uio
   BSD ``nic_uio`` module supported.
 
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index 9a3deb3..d6cbf48 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -37,6 +37,27 @@ struct rte_vdpa_dev_addr {
 	};
 };
 
+struct rte_vdpa_queue_stats {
+	/** Number of descriptors received by device */
+	uint64_t received_desc;
+	/** Number of descriptors completed by the device */
+	uint64_t completed_desc;
+	/** Number of bad descriptors received by device */
+	uint32_t bad_desc;
+	/**
+	 * Number of chained descriptors received that exceed the max allowed
+	 * chain by device
+	 */
+	uint32_t exceed_max_chain;
+	/**
+	 * Number of times device tried to read or write buffer that is not
+	 * registered to the device
+	 */
+	uint32_t invalid_buffer;
+	/** Number of errors detected by the device */
+	uint32_t errors;
+};
+
 /**
  * vdpa device operations
  */
@@ -73,8 +94,11 @@ struct rte_vdpa_dev_ops {
 	int (*get_notify_area)(int vid, int qid,
 			uint64_t *offset, uint64_t *size);
 
+	/** Get statistics of the queue */
+	int (*get_stats)(int did, int qid, struct rte_vdpa_queue_stats *stats);
+
 	/** Reserved for future extension */
-	void *reserved[5];
+	void *reserved[4];
 };
 
 /**
@@ -200,4 +224,23 @@ struct rte_vdpa_device *
 __rte_experimental
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get vDPA device queue statistics.
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @param stats
+ *  queue statistics pointer.
+ * @return
+ *  0 on success, non-zero on failure.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats);
 #endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 051d08c..c9dcff4 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -38,6 +38,7 @@ EXPERIMENTAL {
 	rte_vdpa_find_device_id;
 	rte_vdpa_get_device;
 	rte_vdpa_get_device_num;
+	rte_vdpa_get_stats;
 	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_get_vdpa_device_id;
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
index 2b86708..57900fc 100644
--- a/lib/librte_vhost/vdpa.c
+++ b/lib/librte_vhost/vdpa.c
@@ -227,3 +227,17 @@ struct rte_vdpa_device *
 		free_ind_table(idesc);
 	return -1;
 }
+
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats(did, qid, stats);
+}
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/4] common/mlx5: support DevX virtq stats operations
  2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
@ 2020-04-02 11:26 ` Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-04-02 11:26 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

Add DevX API to create and query virtio queue statistics from the HW.
The following counters are supported by the HW per virtio queue (see the
usage sketch below):
	received_desc.
	completed_desc.
	error_cqes.
	bad_desc_errors.
	exceed_max_chain.
	invalid_buffer.
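
For reference, a rough sketch of the intended call flow with the two new
commands, roughly as the vdpa/mlx5 driver uses them in the next patch
(error handling trimmed; ctx is the DevX device context):

    static int
    virtq_counters_roundtrip(void *ctx)
    {
            struct mlx5_devx_virtio_q_couners_attr attr = { 0 };
            struct mlx5_devx_obj *counters;
            int ret;

            /* Created before the virtq; its id goes into the virtq
             * counter_set_id field at virtq creation time. */
            counters = mlx5_devx_cmd_create_virtio_q_counters(ctx);
            if (counters == NULL)
                    return -rte_errno;
            /* Can be queried at any time while the virtq is alive. */
            ret = mlx5_devx_cmd_query_virtio_q_counters(counters, &attr);
            /* Destroyed together with the virtq. */
            claim_zero(mlx5_devx_cmd_destroy(counters));
            return ret;
    }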

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            | 43 +++++++++++++++
 drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |  7 +++
 4 files changed, 148 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 230ac58..cf7f914 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -460,6 +460,9 @@ struct mlx5_devx_obj *
 	attr->vdpa.valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
 					 general_obj_types) &
 			      MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q);
+	attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+							general_obj_types) &
+				  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
 	if (attr->qos.sup) {
 		MLX5_SET(query_hca_cap_in, in, op_mod,
 			 MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1254,6 +1257,7 @@ struct mlx5_devx_obj *
 	MLX5_SET(virtio_q, virtctx, umem_3_id, attr->umems[2].id);
 	MLX5_SET(virtio_q, virtctx, umem_3_size, attr->umems[2].size);
 	MLX5_SET64(virtio_q, virtctx, umem_3_offset, attr->umems[2].offset);
+	MLX5_SET(virtio_q, virtctx, counter_set_id, attr->counters_obj_id);
 	MLX5_SET(virtio_net_q, virtq, tisn_or_qpn, attr->tis_id);
 	virtq_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
 						    sizeof(out));
@@ -1528,3 +1532,72 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
+						       sizeof(*couners_obj), 0);
+	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
+
+	if (!couners_obj) {
+		DRV_LOG(ERR, "Failed to allocate virtio queue counters data.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+						      sizeof(out));
+	if (!couners_obj->obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
+			" DevX.");
+		rte_free(couners_obj);
+		return NULL;
+	}
+	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+	return couners_obj;
+}
+
+int
+mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				   struct mlx5_devx_virtio_q_couners_attr *attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0};
+	void *hdr = MLX5_ADDR_OF(query_virtio_q_counters_out, in, hdr);
+	void *virtio_q_counters = MLX5_ADDR_OF(query_virtio_q_counters_out, out,
+					       virtio_q_counters);
+	int ret;
+
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id);
+	ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out,
+					sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to query virtio q counters using DevX.");
+		rte_errno = errno;
+		return -errno;
+	}
+	attr->received_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					 received_desc);
+	attr->completed_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					  completed_desc);
+	attr->error_cqes = MLX5_GET(virtio_q_counters, virtio_q_counters,
+				    error_cqes);
+	attr->bad_desc_errors = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					 bad_desc_errors);
+	attr->exceed_max_chain = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					  exceed_max_chain);
+	attr->invalid_buffer = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					invalid_buffer);
+	return ret;
+}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 6a101e5..2ad9fcc 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -64,6 +64,7 @@ struct mlx5_hca_vdpa_attr {
 	uint32_t event_mode:3;
 	uint32_t log_doorbell_stride:5;
 	uint32_t log_doorbell_bar_size:5;
+	uint32_t queue_counters_valid:1;
 	uint32_t max_num_virtio_queues;
 	struct {
 		uint32_t a;
@@ -270,6 +271,7 @@ struct mlx5_devx_virtq_attr {
 	uint32_t qp_id;
 	uint32_t queue_index;
 	uint32_t tis_id;
+	uint32_t counters_obj_id;
 	uint64_t dirty_bitmap_addr;
 	uint64_t type;
 	uint64_t desc_addr;
@@ -298,6 +300,15 @@ struct mlx5_devx_qp_attr {
 	uint64_t wq_umem_offset;
 };
 
+struct mlx5_devx_virtio_q_couners_attr {
+	uint64_t received_desc;
+	uint64_t completed_desc;
+	uint32_t error_cqes;
+	uint32_t bad_desc_errors;
+	uint32_t exceed_max_chain;
+	uint32_t invalid_buffer;
+};
+
 /* mlx5_devx_cmds.c */
 
 struct mlx5_devx_obj *mlx5_devx_cmd_flow_counter_alloc(void *ctx,
@@ -348,5 +359,37 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
 				  uint32_t qp_st_mod_op, uint32_t remote_qp_id);
 int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			     struct mlx5_devx_rqt_attr *rqt_attr);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create virtio queue counters object DevX API.
+ *
+ * @param[in] ctx
+ *   Device context.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Query virtio queue counters object using DevX API.
+ *
+ * @param[in] couners_obj
+ *   Pointer to virtq object structure.
+ * @param[in,out] attr
+ *   Pointer to virtio queue counters attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				  struct mlx5_devx_virtio_q_couners_attr *attr);
 
 #endif /* RTE_PMD_MLX5_DEVX_CMDS_H_ */
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 00fd7c1..8497725 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -938,6 +938,7 @@ enum {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
+	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
 };
 
 enum {
@@ -1994,6 +1995,7 @@ struct mlx5_ifc_create_cq_in_bits {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
+	MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 };
 
 struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2012,6 +2014,27 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits {
 	u8 reserved_at_60[0x20];
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_40[0x40];
+	u8 received_desc[0x40];
+	u8 completed_desc[0x40];
+	u8 error_cqes[0x20];
+	u8 bad_desc_errors[0x20];
+	u8 exceed_max_chain[0x20];
+	u8 invalid_buffer[0x20];
+	u8 reserved_at_180[0x50];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
 enum {
 	MLX5_VIRTQ_STATE_INIT = 0,
 	MLX5_VIRTQ_STATE_RDY = 1,
@@ -2052,7 +2075,8 @@ struct mlx5_ifc_virtio_q_bits {
 	u8 umem_3_id[0x20];
 	u8 umem_3_size[0x20];
 	u8 umem_3_offset[0x40];
-	u8 reserved_at_300[0x100];
+	u8 counter_set_id[0x20];
+	u8 reserved_at_320[0xe0];
 };
 
 struct mlx5_ifc_virtio_net_q_bits {
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index aede2a0..f5d9775 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -49,3 +49,10 @@ DPDK_20.0.1 {
 
 	mlx5_translate_port_name;
 };
+
+EXPERIMENTAL {
+	global:
+
+	mlx5_devx_cmd_create_virtio_q_counters;
+	mlx5_devx_cmd_query_virtio_q_counters;
+};
\ No newline at end of file
-- 
1.8.3.1



* [dpdk-dev] [PATCH 3/4] vdpa/mlx5: support virtio queue statistics get
  2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-04-02 11:26 ` Matan Azrad
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 4/4] examples/vdpa: add statistics show command Matan Azrad
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  4 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-04-02 11:26 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

Add support for the get_stats operation.

A DevX counter object is allocated per virtq in order to manage the
virtq statistics.

The counter object is allocated before the virtq is created and destroyed
after the virtq is destroyed, so the statistics are valid only during the
lifetime of the virtq.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 doc/guides/vdpadevs/features/mlx5.ini |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 28 +++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h         | 16 +++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 43 +++++++++++++++++++++++++++++++++++
 4 files changed, 88 insertions(+)

diff --git a/doc/guides/vdpadevs/features/mlx5.ini b/doc/guides/vdpadevs/features/mlx5.ini
index 1da9c1b..788d4e0 100644
--- a/doc/guides/vdpadevs/features/mlx5.ini
+++ b/doc/guides/vdpadevs/features/mlx5.ini
@@ -17,6 +17,7 @@ packed               = Y
 proto mq             = Y
 proto log shmfd      = Y
 proto host notifier  = Y
+queue statistics     = Y
 Other kdrv           = Y
 ARMv8                = Y
 Power8               = Y
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index fe17ced..91d8f96 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -274,6 +274,31 @@
 	return 0;
 }
 
+static int
+mlx5_vdpa_get_stats(int did, int qid, struct rte_vdpa_queue_stats *stats)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics are not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_get(priv, qid, stats);
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
 	.get_queue_num = mlx5_vdpa_get_queue_num,
 	.get_features = mlx5_vdpa_get_vdpa_features,
@@ -286,6 +311,7 @@
 	.get_vfio_group_fd = NULL,
 	.get_vfio_device_fd = mlx5_vdpa_get_device_fd,
 	.get_notify_area = mlx5_vdpa_get_notify_area,
+	.get_stats = mlx5_vdpa_get_stats,
 };
 
 static struct ibv_device *
@@ -489,6 +515,8 @@
 		rte_errno = ENOTSUP;
 		goto error;
 	}
+	if (!attr.vdpa.queue_counters_valid)
+		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
 	priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) +
 			   sizeof(struct mlx5_vdpa_virtq) *
 			   attr.vdpa.max_num_virtio_queues * 2,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcc216a..35b2be1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -76,6 +76,7 @@ struct mlx5_vdpa_virtq {
 	uint16_t vq_size;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
+	struct mlx5_devx_obj *counters;
 	struct mlx5_vdpa_event_qp eqp;
 	struct {
 		struct mlx5dv_devx_umem *obj;
@@ -352,4 +353,19 @@ int mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
  */
 int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index);
 
+/**
+ * Get virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ * @param stats
+ *   The virtq statistics structure to fill.
+ *
+ * @return
+ *   0 on success and @p stats is updated, a negative value otherwise.
+ */
+int mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			      struct rte_vdpa_queue_stats *stats);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index defb9e1..9fea6e9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -72,6 +72,10 @@
 			rte_free(virtq->umems[i].buf);
 	}
 	memset(&virtq->umems, 0, sizeof(virtq->umems));
+	if (virtq->counters) {
+		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
+		virtq->counters = NULL;
+	}
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	return 0;
@@ -205,6 +209,16 @@
 		DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no"
 			" need event QPs and event mechanism.", index);
 	}
+	if (priv->caps.queue_counters_valid) {
+		virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
+								    (priv->ctx);
+		if (!virtq->counters) {
+			DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+				" %d.", index);
+			goto error;
+		}
+		attr.counters_obj_id = virtq->counters->id;
+	}
 	/* Setup 3 UMEMs for each virtq. */
 	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
 		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
@@ -448,3 +462,32 @@
 	}
 	return 0;
 }
+
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_queue_stats *stats)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	int ret;
+
+	if (!virtq->virtq) {
+		DRV_LOG(ERR, "Virtq %d is not configured.", qid);
+		return -EINVAL;
+	}
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
+		return ret;
+	}
+	*stats = (struct rte_vdpa_queue_stats) {
+			.received_desc = attr.received_desc,
+			.completed_desc = attr.completed_desc,
+			.bad_desc = attr.bad_desc_errors,
+			.exceed_max_chain = attr.exceed_max_chain,
+			.invalid_buffer = attr.invalid_buffer,
+			.errors = attr.error_cqes,
+		};
+	return 0;
+}
-- 
1.8.3.1



* [dpdk-dev] [PATCH 4/4] examples/vdpa: add statistics show command
  2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                   ` (2 preceding siblings ...)
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
@ 2020-04-02 11:26 ` Matan Azrad
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  4 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-04-02 11:26 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

A new vDPA driver feature was added to query the virtq statistics from
the HW.

Use this feature to show the HW queue statistics for the virtqs.

Command description: stats X Y.
X is the device ID.
Y is the queue ID; Y=0xffff shows the statistics of all the virtio
queues of device X. For example, "stats 0 0xffff" shows the statistics
of all the queues of device 0.

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 doc/guides/sample_app_ug/vdpa.rst |  3 +-
 examples/vdpa/main.c              | 78 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 745f196..d66a724 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -44,7 +44,8 @@ where
   1. help: show help message
   2. list: list all available vdpa devices
   3. create: create a new vdpa port with socket file and vdpa device address
-  4. quit: unregister vhost driver and exit the application
+  4. stats: show statistics of virtio queues
+  5. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index d2e2cb7..1b8a9fd 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -18,6 +18,7 @@
 #include <cmdline_parse.h>
 #include <cmdline_socket.h>
 #include <cmdline_parse_string.h>
+#include <cmdline_parse_num.h>
 #include <cmdline.h>
 
 #define MAX_PATH_LEN 128
@@ -240,6 +241,7 @@ static void cmd_help_parsed(__attribute__((unused)) void *parsed_result,
 		"    help                                      : Show interactive instructions.\n"
 		"    list                                      : list all available vdpa devices.\n"
 		"    create <socket file> <vdev addr>          : create a new vdpa port.\n"
+		"    stats <device ID> <virtio queue ID>       : show statistics of virtio queue, 0xffff for all.\n"
 		"    quit                                      : exit vdpa sample app.\n"
 	);
 }
@@ -363,6 +365,81 @@ static void cmd_create_vdpa_port_parsed(void *parsed_result,
 	},
 };
 
+/* *** STATS *** */
+struct cmd_stats_result {
+	cmdline_fixed_string_t stats;
+	uint16_t did;
+	uint16_t qid;
+};
+
+static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
+				    __attribute__((unused)) void *data)
+{
+	struct cmd_stats_result *res = parsed_result;
+	struct rte_vdpa_device *vdev = rte_vdpa_get_device(res->did);
+	struct rte_vdpa_queue_stats stats;
+	uint32_t first, last;
+
+	if (!vdev) {
+		RTE_LOG(ERR, VDPA, "Invalid device id %" PRIu16 ".\n",
+			res->did);
+		return;
+	}
+	if (res->qid == 0xFFFF) {
+		first = 0;
+		last = rte_vhost_get_vring_num(vports[res->did].vid);
+		if (last == 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get num of actual virtqs"
+				" for device id %d.\n", (int)res->did);
+			return;
+		}
+		last--;
+	} else {
+		first = res->qid;
+		last = res->qid;
+	}
+	cmdline_printf(cl, "\nDevice %d:\n", (int)res->did);
+	for (; first <= last; first++) {
+		memset(&stats, 0, sizeof(stats));
+		if (rte_vdpa_get_stats((int)res->did, (int)first, &stats)) {
+			RTE_LOG(ERR, VDPA, "Failed to get vdpa queue statistics"
+				" for device id %d qid %d.\n", (int)res->did,
+				(int)first);
+			return;
+		}
+		cmdline_printf(cl, "\tVirtq %" PRIu32
+			       ":\n\t\tReceived descriptors:   %16" PRIu64
+			       "\t\tCompleted descriptors:  %16" PRIu64
+			       "\n\t\tBad descriptors:        %16" PRIu32
+			       "\t\tExceed max chain:       %16" PRIu32
+			       "\n\t\tInvalid buffers:        %16" PRIu32
+			       "\t\tErrors:                 %16" PRIu32
+			       "\n", first, stats.received_desc,
+			       stats.completed_desc, stats.bad_desc,
+			       stats.exceed_max_chain, stats.invalid_buffer,
+			       stats.errors);
+	}
+}
+
+cmdline_parse_token_string_t cmd_device_stats_ =
+	TOKEN_STRING_INITIALIZER(struct cmd_stats_result, stats, "stats");
+cmdline_parse_token_num_t cmd_device_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, did, UINT16);
+cmdline_parse_token_num_t cmd_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, qid, UINT16);
+
+cmdline_parse_inst_t cmd_device_stats = {
+	.f = cmd_device_stats_parsed,
+	.data = NULL,
+	.help_str = "stats: show device statistics",
+	.tokens = {
+		(void *)&cmd_device_stats_,
+		(void *)&cmd_device_id,
+		(void *)&cmd_queue_id,
+		NULL,
+	},
+};
+
 /* *** QUIT *** */
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
@@ -392,6 +469,7 @@ static void cmd_quit_parsed(__attribute__((unused)) void *parsed_result,
 	(cmdline_parse_inst_t *)&cmd_help,
 	(cmdline_parse_inst_t *)&cmd_list_vdpa_devices,
 	(cmdline_parse_inst_t *)&cmd_create_vdpa_port,
+	(cmdline_parse_inst_t *)&cmd_device_stats,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	NULL,
 };
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
@ 2020-04-15 14:36   ` Maxime Coquelin
  2020-04-16  9:06     ` Matan Azrad
  0 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-04-15 14:36 UTC (permalink / raw)
  To: Matan Azrad, dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler

Hi Matan,

On 4/2/20 1:26 PM, Matan Azrad wrote:
> The vDPA device offloads the entire datapath of the vhost device to the HW
> device.
> 
> In order to expose traffic information to the user, this patch introduces a
> new API to get traffic statistics per virtio queue.
> 
> The statistics are taken directly from the vDPA driver managing the HW
> device.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  doc/guides/rel_notes/release_20_05.rst    |  4 +++
>  doc/guides/vdpadevs/features/default.ini  |  1 +
>  doc/guides/vdpadevs/features_overview.rst |  3 +++
>  lib/librte_vhost/rte_vdpa.h               | 45 ++++++++++++++++++++++++++++++-
>  lib/librte_vhost/rte_vhost_version.map    |  1 +
>  lib/librte_vhost/vdpa.c                   | 14 ++++++++++
>  6 files changed, 67 insertions(+), 1 deletion(-)

...

> diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
> index 9a3deb3..d6cbf48 100644
> --- a/lib/librte_vhost/rte_vdpa.h
> +++ b/lib/librte_vhost/rte_vdpa.h
> @@ -37,6 +37,27 @@ struct rte_vdpa_dev_addr {
>  	};
>  };
>  
> +struct rte_vdpa_queue_stats {
> +	/** Number of descriptors received by device */
> +	uint64_t received_desc;
> +	/** Number of descriptors completed by the device */
> +	uint64_t completed_desc;
> +	/** Number of bad descriptors received by device */
> +	uint32_t bad_desc;
> +	/**
> +	 * Number of chained descriptors received that exceed the max allowed
> +	 * chain by device
> +	 */
> +	uint32_t exceed_max_chain;
> +	/**
> +	 * Number of times device tried to read or write buffer that is not
> +	 * registered to the device
> +	 */
> +	uint32_t invalid_buffer;
> +	/** Number of errors detected by the device */
> +	uint32_t errors;
> +};
> +

I think that doing it like that, we risk keeping the rte_vdpa_get_stats
API experimental forever, as every vendor will want to add its own
counters and so break the ABI.

How about implementing something similar to rte_eth_xstat?
As these stats are for debugging purposes, it would give you much more
flexibility to add new counters as HW or firmware evolves.
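
For illustration, something along these lines (all names below are
hypothetical, just to sketch the idea):

    struct rte_vdpa_stat_name {
            char name[RTE_VDPA_STATS_NAME_SIZE]; /* e.g. "received_desc" */
    };

    struct rte_vdpa_stat {
            uint64_t id;    /* index into the names table */
            uint64_t value;
    };

    /* Both return the total number of counters and fill at most n
     * entries, like rte_eth_xstats_get() does. */
    int rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *names,
                                 unsigned int n);
    int rte_vdpa_get_queue_stats(int did, uint16_t qid,
                                 struct rte_vdpa_stat *stats, unsigned int n);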

What do you think?

Thanks,
Maxime

>  /**
>   * vdpa device operations
>   */
> @@ -73,8 +94,11 @@ struct rte_vdpa_dev_ops {
>  	int (*get_notify_area)(int vid, int qid,
>  			uint64_t *offset, uint64_t *size);
>  
> +	/** Get statistics of the queue */
> +	int (*get_stats)(int did, int qid, struct rte_vdpa_queue_stats *stats);
> +
>  	/** Reserved for future extension */
> -	void *reserved[5];
> +	void *reserved[4];
>  };
>  
>  /**
> @@ -200,4 +224,23 @@ struct rte_vdpa_device *
>  __rte_experimental
>  int
>  rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Get vDPA device queue statistics.
> + *
> + * @param did
> + *  device id
> + * @param qid
> + *  queue id
> + * @param stats
> + *  queue statistics pointer.
> + * @return
> + *  0 on success, non-zero on failure.
> + */
> +__rte_experimental
> +int
> +rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats);
>  #endif /* _RTE_VDPA_H_ */
> diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
> index 051d08c..c9dcff4 100644
> --- a/lib/librte_vhost/rte_vhost_version.map
> +++ b/lib/librte_vhost/rte_vhost_version.map
> @@ -38,6 +38,7 @@ EXPERIMENTAL {
>  	rte_vdpa_find_device_id;
>  	rte_vdpa_get_device;
>  	rte_vdpa_get_device_num;
> +	rte_vdpa_get_stats;
>  	rte_vhost_driver_attach_vdpa_device;
>  	rte_vhost_driver_detach_vdpa_device;
>  	rte_vhost_driver_get_vdpa_device_id;
> diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
> index 2b86708..57900fc 100644
> --- a/lib/librte_vhost/vdpa.c
> +++ b/lib/librte_vhost/vdpa.c
> @@ -227,3 +227,17 @@ struct rte_vdpa_device *
>  		free_ind_table(idesc);
>  	return -1;
>  }
> +
> +int
> +rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_queue_stats *stats)
> +{
> +	struct rte_vdpa_device *vdpa_dev;
> +
> +	vdpa_dev = rte_vdpa_get_device(did);
> +	if (!vdpa_dev)
> +		return -ENODEV;
> +
> +	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
> +
> +	return vdpa_dev->ops->get_stats(did, qid, stats);
> +}
> 



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-15 14:36   ` Maxime Coquelin
@ 2020-04-16  9:06     ` Matan Azrad
  2020-04-16 13:19       ` Maxime Coquelin
  0 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-04-16  9:06 UTC (permalink / raw)
  To: Maxime Coquelin, dev; +Cc: Slava Ovsiienko, Shahaf Shuler


Hi Maxime

Can you point to a specific vendor-specific counter I suggested?
I think all of them come directly from virtio protocols.


Get Outlook for Android <https://aka.ms/ghei36>


* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-16  9:06     ` Matan Azrad
@ 2020-04-16 13:19       ` Maxime Coquelin
  2020-04-19  6:18         ` Shahaf Shuler
  0 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-04-16 13:19 UTC (permalink / raw)
  To: Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko, Shahaf Shuler

Hi Matan,

On 4/16/20 11:06 AM, Matan Azrad wrote:
> Hi Maxime
> 
> Can you point to a specific vendor-specific counter I suggested?

No, I can't, but I think we can expect that other vendors may have other
counters they would be interested to dump.

Maybe Intel has some counters in the IFC that they could dump.
Xiao, any thoughts?

> I think all of them come directly from virtio protocols.

exceed_max_chain, for example. Doesn't the spec specify that a
descriptor chain can be as long as the size of the virtqueue?

Here it seems to indicate the device could support less.

Also, as the spec evolves, we may have new counters that come up,
so better to have something flexible from the start IMHO to avoid ABI
breakages.

Maybe we can have some common xstats names for the Virtio related
counters defined in the vdpa lib, and then the vendors can specify more
vendor-specific counters if they wish?

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-16 13:19       ` Maxime Coquelin
@ 2020-04-19  6:18         ` Shahaf Shuler
  2020-04-20  7:13           ` Maxime Coquelin
  0 siblings, 1 reply; 32+ messages in thread
From: Shahaf Shuler @ 2020-04-19  6:18 UTC (permalink / raw)
  To: Maxime Coquelin, Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko

Thursday, April 16, 2020 4:20 PM, Maxime Coquelin:
> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
> 
> Hi Matan,
> 
> On 4/16/20 11:06 AM, Matan Azrad wrote:
> > Hi Maxime
> >
> > Can you point to a specific vendor-specific counter I suggested?
> 
> No, I can't, but I think we can expect that other vendors may have other
> counters they would be interested to dump.
> 
> Maybe Intel has some counters in the IFC that they could dump.
> Xiao, any thoughts?
> 
> > I think all of them come directly from virtio protocols.
> 
> exceed_max_chain, for example. Doesn't the spec specify that a descriptor
> chain can be as long as the size of the virtqueue?
> 
> Here it seems to indicate the device could support less.

The spec allows the device to limit the max supported chain (see [1]).

> 
> Also, as the spec evolves, we may have new counters that come up, so
> better to have something flexible from the start IMHO to avoid ABI
> breakages.

I think there are better ways to address that (roughly sketched below), e.g.:
1. have some reserved fields for future use
2. have the option to point to a next item, and by that link a chain of stat structures
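
Roughly like this (illustrative only, the names are made up):

    struct rte_vdpa_queue_stats_ext {
            uint32_t id;                           /* vendor counter id */
            uint64_t value;
            struct rte_vdpa_queue_stats_ext *next; /* option 2: chain */
    };

    struct rte_vdpa_queue_stats {
            uint64_t received_desc;
            uint64_t completed_desc;
            uint32_t bad_desc;
            uint32_t exceed_max_chain;
            uint32_t invalid_buffer;
            uint32_t errors;
            uint64_t reserved[8];                  /* option 1: spare room */
            struct rte_vdpa_queue_stats_ext *ext;  /* option 2 */
    };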

> 
> Maybe we can have some common xstats names for the Virtio related
> counters defined in the vdpa lib, and then the vendors can specify more vendor-
> specific counters if they wish?

xstats are good, and we should have them to expose the vendor-specific counters. The basic counters, though, should be simple and vendor-agnostic so that any SW/scripting layer on top of DPDK can easily use and expose them.
Hence I think it will be good to have the basic counters with a well-defined stats structure as part of the vdpa stats API. Is exceed_max_chain the only counter you find odd, or are there more?

> 
> Thanks,
> Maxime

[1]
https://github.com/oasis-tcs/virtio-spec/blob/master/packed-ring.tex#L498



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-19  6:18         ` Shahaf Shuler
@ 2020-04-20  7:13           ` Maxime Coquelin
  2020-04-20 15:57             ` Shahaf Shuler
  0 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-04-20  7:13 UTC (permalink / raw)
  To: Shahaf Shuler, Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko

Hi Shahaf,

On 4/19/20 8:18 AM, Shahaf Shuler wrote:
> Thursday, April 16, 2020 4:20 PM, Maxime Coquelin:
>> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
>>
>> Hi Matan,
>>
>> On 4/16/20 11:06 AM, Matan Azrad wrote:
>>> Hi Maxime
>>>
>>> Can you point to a specific vendor-specific counter I suggested?
>>
>> No, I can't, but I think we can expect that other vendors may have other
>> counters they would be interested to dump.
>>
>> Maybe Intel has some counters in the IFC that they could dump.
>> Xiao, any thoughts?
>>
>>> I think all of them come directly from virtio protocols.
>>
>> exceed_max_chain, for example. Doesn't the spec specify that a descriptor
>> chain can be as long as the size of the virtqueue?
>>
>> Here it seems to indicate the device could support less.
> 
> The spec allows the device to limit the max supported chain (see [1]).

Ha, OK, I missed that. Please note that this is only allowed for the packed
ring; it is not in the split ring part.

>>
>> Also, as the spec evolves, we may have new counters that come up, so
>> better to have something flexible from the start IMHO to avoid ABI
>> breakages.
> 
> I think there are better ways to address that, e.g.:
> 1. have some reserved fields for future use
> 2. have the option to point to a next item, and by that link a chain of stat structures
> 
>>
>> Maybe we can have some common xstats names for the Virtio related
>> counters defined in the vdpa lib, and then the vendors can specify more
>> specific counters if they wish?
> 
> xstats are good, and we should have them to expose the vendor-specific counters. The basic counters, though, should be simple and vendor-agnostic so that any SW/scripting layer on top of DPDK can easily use and expose them.
> Hence I think it will be good to have the basic counters with a well-defined stats structure as part of the vdpa stats API. Is exceed_max_chain the only counter you find odd, or are there more?

The problem is that not all vDPA NICs will implement these counters, so
with only the proposed implementation, the user cannot know whether a
counter value is 0 or the counter is just not implemented. For example,
the Virtio specification does not specify counters, so a full Virtio HW
offload device won't have them.

So I think the xstat approach is the right thing to do, with standardized
names for the standard counters.
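
E.g. something like this in the vdpa lib (illustrative only), with
drivers free to append vendor-specific names after the standard ones:

    /* A driver reports only the counters it implements, so a missing
     * counter stays distinguishable from a zero value. */
    static const char * const rte_vdpa_std_stat_names[] = {
            "received_desc",
            "completed_desc",
            "bad_desc",
            "exceed_max_chain",
            "invalid_buffer",
            "errors",
    };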

Regards,
Maxime
>>
>> Thanks,
>> Maxime
> 
> [1]
> https://github.com/oasis-tcs/virtio-spec/blob/master/packed-ring.tex#L498
> 



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-20  7:13           ` Maxime Coquelin
@ 2020-04-20 15:57             ` Shahaf Shuler
  2020-04-20 16:18               ` Maxime Coquelin
  0 siblings, 1 reply; 32+ messages in thread
From: Shahaf Shuler @ 2020-04-20 15:57 UTC (permalink / raw)
  To: Maxime Coquelin, Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko

Monday, April 20, 2020 10:13 AM, Maxime Coquelin:
> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
> 
> Hi Shahaf,
> 
> On 4/19/20 8:18 AM, Shahaf Shuler wrote:
> > Thursday, April 16, 2020 4:20 PM, Maxime Coquelin:
> >> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue
> >> stats
> >>
> >> Hi Matan,
> >>
> >> On 4/16/20 11:06 AM, Matan Azrad wrote:
> >>> Hi Maxime
> >>>
> >>> Can you point to a specific vendor-specific counter I suggested?
> >>
> >> No, I can't, but I think we can expect that other vendors may have
> >> other counters they would be interested to dump.
> >>
> >> Maybe Intel has some counters in the IFC that they could dump.
> >> Xiao, any thoughts?
> >>
> >>> I think all of them come directly from virtio protocols.
> >>
> >> exceed_max_chain, for example. Doesn't the spec specify that a
> >> descriptor chain can be as long as the size of the virtqueue?
> >>
> >> Here it seems to indicate the device could support less.
> >
> > The spec allows the device to limit the max supported chain (see [1]).
> 
> Ha, OK, I missed that. Please note that this is only allowed for the packed
> ring; it is not in the split ring part.

On my version of the spec (csprd01) it is also for split; however, it was removed in the latest version, not sure why.

> 
> >>
> >> Also, as the spec evolves, we may have new counters that come up, so
> >> better to have something flexible from the start IMHO to avoid ABI
> >> breakages.
> >
> > I think there are better ways to address that, e.g.:
> > 1. have some reserved fields for future use
> > 2. have the option to point to a next item, and by that link a chain of stat structures
> >
> >>
> >> Maybe we can have some common xstats names for the Virtio related
> >> counters defined in the vdpa lib, and then the vendors can specify more
> >> vendor- specific counters if they wish?
> >
> > xstats are good, and we should have them to expose the vendor-specific
> > counters. The basic counters, though, should be simple and vendor-agnostic
> > so that any SW/scripting layer on top of DPDK can easily use and expose
> > them.
> > Hence I think it will be good to have the basic counters with a well-defined
> > stats structure as part of the vdpa stats API. Is exceed_max_chain the
> > only counter you find odd, or are there more?
> 
> The problem is that not all vDPA NICs will implement these counters, so
> with only the proposed implementation, the user cannot know whether a
> counter value is 0 or the counter is just not implemented. For example,
> the Virtio specification does not specify counters, so a full Virtio HW
> offload device won't have them.

Yeah, a full virtio emulated device is a good example.
I think it is odd that virtio doesn't provide any statistics, e.g. how would the netdev on top of the virtio device report anything? How will people debug?
I think sooner or later we will need a way to expose stats.

> 
> So I think the xstat approach is the right thing to do, with standardized
> names for the standard counters.

Yeah, I tend to agree on this one due to the lack of statistics in the spec.

> 
> Regards,
> Maxime
> >>
> >> Thanks,
> >> Maxime
> >
> > [1]
> > https://github.com/oasis-tcs/virtio-spec/blob/master/packed-ring.tex#L498
> >



* Re: [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
  2020-04-20 15:57             ` Shahaf Shuler
@ 2020-04-20 16:18               ` Maxime Coquelin
  2020-04-21  5:02                 ` Shahaf Shuler
  0 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-04-20 16:18 UTC (permalink / raw)
  To: Shahaf Shuler, Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko



On 4/20/20 5:57 PM, Shahaf Shuler wrote:
> Monday, April 20, 2020 10:13 AM, Maxime Coquelin:
>> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue stats
>>
>> Hi Shahaf,
>>
>> On 4/19/20 8:18 AM, Shahaf Shuler wrote:
>>> Thursday, April 16, 2020 4:20 PM, Maxime Coquelin:
>>>> Subject: Re: [PATCH 1/4] vhost: introduce operation to get vDPA queue
>>>> stats
>>>>
>>>> Hi Matan,
>>>>
>>>> On 4/16/20 11:06 AM, Matan Azrad wrote:
>>>>> Hi Maxime
>>>>>
>>>>> Can you point to a specific vendor-specific counter I suggested?
>>>>
>>>> No, I can't, but I think we can expect that other vendors may have
>>>> other counters they would be interested to dump.
>>>>
>>>> Maybe Intel has some counters in the IFC that they could dump.
>>>> Xiao, any thoughts?
>>>>
>>>>> I think all of them come directly from virtio protocols.
>>>>
>>>> exceed_max_chain, for example. Doesn't the spec specify that a
>>>> descriptor chain can be as long as the size of the virtqueue?
>>>>
>>>> Here it seems to indicate the device could support less.
>>>
>>> Spec allows device to limit the max supported chain (see [1]).
>>
>> Ah OK, I missed that. Please note that this is only allowed for the packed
>> ring; it is not in the split ring part.
> 
> In my version of the spec (csprd01) it is also allowed for split rings; however, it was removed in the latest version, not sure why.

Problem is that older drivers may assume max chain size is the virtio
ring size.

By the way, how is the guest Virtio driver made aware of the max
chain size? Isn't that missing in the spec?

>>
>>>>
> >>>> Also, as the spec evolves, we may have new counters that come up, so
>>>> better to have something flexible from the start IMHO to avoid ABI
>>>> breakages.
>>>
>>> I think there are better ways to address that, e.g.:
> >>> 1. have some reserved fields for the future 2. have the option to point to
> >>> the next item, and thereby link a chain of stat structures
>>>
>>>>
>>>> Maybe we can have some common xstats names for the Virtio related
>>>> counters define in vdpa lib, and then the vendors can specify more
>>>> vendor- specific counters if they wish?
>>>
>>> xstats are good, and we should have them to expose the vendor-specific
>> counters. The basic counters, though, should be simple and vendor agnostic
>> so that any SW/scripting layer on top of DPDK can easily use and expose
>> them.
>>> Hence I think it will be good to have the basic counters with a well-defined
>> stats structure as part of the vdpa stats API. Is exceed_max_chain the only
>> counter you find odd, or are there more?
>>
>> Problem is that not all vDPA NICs will implement these counters, so with
>> only the proposed implementation, the user cannot know whether the counter
>> value is 0 or the counter is just not implemented. For example, the Virtio
>> specification does not specify counters, so a full Virtio HW offload device
>> won't have them.
> 
> Yeah, a fully emulated virtio device is a good example.
> I think it is odd that virtio doesn't provide any statistics, e.g. how would the netdev on top of the virtio device report anything? How will people debug?
> I think sooner or later we will need a way to expose stats.

I agree.
These missing counters were not blocking when using a SW backend, but
with HW offload I agree such counters would be welcome.

>>
>> So I think the xstat is the right thing to do, with standardized names for the
>> standard  counters.
> 
> Yeah, I tend to agree on this one due to the lack of statistics in the spec.
> 
>>
>> Regards,
>> Maxime
>>>>
>>>> Thanks,
>>>> Maxime
>>>
>>> [1] https://github.com/oasis-tcs/virtio-spec/blob/master/packed-ring.tex#L498
>>>
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] vhost: inroduce operation to get vDPA queue stats
  2020-04-20 16:18               ` Maxime Coquelin
@ 2020-04-21  5:02                 ` Shahaf Shuler
  0 siblings, 0 replies; 32+ messages in thread
From: Shahaf Shuler @ 2020-04-21  5:02 UTC (permalink / raw)
  To: Maxime Coquelin, Matan Azrad, dev, Xiao Wang; +Cc: Slava Ovsiienko

Monday, April 20, 2020 7:19 PM, Maxime Coquelin:
> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> Subject: Re: [PATCH 1/4] vhost: inroduce operation to get vDPA queue stats
> 
> 
> 
> On 4/20/20 5:57 PM, Shahaf Shuler wrote:
> > Monday, April 20, 2020 10:13 AM, Maxime Coquelin:
> >> Subject: Re: [PATCH 1/4] vhost: inroduce operation to get vDPA queue
> >> stats
> >>
> >> Hi Shahaf,
> >>
> >> On 4/19/20 8:18 AM, Shahaf Shuler wrote:
> >>> Thursday, April 16, 2020 4:20 PM, Maxime Coquelin:
> >>>> Subject: Re: [PATCH 1/4] vhost: inroduce operation to get vDPA
> >>>> queue stats
> >>>>
> >>>> Hi Matan,
> >>>>
> >>>> On 4/16/20 11:06 AM, Matan Azrad wrote:
> >>>>> Hi Maxime
> >>>>>
> >>>>> Can you point to a specific vendor-specific counter I suggested?
> >>>>
> >>>> No, I can't, but I think we can expect that other vendors may have
> >>>> other counters they would be interested to dump.
> >>>>
> >>>> Maybe Intel has some counters in the IFC that they could dump.
> >>>> Xiao, any thoughts?
> >>>>
> >>>>> I think all of them come directly from virtio protocols.
> >>>>
> >>>> exceed_max_chain, for example. Doesn't the spec specify that a
> >>>> descriptor chain can be as long as the size of the virtqueue?
> >>>>
> >>>> Here it seems to indicate the device could support less.
> >>>
> >>> Spec allows device to limit the max supported chain (see [1]).
> >>
> >> Ah OK, I missed that. Please note that this is only allowed for the
> >> packed ring; it is not in the split ring part.
> >
> > In my version of the spec (csprd01) it is also allowed for split rings;
> however, it was removed in the latest version, not sure why.

Actually, also for split queues there is this vague paragraph:
https://github.com/oasis-tcs/virtio-spec/blob/master/split-ring.tex#L138

> 
> Problem is that older drivers may assume max chain size is the virtio ring size.
> 
> By the way, how is the guest Virtio driver made aware of the max chain size?
> Isn't that missing in the spec?

Spec gap. It doesn't say how the device reports it, nor how the device should fail and report an error.

> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics
  2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                   ` (3 preceding siblings ...)
  2020-04-02 11:26 ` [dpdk-dev] [PATCH 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-05-05 15:54 ` Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 1/4] vhost: inroduce operation to get vDPA queue stats Matan Azrad
                     ` (5 more replies)
  4 siblings, 6 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-05 15:54 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

The vDPA device offloads all the datapath of the vhost device to the HW device.

In order to expose traffic information to the user, this series introduces new APIs to get traffic statistics and to reset them per virtio queue.

Since there are no formal statistics defined by the virtio spec, this API doesn't define them and lets the drivers decide which statistics are worth reporting.

The statistics are taken directly from the vDPA driver managing the HW device.

Support for it is also added in the vdpa/mlx5 driver and in the vdpa example application.

V2:
Moved to per-driver statistics according to the discussion between me, Maxime and Shahaf on V1.


Matan Azrad (4):
  vhost: inroduce operation to get vDPA queue stats
  common/mlx5: support DevX virtq stats operations
  vdpa/mlx5: support virtio queue statistics get
  examples/vdpa: add statistics show command

 doc/guides/rel_notes/release_20_05.rst          |   5 +
 doc/guides/sample_app_ug/vdpa.rst               |   3 +-
 doc/guides/vdpadevs/features/default.ini        |   1 +
 doc/guides/vdpadevs/features/mlx5.ini           |   1 +
 doc/guides/vdpadevs/features_overview.rst       |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            |  43 +++++++++
 drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |   7 ++
 drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  90 ++++++++++++++++++
 examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
 lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map          |   3 +
 lib/librte_vhost/vdpa.c                         |  47 ++++++++++
 16 files changed, 663 insertions(+), 3 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v2 1/4] vhost: inroduce operation to get vDPA queue stats
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
@ 2020-05-05 15:54   ` Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-05 15:54 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

The vDPA device offloads all the datapath of the vhost device to the HW
device.

In order to expose traffic information to the user, this patch
introduces 3 new APIs to get traffic statistics, to get the device
statistics names and to reset the statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW
device and can be different for each vendor driver.
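
As a minimal usage sketch (not part of the patch; it assumes a valid
device id 'did' and queue id 'qid', and error handling is trimmed), an
application could consume the three APIs like this:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_vdpa.h>

static void
dump_queue_stats(int did, uint16_t qid)
{
	struct rte_vdpa_stat_name *names;
	struct rte_vdpa_stat *stats;
	int n, i;

	/* First call with NULL to learn how many counters the driver has. */
	n = rte_vdpa_get_stats_names(did, NULL, 0);
	if (n <= 0)
		return;
	names = calloc(n, sizeof(*names));
	stats = calloc(n, sizeof(*stats));
	if (names == NULL || stats == NULL)
		goto out;
	if (rte_vdpa_get_stats_names(did, names, n) != n)
		goto out;
	if (rte_vdpa_get_stats(did, qid, stats, n) <= 0)
		goto out;
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[stats[i].id].name, stats[i].value);
	/* Optionally open a fresh measurement window. */
	rte_vdpa_reset_stats(did, qid);
out:
	free(names);
	free(stats);
}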

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/rel_notes/release_20_05.rst    |   5 ++
 doc/guides/vdpadevs/features/default.ini  |   1 +
 doc/guides/vdpadevs/features_overview.rst |   3 +
 lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map    |   3 +
 lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
 6 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index a5ba8a4..b7d8e86 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -213,6 +213,11 @@ New Features
   * Added IPsec inbound load-distribution support for ipsec-secgw application
     using NIC load distribution feature(Flow Director).
 
+* **Added vDPA device APIs to query virtio queue statistics.**
+
+  Three new APIs have been added to query virtio queue statistics, to get
+  their names and to reset them for a vDPA device.
+
 
 Removed Items
 -------------
diff --git a/doc/guides/vdpadevs/features/default.ini b/doc/guides/vdpadevs/features/default.ini
index 518e4f1..2c122a3 100644
--- a/doc/guides/vdpadevs/features/default.ini
+++ b/doc/guides/vdpadevs/features/default.ini
@@ -37,6 +37,7 @@ proto rarp           =
 proto reply ack      =
 proto host notifier  =
 proto pagefault      =
+queue statistics     =
 BSD nic_uio          =
 Linux VFIO           =
 Other kdrv           =
diff --git a/doc/guides/vdpadevs/features_overview.rst b/doc/guides/vdpadevs/features_overview.rst
index eb7eb3b..930bc87 100644
--- a/doc/guides/vdpadevs/features_overview.rst
+++ b/doc/guides/vdpadevs/features_overview.rst
@@ -96,6 +96,9 @@ proto host notifier
 proto pagefault
   Slave expose page-fault FD for migration process.
 
+queue statistics
+  Support virtio queue statistics query.
+
 BSD nic_uio
   BSD ``nic_uio`` module supported.
 
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index 3c400ee..ecb3d91 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -37,6 +37,34 @@ struct rte_vdpa_dev_addr {
 	};
 };
 
+/** Maximum name length for statistics counters */
+#define RTE_VDPA_STATS_NAME_SIZE 64
+
+/**
+ * A vDPA device statistic structure
+ *
+ * This structure is used by rte_vdpa_stats_get() to provide
+ * statistics from the HW vDPA device.
+ *
+ * It maps a name id, corresponding to an index in the array returned
+ * by rte_vdpa_get_stats_names, to a statistic value.
+ */
+struct rte_vdpa_stat {
+	uint64_t id;        /**< The index in stats name array */
+	uint64_t value;     /**< The statistic counter value */
+};
+
+/**
+ * A name element for statistics
+ *
+ * An array of this structure is returned by rte_vdpa_get_stats_names
+ * It lists the names of extended statistics for a PMD. The rte_vdpa_stat
+ * structure references these names by their array index
+ */
+struct rte_vdpa_stat_name {
+	char name[RTE_VDPA_STATS_NAME_SIZE]; /**< The statistic name */
+};
+
 /**
  * vdpa device operations
  */
@@ -73,8 +101,19 @@ struct rte_vdpa_dev_ops {
 	int (*get_notify_area)(int vid, int qid,
 			uint64_t *offset, uint64_t *size);
 
+	/** Get statistics name */
+	int (*get_stats_names)(int did, struct rte_vdpa_stat_name *stats_names,
+			       unsigned int size);
+
+	/** Get statistics of the queue */
+	int (*get_stats)(int did, int qid, struct rte_vdpa_stat *stats,
+			 unsigned int n);
+
+	/** Reset statistics of the queue */
+	int (*reset_stats)(int did, int qid);
+
 	/** Reserved for future extension */
-	void *reserved[5];
+	void *reserved[2];
 };
 
 /**
@@ -200,4 +239,78 @@ struct rte_vdpa_device *
 __rte_experimental
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve names of statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param stats_names
+ *   array of at least size elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in stats_names array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats name array.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @param stats
+ *   A pointer to a table of structure of type rte_vdpa_stat to be filled with
+ *   device statistics ids and values.
+ * @param n
+ *   The number of elements in stats array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats table.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics of a vDPA device.
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @return
+ *   0 on success, a negative value on error.
+ */
+__rte_experimental
+int
+rte_vdpa_reset_stats(int did, uint16_t qid);
 #endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 051d08c..3a2f7df 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -38,6 +38,9 @@ EXPERIMENTAL {
 	rte_vdpa_find_device_id;
 	rte_vdpa_get_device;
 	rte_vdpa_get_device_num;
+	rte_vdpa_get_stats_names;
+	rte_vdpa_get_stats;
+	rte_vdpa_reset_stats;
 	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_get_vdpa_device_id;
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
index b2b2a10..1f5cdcd 100644
--- a/lib/librte_vhost/vdpa.c
+++ b/lib/librte_vhost/vdpa.c
@@ -227,3 +227,50 @@ struct rte_vdpa_device *
 		free_ind_table(idesc);
 	return -1;
 }
+
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats_names, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats_names(did, stats_names, size);
+}
+
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	if (!stats || !n)
+		return -EINVAL;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats(did, qid, stats, n);
+}
+
+int
+rte_vdpa_reset_stats(int did, uint16_t qid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->reset_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->reset_stats(did, qid);
+}
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v2 2/4] common/mlx5: support DevX virtq stats operations
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 1/4] vhost: inroduce operation to get vDPA queue stats Matan Azrad
@ 2020-05-05 15:54   ` Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-05 15:54 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

Add DevX API to create and query virtio queue statistics from the HW.
The following counters are supported by the HW per virtio queue:
	received_desc.
	completed_desc.
	error_cqes.
	bad_desc_errors.
	exceed_max_chain.
	invalid_buffer.
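
A rough sketch of how a driver is expected to wire these commands
together (simplified from patch 3/4 of this series; it assumes a valid
DevX context 'ctx', the mlx5 common headers and rte_errno.h, so treat
it as an illustration rather than a complete implementation):

static int
virtq_counters_example(void *ctx, struct mlx5_devx_virtq_attr *attr)
{
	struct mlx5_devx_obj *counters;
	struct mlx5_devx_virtio_q_couners_attr vals = { 0 };

	counters = mlx5_devx_cmd_create_virtio_q_counters(ctx);
	if (counters == NULL)
		return -rte_errno; /* rte_errno is set by the command */
	/* Bind the counter set to the virtq before creating it. */
	attr->counters_obj_id = counters->id;
	/* ... create the virtq with *attr and let traffic run ... */
	if (mlx5_devx_cmd_query_virtio_q_counters(counters, &vals) == 0)
		DRV_LOG(INFO, "received_desc=%" PRIu64 ".",
			vals.received_desc);
	/* After the virtq is destroyed: */
	claim_zero(mlx5_devx_cmd_destroy(counters));
	return 0;
}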

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            | 43 +++++++++++++++
 drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |  7 +++
 4 files changed, 148 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 67c8a8c..c8fc4bb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -460,6 +460,9 @@ struct mlx5_devx_obj *
 	attr->vdpa.valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
 					 general_obj_types) &
 			      MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q);
+	attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+							general_obj_types) &
+				  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
 	if (attr->qos.sup) {
 		MLX5_SET(query_hca_cap_in, in, op_mod,
 			 MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1255,6 +1258,7 @@ struct mlx5_devx_obj *
 	MLX5_SET(virtio_q, virtctx, umem_3_id, attr->umems[2].id);
 	MLX5_SET(virtio_q, virtctx, umem_3_size, attr->umems[2].size);
 	MLX5_SET64(virtio_q, virtctx, umem_3_offset, attr->umems[2].offset);
+	MLX5_SET(virtio_q, virtctx, counter_set_id, attr->counters_obj_id);
 	MLX5_SET(virtio_net_q, virtq, tisn_or_qpn, attr->tis_id);
 	virtq_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
 						    sizeof(out));
@@ -1529,3 +1533,72 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
+						       sizeof(*couners_obj), 0);
+	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
+
+	if (!couners_obj) {
+		DRV_LOG(ERR, "Failed to allocate virtio queue counters data.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+						      sizeof(out));
+	if (!couners_obj->obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
+			" DevX.");
+		rte_free(couners_obj);
+		return NULL;
+	}
+	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+	return couners_obj;
+}
+
+int
+mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				   struct mlx5_devx_virtio_q_couners_attr *attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0};
+	void *hdr = MLX5_ADDR_OF(query_virtio_q_counters_out, in, hdr);
+	void *virtio_q_counters = MLX5_ADDR_OF(query_virtio_q_counters_out, out,
+					       virtio_q_counters);
+	int ret;
+
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id);
+	ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out,
+					sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to query virtio q counters using DevX.");
+		rte_errno = errno;
+		return -errno;
+	}
+	attr->received_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					 received_desc);
+	attr->completed_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					  completed_desc);
+	attr->error_cqes = MLX5_GET(virtio_q_counters, virtio_q_counters,
+				    error_cqes);
+	attr->bad_desc_errors = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					 bad_desc_errors);
+	attr->exceed_max_chain = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					  exceed_max_chain);
+	attr->invalid_buffer = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					invalid_buffer);
+	return ret;
+}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f7802e6..60db6da 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -64,6 +64,7 @@ struct mlx5_hca_vdpa_attr {
 	uint32_t event_mode:3;
 	uint32_t log_doorbell_stride:5;
 	uint32_t log_doorbell_bar_size:5;
+	uint32_t queue_counters_valid:1;
 	uint32_t max_num_virtio_queues;
 	struct {
 		uint32_t a;
@@ -270,6 +271,7 @@ struct mlx5_devx_virtq_attr {
 	uint32_t qp_id;
 	uint32_t queue_index;
 	uint32_t tis_id;
+	uint32_t counters_obj_id;
 	uint64_t dirty_bitmap_addr;
 	uint64_t type;
 	uint64_t desc_addr;
@@ -298,6 +300,15 @@ struct mlx5_devx_qp_attr {
 	uint64_t wq_umem_offset;
 };
 
+struct mlx5_devx_virtio_q_couners_attr {
+	uint64_t received_desc;
+	uint64_t completed_desc;
+	uint32_t error_cqes;
+	uint32_t bad_desc_errors;
+	uint32_t exceed_max_chain;
+	uint32_t invalid_buffer;
+};
+
 /* mlx5_devx_cmds.c */
 
 struct mlx5_devx_obj *mlx5_devx_cmd_flow_counter_alloc(struct ibv_context *ctx,
@@ -348,5 +359,37 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
 				  uint32_t qp_st_mod_op, uint32_t remote_qp_id);
 int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			     struct mlx5_devx_rqt_attr *rqt_attr);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create virtio queue counters object DevX API.
+ *
+ * @param[in] ctx
+ *   Device context.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Query virtio queue counters object using DevX API.
+ *
+ * @param[in] couners_obj
+ *   Pointer to virtq object structure.
+ * @param [in/out] attr
+ *   Pointer to virtio queue counters attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				  struct mlx5_devx_virtio_q_couners_attr *attr);
 
 #endif /* RTE_PMD_MLX5_DEVX_CMDS_H_ */
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 4ab1c75..eefaf26 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -947,6 +947,7 @@ enum {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
+	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
 };
 
 enum {
@@ -2003,6 +2004,7 @@ struct mlx5_ifc_create_cq_in_bits {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
+	MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 };
 
 struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2021,6 +2023,27 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits {
 	u8 reserved_at_60[0x20];
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_40[0x40];
+	u8 received_desc[0x40];
+	u8 completed_desc[0x40];
+	u8 error_cqes[0x20];
+	u8 bad_desc_errors[0x20];
+	u8 exceed_max_chain[0x20];
+	u8 invalid_buffer[0x20];
+	u8 reserved_at_180[0x50];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
 enum {
 	MLX5_VIRTQ_STATE_INIT = 0,
 	MLX5_VIRTQ_STATE_RDY = 1,
@@ -2061,7 +2084,8 @@ struct mlx5_ifc_virtio_q_bits {
 	u8 umem_3_id[0x20];
 	u8 umem_3_size[0x20];
 	u8 umem_3_offset[0x40];
-	u8 reserved_at_300[0x100];
+	u8 counter_set_id[0x20];
+	u8 reserved_at_320[0xe0];
 };
 
 struct mlx5_ifc_virtio_net_q_bits {
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index b58a378..8c2d224 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -76,3 +76,10 @@ EXPERIMENTAL {
 	mlx5_mr_create_primary;
 	mlx5_mr_flush_local_cache;
 };
+
+EXPERIMENTAL {
+	global:
+
+	mlx5_devx_cmd_create_virtio_q_counters;
+	mlx5_devx_cmd_query_virtio_q_counters;
+};
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v2 3/4] vdpa/mlx5: support virtio queue statistics get
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 1/4] vhost: inroduce operation to get vDPA queue stats Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-05-05 15:54   ` Matan Azrad
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 4/4] examples/vdpa: add statistics show command Matan Azrad
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-05 15:54 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

Add support for statistics operations.

A DevX counter object is allocated per virtq in order to manage the
virtq statistics.

The counter object is allocated before the virtq creation and destroyed
after the virtq is destroyed, so the statistics are valid only during
the lifetime of the virtq.
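
In other words, the per-virtq ordering is (a simplified pseudo-flow,
names as in this patch):

	/* setup */
	counters = mlx5_devx_cmd_create_virtio_q_counters(priv->ctx);
	attr.counters_obj_id = counters->id;
	/* ... create the virtq ... */

	/* teardown */
	/* ... destroy the virtq ... */
	claim_zero(mlx5_devx_cmd_destroy(counters));

Note also that "reset" is implemented in SW: the current HW values are
snapshotted into virtq->reset, and later queries report the delta
against that snapshot.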

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/vdpadevs/features/mlx5.ini |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa.c         | 85 +++++++++++++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h         | 45 ++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c   | 90 +++++++++++++++++++++++++++++++++++
 4 files changed, 221 insertions(+)

diff --git a/doc/guides/vdpadevs/features/mlx5.ini b/doc/guides/vdpadevs/features/mlx5.ini
index 1da9c1b..788d4e0 100644
--- a/doc/guides/vdpadevs/features/mlx5.ini
+++ b/doc/guides/vdpadevs/features/mlx5.ini
@@ -17,6 +17,7 @@ packed               = Y
 proto mq             = Y
 proto log shmfd      = Y
 proto host notifier  = Y
+queue statistics     = Y
 Other kdrv           = Y
 ARMv8                = Y
 Power8               = Y
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 1113d6c..a80e3f4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -8,6 +8,7 @@
 #include <rte_errno.h>
 #include <rte_bus_pci.h>
 #include <rte_pci.h>
+#include <rte_string_fns.h>
 
 #include <mlx5_glue.h>
 #include <mlx5_common.h>
@@ -274,6 +275,85 @@
 	return 0;
 }
 
+static int
+mlx5_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			  unsigned int size)
+{
+	static const char *mlx5_vdpa_stats_names[MLX5_VDPA_STATS_MAX] = {
+		"received_descriptors",
+		"completed_descriptors",
+		"bad descriptor errors",
+		"exceed max chain",
+		"invalid buffer",
+		"completion errors",
+	};
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+	unsigned int i;
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!stats_names)
+		return MLX5_VDPA_STATS_MAX;
+	size = RTE_MIN(size, (unsigned int)MLX5_VDPA_STATS_MAX);
+	for (i = 0; i < size; ++i)
+		strlcpy(stats_names[i].name, mlx5_vdpa_stats_names[i],
+			RTE_VDPA_STATS_NAME_SIZE);
+	return size;
+}
+
+static int
+mlx5_vdpa_get_stats(int did, int qid, struct rte_vdpa_stat *stats,
+		    unsigned int n)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics is not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_get(priv, qid, stats, n);
+}
+
+static int
+mlx5_vdpa_reset_stats(int did, int qid)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics is not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_reset(priv, qid);
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
 	.get_queue_num = mlx5_vdpa_get_queue_num,
 	.get_features = mlx5_vdpa_get_vdpa_features,
@@ -286,6 +366,9 @@
 	.get_vfio_group_fd = NULL,
 	.get_vfio_device_fd = mlx5_vdpa_get_device_fd,
 	.get_notify_area = mlx5_vdpa_get_notify_area,
+	.get_stats_names = mlx5_vdpa_get_stats_names,
+	.get_stats = mlx5_vdpa_get_stats,
+	.reset_stats = mlx5_vdpa_reset_stats,
 };
 
 static struct ibv_device *
@@ -489,6 +572,8 @@
 		rte_errno = ENOTSUP;
 		goto error;
 	}
+	if (!attr.vdpa.queue_counters_valid)
+		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
 	priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) +
 			   sizeof(struct mlx5_vdpa_virtq) *
 			   attr.vdpa.max_num_virtio_queues * 2,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcc216a..80b4c4b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -76,6 +76,7 @@ struct mlx5_vdpa_virtq {
 	uint16_t vq_size;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
+	struct mlx5_devx_obj *counters;
 	struct mlx5_vdpa_event_qp eqp;
 	struct {
 		struct mlx5dv_devx_umem *obj;
@@ -83,6 +84,7 @@ struct mlx5_vdpa_virtq {
 		uint32_t size;
 	} umems[3];
 	struct rte_intr_handle intr_handle;
+	struct mlx5_devx_virtio_q_couners_attr reset;
 };
 
 struct mlx5_vdpa_steer {
@@ -127,6 +129,16 @@ struct mlx5_vdpa_priv {
 	struct mlx5_vdpa_virtq virtqs[];
 };
 
+enum {
+	MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+	MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+	MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+	MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+	MLX5_VDPA_STATS_INVALID_BUFFER,
+	MLX5_VDPA_STATS_COMPLETION_ERRORS,
+	MLX5_VDPA_STATS_MAX
+};
+
 /*
  * Check whether virtq is for traffic receive.
  * According to VIRTIO_NET Spec the virtqueues index identity its type by:
@@ -352,4 +364,37 @@ int mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
  */
 int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index);
 
+/**
+ * Get virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ * @param stats
+ *   The virtq statistics array to fill.
+ * @param n
+ *   The number of elements in @p stats array.
+ *
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   @p stats array.
+ */
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n);
+
+/**
+ * Reset virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ *
+ * @return
+ *   A negative value on error, otherwise 0.
+ */
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index bd48460..d57ed59 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -72,6 +72,11 @@
 			rte_free(virtq->umems[i].buf);
 	}
 	memset(&virtq->umems, 0, sizeof(virtq->umems));
+	if (virtq->counters) {
+		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
+		virtq->counters = NULL;
+	}
+	memset(&virtq->reset, 0, sizeof(virtq->reset));
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	return 0;
@@ -205,6 +210,16 @@
 		DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no"
 			" need event QPs and event mechanism.", index);
 	}
+	if (priv->caps.queue_counters_valid) {
+		virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
+								    (priv->ctx);
+		if (!virtq->counters) {
+			DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+				" %d.", index);
+			goto error;
+		}
+		attr.counters_obj_id = virtq->counters->id;
+	}
 	/* Setup 3 UMEMs for each virtq. */
 	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
 		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
@@ -455,3 +470,78 @@
 	}
 	return 0;
 }
+
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	int ret;
+
+	if (!virtq->virtq)
+		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
+			"synchronization failed.", qid);
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
+		return ret;
+	}
+	ret = (int)RTE_MIN(n, (unsigned int)MLX5_VDPA_STATS_MAX);
+	if (ret == MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+		.value = attr.received_desc - virtq->reset.received_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+		.value = attr.completed_desc - virtq->reset.completed_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+		.value = attr.bad_desc_errors - virtq->reset.bad_desc_errors,
+	};
+	if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN)
+		return ret;
+	stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+		.value = attr.exceed_max_chain - virtq->reset.exceed_max_chain,
+	};
+	if (ret == MLX5_VDPA_STATS_INVALID_BUFFER)
+		return ret;
+	stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_INVALID_BUFFER,
+		.value = attr.invalid_buffer - virtq->reset.invalid_buffer,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETION_ERRORS,
+		.value = attr.error_cqes - virtq->reset.error_cqes,
+	};
+	return ret;
+}
+
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	int ret;
+
+	if (!virtq->virtq)
+		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
+			"synchronization failed.", qid);
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
+						    &virtq->reset);
+	if (ret)
+		DRV_LOG(ERR, "Failed to read virtq %d reset stats from HW.",
+			qid);
+	return ret;
+}
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v2 4/4] examples/vdpa: add statistics show command
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                     ` (2 preceding siblings ...)
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
@ 2020-05-05 15:54   ` Matan Azrad
  2020-05-07 11:35   ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
  5 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-05 15:54 UTC (permalink / raw)
  To: dev; +Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin

A new vDPA driver feature was added to query the virtq statistics from
the HW.

Use this feature to show the HW queues statistics for the virtqs.

Command description: stats X Y.
X is the device ID.
Y is the queue ID; use Y=0xffff to show the statistics of all the
virtio queues of device X.
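
An illustrative session (device ID 0, all queues; the prompt, counter
names, column widths and values below are only an example, the actual
names come from the vDPA driver):

vdpa> stats 0 0xffff

Device 0:
	Virtq 0:
		received_descriptors             512
		completed_descriptors            512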

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/sample_app_ug/vdpa.rst |   3 +-
 examples/vdpa/main.c              | 119 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 745f196..d66a724 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -44,7 +44,8 @@ where
   1. help: show help message
   2. list: list all available vdpa devices
   3. create: create a new vdpa port with socket file and vdpa device address
-  4. quit: unregister vhost driver and exit the application
+  4. stats: show statistics of virtio queues
+  5. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index d9a9112..cdef715 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -18,6 +18,7 @@
 #include <cmdline_parse.h>
 #include <cmdline_socket.h>
 #include <cmdline_parse_string.h>
+#include <cmdline_parse_num.h>
 #include <cmdline.h>
 
 #define MAX_PATH_LEN 128
@@ -29,6 +30,9 @@ struct vdpa_port {
 	int did;
 	int vid;
 	uint64_t flags;
+	int stats_n;
+	struct rte_vdpa_stat_name *stats_names;
+	struct rte_vdpa_stat *stats;
 };
 
 static struct vdpa_port vports[MAX_VDPA_SAMPLE_PORTS];
@@ -199,6 +203,10 @@ struct vdpa_port {
 		RTE_LOG(ERR, VDPA,
 				"Fail to unregister vhost driver for %s.\n",
 				socket_path);
+	if (vport->stats_names) {
+		rte_free(vport->stats_names);
+		vport->stats_names = NULL;
+	}
 }
 
 static void
@@ -240,6 +248,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 		"    help                                      : Show interactive instructions.\n"
 		"    list                                      : list all available vdpa devices.\n"
 		"    create <socket file> <vdev addr>          : create a new vdpa port.\n"
+		"    stats <device ID> <virtio queue ID>       : show statistics of virtio queue, 0xffff for all.\n"
 		"    quit                                      : exit vdpa sample app.\n"
 	);
 }
@@ -363,6 +372,115 @@ static void cmd_create_vdpa_port_parsed(void *parsed_result,
 	},
 };
 
+/* *** STATS *** */
+struct cmd_stats_result {
+	cmdline_fixed_string_t stats;
+	uint16_t did;
+	uint16_t qid;
+};
+
+static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
+				    __attribute__((unused)) void *data)
+{
+	struct cmd_stats_result *res = parsed_result;
+	struct rte_vdpa_device *vdev = rte_vdpa_get_device(res->did);
+	struct vdpa_port *vport = NULL;
+	uint32_t first, last;
+	int i;
+
+	if (!vdev) {
+		RTE_LOG(ERR, VDPA, "Invalid device id %" PRIu16 ".\n",
+			res->did);
+		return;
+	}
+	for (i = 0; i < RTE_MIN(MAX_VDPA_SAMPLE_PORTS, dev_total); i++) {
+		if (vports[i].did == res->did) {
+			vport = &vports[i];
+			break;
+		}
+	}
+	if (!vport) {
+		RTE_LOG(ERR, VDPA, "Device id %" PRIu16 " was not created.\n",
+			res->did);
+		return;
+	}
+	if (res->qid == 0xFFFF) {
+		first = 0;
+		last = rte_vhost_get_vring_num(vport->vid);
+		if (last == 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get num of actual virtqs"
+				" for device id %d.\n", (int)res->did);
+			return;
+		}
+		last--;
+	} else {
+		first = res->qid;
+		last = res->qid;
+	}
+	if (!vport->stats_names) {
+		vport->stats_n = rte_vdpa_get_stats_names(vport->did, NULL, 0);
+		if (vport->stats_n <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get names number of "
+				"device %d stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats_names = rte_zmalloc(NULL,
+			(sizeof(*vport->stats_names) + sizeof(*vport->stats)) *
+							vport->stats_n, 0);
+		if (!vport->stats_names) {
+			RTE_LOG(ERR, VDPA, "Failed to allocate memory for stat"
+				" names of device %d.\n", (int)res->did);
+			return;
+		}
+		i = rte_vdpa_get_stats_names(vport->did, vport->stats_names,
+						vport->stats_n);
+		if (vport->stats_n != i) {
+			RTE_LOG(ERR, VDPA, "Failed to get names of device %d "
+				"stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats = (struct rte_vdpa_stat *)
+					(vport->stats_names + vport->stats_n);
+	}
+	cmdline_printf(cl, "\nDevice %d:\n", (int)res->did);
+	for (; first <= last; first++) {
+		memset(vport->stats, 0, sizeof(*vport->stats) * vport->stats_n);
+		if (rte_vdpa_get_stats(vport->did, (int)first, vport->stats,
+					vport->stats_n) <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get vdpa queue statistics"
+				" for device id %d qid %d.\n", (int)res->did,
+				(int)first);
+			return;
+		}
+		cmdline_printf(cl, "\tVirtq %" PRIu32 ":\n", first);
+		for (i = 0; i < vport->stats_n; ++i) {
+			cmdline_printf(cl, "\t\t%-*s %-16" PRIu64 "\n",
+				RTE_VDPA_STATS_NAME_SIZE,
+				vport->stats_names[vport->stats[i].id].name,
+				vport->stats[i].value);
+		}
+	}
+}
+
+cmdline_parse_token_string_t cmd_device_stats_ =
+	TOKEN_STRING_INITIALIZER(struct cmd_stats_result, stats, "stats");
+cmdline_parse_token_num_t cmd_device_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, did, UINT32);
+cmdline_parse_token_num_t cmd_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, qid, UINT32);
+
+cmdline_parse_inst_t cmd_device_stats = {
+	.f = cmd_device_stats_parsed,
+	.data = NULL,
+	.help_str = "stats: show device statistics",
+	.tokens = {
+		(void *)&cmd_device_stats_,
+		(void *)&cmd_device_id,
+		(void *)&cmd_queue_id,
+		NULL,
+	},
+};
+
 /* *** QUIT *** */
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
@@ -392,6 +510,7 @@ static void cmd_quit_parsed(__rte_unused void *parsed_result,
 	(cmdline_parse_inst_t *)&cmd_help,
 	(cmdline_parse_inst_t *)&cmd_list_vdpa_devices,
 	(cmdline_parse_inst_t *)&cmd_create_vdpa_port,
+	(cmdline_parse_inst_t *)&cmd_device_stats,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	NULL,
 };
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                     ` (3 preceding siblings ...)
  2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-05-07 11:35   ` Matan Azrad
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
  5 siblings, 0 replies; 32+ messages in thread
From: Matan Azrad @ 2020-05-07 11:35 UTC (permalink / raw)
  To: Matan Azrad, dev; +Cc: Slava Ovsiienko, Shahaf Shuler, Maxime Coquelin

Hi Maxime
Out of the list


Any comments?


From: Matan Azrad
> The vDPA device offloads all the datapath of the vhost device to the HW
> device.
> 
> In order to expose traffic information to the user, this series introduces new
> APIs to get traffic statistics and to reset them per virtio queue.
> 
> Since there are no formal statistics defined by the virtio spec, this API
> doesn't define them and lets the drivers decide which statistics are worth
> reporting.
> 
> The statistics are taken directly from the vDPA driver managing the HW
> device.
> 
> Support for it is also added in the vdpa/mlx5 driver and in the vdpa
> example application.
> 
> V2:
> Moved to per-driver statistics according to the discussion between me,
> Maxime and Shahaf on V1.
> 
> 
> Matan Azrad (4):
>   vhost: inroduce operation to get vDPA queue stats
>   common/mlx5: support DevX virtq stats operations
>   vdpa/mlx5: support virtio queue statistics get
>   examples/vdpa: add statistics show command
> 
>  doc/guides/rel_notes/release_20_05.rst          |   5 +
>  doc/guides/sample_app_ug/vdpa.rst               |   3 +-
>  doc/guides/vdpadevs/features/default.ini        |   1 +
>  doc/guides/vdpadevs/features/mlx5.ini           |   1 +
>  doc/guides/vdpadevs/features_overview.rst       |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
>  drivers/common/mlx5/mlx5_devx_cmds.h            |  43 +++++++++
>  drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
>  drivers/common/mlx5/rte_common_mlx5_version.map |   7 ++
>  drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  90 ++++++++++++++++++
>  examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
>  lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
>  lib/librte_vhost/rte_vhost_version.map          |   3 +
>  lib/librte_vhost/vdpa.c                         |  47 ++++++++++
>  16 files changed, 663 insertions(+), 3 deletions(-)
> 
> --
> 1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics
  2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                     ` (4 preceding siblings ...)
  2020-05-07 11:35   ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
@ 2020-06-02 15:47   ` Matan Azrad
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 1/4] vhost: inroduce operation to get vDPA queue stats Matan Azrad
                       ` (4 more replies)
  5 siblings, 5 replies; 32+ messages in thread
From: Matan Azrad @ 2020-06-02 15:47 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Maxime Coquelin; +Cc: dev, Shahaf Shuler

The vDPA device offloads all the datapath of the vhost device to the HW device.

In order to expose traffic information to the user, this series introduces new APIs to get traffic statistics and to reset them per virtio queue.

Since there are no formal statistics defined by the virtio spec, this API doesn't define them and lets the drivers decide which statistics are worth reporting.

The statistics are taken directly from the vDPA driver managing the HW device.

Support for it is also added in the vdpa/mlx5 driver and in the vdpa example application.


V2:
Moved to per-driver statistics according to the discussion between me, Maxime and Shahaf on V1.

v3:
Send again for release 20.08.

Matan Azrad (4):
  vhost: inroduce operation to get vDPA queue stats
  common/mlx5: support DevX virtq stats operations
  vdpa/mlx5: support virtio queue statistics get
  examples/vdpa: add statistics show command

 doc/guides/rel_notes/release_20_08.rst          |  13 +++
 doc/guides/sample_app_ug/vdpa.rst               |   3 +-
 doc/guides/vdpadevs/features/default.ini        |   1 +
 doc/guides/vdpadevs/features/mlx5.ini           |   1 +
 doc/guides/vdpadevs/features_overview.rst       |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            |  38 ++++++++
 drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
 drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  90 ++++++++++++++++++
 examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
 lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map          |   3 +
 lib/librte_vhost/vdpa.c                         |  47 ++++++++++
 16 files changed, 661 insertions(+), 3 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [dpdk-dev] [PATCH v3 1/4] vhost: inroduce operation to get vDPA queue stats
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
@ 2020-06-02 15:47     ` Matan Azrad
  2020-06-03  8:58       ` Maxime Coquelin
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
                       ` (3 subsequent siblings)
  4 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-06-02 15:47 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Maxime Coquelin; +Cc: dev, Shahaf Shuler

The vDPA device offloads all the datapath of the vhost device to the HW
device.

In order to expose traffic information to the user, this patch
introduces 3 new APIs to get traffic statistics, to get the device
statistics names and to reset the statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW
device and can be different for each vendor driver.

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/rel_notes/release_20_08.rst    |   5 ++
 doc/guides/vdpadevs/features/default.ini  |   1 +
 doc/guides/vdpadevs/features_overview.rst |   3 +
 lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map    |   3 +
 lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
 6 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064af..d6f09bd 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -56,6 +56,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+   * **Added vDPA device APIs to query virtio queue statistics.**
+
+     Three new APIs have been added to query virtio queue statistics, to get
+     their names and to reset them for a vDPA device.
+
 
 Removed Items
 -------------
diff --git a/doc/guides/vdpadevs/features/default.ini b/doc/guides/vdpadevs/features/default.ini
index 518e4f1..2c122a3 100644
--- a/doc/guides/vdpadevs/features/default.ini
+++ b/doc/guides/vdpadevs/features/default.ini
@@ -37,6 +37,7 @@ proto rarp           =
 proto reply ack      =
 proto host notifier  =
 proto pagefault      =
+queue statistics     =
 BSD nic_uio          =
 Linux VFIO           =
 Other kdrv           =
diff --git a/doc/guides/vdpadevs/features_overview.rst b/doc/guides/vdpadevs/features_overview.rst
index eb7eb3b..930bc87 100644
--- a/doc/guides/vdpadevs/features_overview.rst
+++ b/doc/guides/vdpadevs/features_overview.rst
@@ -96,6 +96,9 @@ proto host notifier
 proto pagefault
   Slave expose page-fault FD for migration process.
 
+queue statistics
+  Support virtio queue statistics query.
+
 BSD nic_uio
   BSD ``nic_uio`` module supported.
 
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index 3c400ee..ecb3d91 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -37,6 +37,34 @@ struct rte_vdpa_dev_addr {
 	};
 };
 
+/** Maximum name length for statistics counters */
+#define RTE_VDPA_STATS_NAME_SIZE 64
+
+/**
+ * A vDPA device statistic structure
+ *
+ * This structure is used by rte_vdpa_stats_get() to provide
+ * statistics from the HW vDPA device.
+ *
+ * It maps a name id, corresponding to an index in the array returned
+ * by rte_vdpa_get_stats_names, to a statistic value.
+ */
+struct rte_vdpa_stat {
+	uint64_t id;        /**< The index in stats name array */
+	uint64_t value;     /**< The statistic counter value */
+};
+
+/**
+ * A name element for statistics
+ *
+ * An array of this structure is returned by rte_vdpa_get_stats_names
+ * It lists the names of extended statistics for a PMD. The rte_vdpa_stat
+ * structure references these names by their array index
+ */
+struct rte_vdpa_stat_name {
+	char name[RTE_VDPA_STATS_NAME_SIZE]; /**< The statistic name */
+};
+
 /**
  * vdpa device operations
  */
@@ -73,8 +101,19 @@ struct rte_vdpa_dev_ops {
 	int (*get_notify_area)(int vid, int qid,
 			uint64_t *offset, uint64_t *size);
 
+	/** Get statistics name */
+	int (*get_stats_names)(int did, struct rte_vdpa_stat_name *stats_names,
+			       unsigned int size);
+
+	/** Get statistics of the queue */
+	int (*get_stats)(int did, int qid, struct rte_vdpa_stat *stats,
+			 unsigned int n);
+
+	/** Reset statistics of the queue */
+	int (*reset_stats)(int did, int qid);
+
 	/** Reserved for future extension */
-	void *reserved[5];
+	void *reserved[2];
 };
 
 /**
@@ -200,4 +239,78 @@ struct rte_vdpa_device *
 __rte_experimental
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve names of statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param stats_names
+ *   array of at least size elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in stats_names array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats name array.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @param stats
+ *   A pointer to a table of structure of type rte_vdpa_stat to be filled with
+ *   device statistics ids and values.
+ * @param n
+ *   The number of elements in stats array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats table.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics of a vDPA device.
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @return
+ *   0 on success, a negative value on error.
+ */
+__rte_experimental
+int
+rte_vdpa_reset_stats(int did, uint16_t qid);
 #endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 051d08c..3a2f7df 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -38,6 +38,9 @@ EXPERIMENTAL {
 	rte_vdpa_find_device_id;
 	rte_vdpa_get_device;
 	rte_vdpa_get_device_num;
+	rte_vdpa_get_stats;
+	rte_vdpa_get_stats_names;
+	rte_vdpa_reset_stats;
 	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_get_vdpa_device_id;
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
index b2b2a10..1f5cdcd 100644
--- a/lib/librte_vhost/vdpa.c
+++ b/lib/librte_vhost/vdpa.c
@@ -227,3 +227,50 @@ struct rte_vdpa_device *
 		free_ind_table(idesc);
 	return -1;
 }
+
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats_names, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats_names(did, stats_names, size);
+}
+
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	if (!stats || !n)
+		return -EINVAL;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats(did, qid, stats, n);
+}
+
+int
+rte_vdpa_reset_stats(int did, uint16_t qid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->reset_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->reset_stats(did, qid);
+}
-- 
1.8.3.1
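
For illustration, a minimal sketch of how an application could consume this
API (a hypothetical usage example, not part of the patch; it assumes a valid
device id 'did', e.g. from rte_vdpa_find_device_id(), a queue id 'qid', and
<rte_vdpa.h>, <stdio.h>, <stdlib.h> and <inttypes.h> included; error checks
are elided):

	int i, n = rte_vdpa_get_stats_names(did, NULL, 0); /* NULL: get count */
	struct rte_vdpa_stat_name *names = malloc(n * sizeof(*names));
	struct rte_vdpa_stat *stats = malloc(n * sizeof(*stats));

	rte_vdpa_get_stats_names(did, names, n);
	rte_vdpa_get_stats(did, qid, stats, n);
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[stats[i].id].name, stats[i].value);
	rte_vdpa_reset_stats(did, qid);
	free(stats);
	free(names);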



* [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
@ 2020-06-02 15:47     ` Matan Azrad
  2020-06-18 10:58       ` Maxime Coquelin
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
                       ` (2 subsequent siblings)
  4 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-06-02 15:47 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Maxime Coquelin; +Cc: dev, Shahaf Shuler

Add DevX API to create and query virtio queue statistics from the HW.
The following counters are supported by the HW per virtio queue:
	received_desc.
	completed_desc.
	error_cqes.
	bad_desc_errors.
	exceed_max_chain.
	invalid_buffer.
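
As a rough sketch of how these commands are meant to be used together (a
hypothetical example, assuming a valid DevX context 'ctx'; error handling
elided):

	struct mlx5_devx_virtio_q_couners_attr attr = {0};
	struct mlx5_devx_obj *counters;

	counters = mlx5_devx_cmd_create_virtio_q_counters(ctx);
	if (counters != NULL) {
		if (mlx5_devx_cmd_query_virtio_q_counters(counters, &attr) == 0)
			printf("received_desc: %" PRIu64 "\n",
			       attr.received_desc);
		claim_zero(mlx5_devx_cmd_destroy(counters));
	}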

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            | 38 +++++++++++++
 drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |  2 +
 4 files changed, 138 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index fba485e..4bf22ce 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -464,6 +464,9 @@ struct mlx5_devx_obj *
 	attr->vdpa.valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
 					 general_obj_types) &
 			      MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q);
+	attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+							general_obj_types) &
+				  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
 	if (attr->qos.sup) {
 		MLX5_SET(query_hca_cap_in, in, op_mod,
 			 MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1258,6 +1261,7 @@ struct mlx5_devx_obj *
 	MLX5_SET(virtio_q, virtctx, umem_3_id, attr->umems[2].id);
 	MLX5_SET(virtio_q, virtctx, umem_3_size, attr->umems[2].size);
 	MLX5_SET64(virtio_q, virtctx, umem_3_offset, attr->umems[2].offset);
+	MLX5_SET(virtio_q, virtctx, counter_set_id, attr->counters_obj_id);
 	MLX5_SET(virtio_net_q, virtq, tisn_or_qpn, attr->tis_id);
 	virtq_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
 						    sizeof(out));
@@ -1532,3 +1536,72 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
+						       sizeof(*couners_obj), 0);
+	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
+
+	if (!couners_obj) {
+		DRV_LOG(ERR, "Failed to allocate virtio queue counters data.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+						      sizeof(out));
+	if (!couners_obj->obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create virtio queue counters object"
+			" using DevX.");
+		rte_free(couners_obj);
+		return NULL;
+	}
+	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+	return couners_obj;
+}
+
+int
+mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				   struct mlx5_devx_virtio_q_couners_attr *attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0};
+	void *hdr = MLX5_ADDR_OF(query_virtio_q_counters_out, in, hdr);
+	void *virtio_q_counters = MLX5_ADDR_OF(query_virtio_q_counters_out, out,
+					       virtio_q_counters);
+	int ret;
+
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id);
+	ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out,
+					sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to query virtio q counters using DevX.");
+		rte_errno = errno;
+		return -errno;
+	}
+	attr->received_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					 received_desc);
+	attr->completed_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					  completed_desc);
+	attr->error_cqes = MLX5_GET(virtio_q_counters, virtio_q_counters,
+				    error_cqes);
+	attr->bad_desc_errors = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					 bad_desc_errors);
+	attr->exceed_max_chain = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					  exceed_max_chain);
+	attr->invalid_buffer = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					invalid_buffer);
+	return ret;
+}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 49b174a..59a70a0 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -64,6 +64,7 @@ struct mlx5_hca_vdpa_attr {
 	uint32_t event_mode:3;
 	uint32_t log_doorbell_stride:5;
 	uint32_t log_doorbell_bar_size:5;
+	uint32_t queue_counters_valid:1;
 	uint32_t max_num_virtio_queues;
 	struct {
 		uint32_t a;
@@ -272,6 +273,7 @@ struct mlx5_devx_virtq_attr {
 	uint32_t qp_id;
 	uint32_t queue_index;
 	uint32_t tis_id;
+	uint32_t counters_obj_id;
 	uint64_t dirty_bitmap_addr;
 	uint64_t type;
 	uint64_t desc_addr;
@@ -300,6 +302,15 @@ struct mlx5_devx_qp_attr {
 	uint64_t wq_umem_offset;
 };
 
+struct mlx5_devx_virtio_q_couners_attr {
+	uint64_t received_desc;
+	uint64_t completed_desc;
+	uint32_t error_cqes;
+	uint32_t bad_desc_errors;
+	uint32_t exceed_max_chain;
+	uint32_t invalid_buffer;
+};
+
 /* mlx5_devx_cmds.c */
 
 __rte_internal
@@ -374,4 +385,31 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
 int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			     struct mlx5_devx_rqt_attr *rqt_attr);
 
+/**
+ * Create virtio queue counters object DevX API.
+ *
+ * @param[in] ctx
+ *   Device context.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
+
+/**
+ * Query virtio queue counters object using DevX API.
+ *
+ * @param[in] couners_obj
+ *   Pointer to virtq object structure.
+ * @param[in, out] attr
+ *   Pointer to virtio queue counters attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_internal
+int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				  struct mlx5_devx_virtio_q_couners_attr *attr);
+
 #endif /* RTE_PMD_MLX5_DEVX_CMDS_H_ */
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e4ef2ac..5fc10d6 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -949,6 +949,7 @@ enum {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
+	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
 };
 
 enum {
@@ -2006,6 +2007,7 @@ struct mlx5_ifc_create_cq_in_bits {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
+	MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 };
 
 struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2024,6 +2026,27 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits {
 	u8 reserved_at_60[0x20];
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_40[0x40];
+	u8 received_desc[0x40];
+	u8 completed_desc[0x40];
+	u8 error_cqes[0x20];
+	u8 bad_desc_errors[0x20];
+	u8 exceed_max_chain[0x20];
+	u8 invalid_buffer[0x20];
+	u8 reserved_at_180[0x50];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
 enum {
 	MLX5_VIRTQ_STATE_INIT = 0,
 	MLX5_VIRTQ_STATE_RDY = 1,
@@ -2064,7 +2087,8 @@ struct mlx5_ifc_virtio_q_bits {
 	u8 umem_3_id[0x20];
 	u8 umem_3_size[0x20];
 	u8 umem_3_offset[0x40];
-	u8 reserved_at_300[0x100];
+	u8 counter_set_id[0x20];
+	u8 reserved_at_320[0xe0];
 };
 
 struct mlx5_ifc_virtio_net_q_bits {
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index 350e771..b3410df 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -15,6 +15,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_tir;
 	mlx5_devx_cmd_create_td;
 	mlx5_devx_cmd_create_tis;
+	mlx5_devx_cmd_create_virtio_q_counters;
 	mlx5_devx_cmd_create_virtq;
 	mlx5_devx_cmd_destroy;
 	mlx5_devx_cmd_flow_counter_alloc;
@@ -28,6 +29,7 @@ INTERNAL {
 	mlx5_devx_cmd_modify_virtq;
 	mlx5_devx_cmd_qp_query_tis_td;
 	mlx5_devx_cmd_query_hca_attr;
+	mlx5_devx_cmd_query_virtio_q_counters;
 	mlx5_devx_cmd_query_virtq;
 	mlx5_devx_get_out_command_status;
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-06-02 15:47     ` Matan Azrad
  2020-06-18 11:05       ` Maxime Coquelin
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command Matan Azrad
  2020-06-18 16:29     ` [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
  4 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-06-02 15:47 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Maxime Coquelin; +Cc: dev, Shahaf Shuler

Add support for statistics operations.

A DevX counter object is allocated per virtq in order to manage the
virtq statistics.

The counter object is allocated before the virtq is created and destroyed
after the virtq is destroyed, so the statistics are valid only within the
lifetime of the virtq.
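
In simplified form, the per-virtq lifecycle implemented below (identifiers
taken from this patch; surrounding setup code elided):

	/* On virtq setup: */
	virtq->counters = mlx5_devx_cmd_create_virtio_q_counters(priv->ctx);
	attr.counters_obj_id = virtq->counters->id; /* bind counters to virtq */
	/* ... create the virtq using 'attr' ... */

	/* On virtq release: */
	claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
	virtq->counters = NULL;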

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/rel_notes/release_20_08.rst |  8 +++
 doc/guides/vdpadevs/features/mlx5.ini  |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa.c          | 85 ++++++++++++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h          | 45 +++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c    | 90 ++++++++++++++++++++++++++++++++++
 5 files changed, 229 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index d6f09bd..25a1563 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -62,6 +62,14 @@ New Features
      names and to reset them by a vDPA device.
 
 
+   * **Updated Mellanox mlx5 vDPA driver.**
+
+     Updated Mellanox mlx5 vDPA driver with new features, including:
+
+     * Added support for virtio queue statistics.
+
+
+
 Removed Items
 -------------
 
diff --git a/doc/guides/vdpadevs/features/mlx5.ini b/doc/guides/vdpadevs/features/mlx5.ini
index 1da9c1b..788d4e0 100644
--- a/doc/guides/vdpadevs/features/mlx5.ini
+++ b/doc/guides/vdpadevs/features/mlx5.ini
@@ -17,6 +17,7 @@ packed               = Y
 proto mq             = Y
 proto log shmfd      = Y
 proto host notifier  = Y
+queue statistics     = Y
 Other kdrv           = Y
 ARMv8                = Y
 Power8               = Y
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 1113d6c..a80e3f4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -8,6 +8,7 @@
 #include <rte_errno.h>
 #include <rte_bus_pci.h>
 #include <rte_pci.h>
+#include <rte_string_fns.h>
 
 #include <mlx5_glue.h>
 #include <mlx5_common.h>
@@ -274,6 +275,85 @@
 	return 0;
 }
 
+static int
+mlx5_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			  unsigned int size)
+{
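+	/* The order here must match the MLX5_VDPA_STATS_* enum in
+	 * mlx5_vdpa.h: rte_vdpa_stat entries reference these names by
+	 * array index.
+	 */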
+	static const char *mlx5_vdpa_stats_names[MLX5_VDPA_STATS_MAX] = {
+		"received_descriptors",
+		"completed_descriptors",
+		"bad descriptor errors",
+		"exceed max chain",
+		"invalid buffer",
+		"completion errors",
+	};
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+	unsigned int i;
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!stats_names)
+		return MLX5_VDPA_STATS_MAX;
+	size = RTE_MIN(size, (unsigned int)MLX5_VDPA_STATS_MAX);
+	for (i = 0; i < size; ++i)
+		strlcpy(stats_names[i].name, mlx5_vdpa_stats_names[i],
+			RTE_VDPA_STATS_NAME_SIZE);
+	return size;
+}
+
+static int
+mlx5_vdpa_get_stats(int did, int qid, struct rte_vdpa_stat *stats,
+		    unsigned int n)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics are not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_get(priv, qid, stats, n);
+}
+
+static int
+mlx5_vdpa_reset_stats(int did, int qid)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics are not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_reset(priv, qid);
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
 	.get_queue_num = mlx5_vdpa_get_queue_num,
 	.get_features = mlx5_vdpa_get_vdpa_features,
@@ -286,6 +366,9 @@
 	.get_vfio_group_fd = NULL,
 	.get_vfio_device_fd = mlx5_vdpa_get_device_fd,
 	.get_notify_area = mlx5_vdpa_get_notify_area,
+	.get_stats_names = mlx5_vdpa_get_stats_names,
+	.get_stats = mlx5_vdpa_get_stats,
+	.reset_stats = mlx5_vdpa_reset_stats,
 };
 
 static struct ibv_device *
@@ -489,6 +572,8 @@
 		rte_errno = ENOTSUP;
 		goto error;
 	}
+	if (!attr.vdpa.queue_counters_valid)
+		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
 	priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) +
 			   sizeof(struct mlx5_vdpa_virtq) *
 			   attr.vdpa.max_num_virtio_queues * 2,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcc216a..80b4c4b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -76,6 +76,7 @@ struct mlx5_vdpa_virtq {
 	uint16_t vq_size;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
+	struct mlx5_devx_obj *counters;
 	struct mlx5_vdpa_event_qp eqp;
 	struct {
 		struct mlx5dv_devx_umem *obj;
@@ -83,6 +84,7 @@ struct mlx5_vdpa_virtq {
 		uint32_t size;
 	} umems[3];
 	struct rte_intr_handle intr_handle;
+	struct mlx5_devx_virtio_q_couners_attr reset;
 };
 
 struct mlx5_vdpa_steer {
@@ -127,6 +129,16 @@ struct mlx5_vdpa_priv {
 	struct mlx5_vdpa_virtq virtqs[];
 };
 
+enum {
+	MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+	MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+	MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+	MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+	MLX5_VDPA_STATS_INVALID_BUFFER,
+	MLX5_VDPA_STATS_COMPLETION_ERRORS,
+	MLX5_VDPA_STATS_MAX
+};
+
 /*
  * Check whether virtq is for traffic receive.
  * According to the VIRTIO_NET Spec, the virtqueue index identifies its type by:
@@ -352,4 +364,37 @@ int mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
  */
 int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index);
 
+/**
+ * Get virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ * @param stats
+ *   The virtq statistics array to fill.
+ * @param n
+ *   The number of elements in @p stats array.
+ *
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   @p stats array.
+ */
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n);
+
+/**
+ * Reset virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ *
+ * @return
+ *   A negative value on error, otherwise 0.
+ */
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index bd48460..d57ed59 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -72,6 +72,11 @@
 			rte_free(virtq->umems[i].buf);
 	}
 	memset(&virtq->umems, 0, sizeof(virtq->umems));
+	if (virtq->counters) {
+		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
+		virtq->counters = NULL;
+	}
+	memset(&virtq->reset, 0, sizeof(virtq->reset));
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	return 0;
@@ -205,6 +210,16 @@
 		DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no"
 			" need event QPs and event mechanism.", index);
 	}
+	if (priv->caps.queue_counters_valid) {
+		virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
+								    (priv->ctx);
+		if (!virtq->counters) {
+			DRV_LOG(ERR, "Failed to create virtq counters for virtq"
+				" %d.", index);
+			goto error;
+		}
+		attr.counters_obj_id = virtq->counters->id;
+	}
 	/* Setup 3 UMEMs for each virtq. */
 	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
 		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
@@ -455,3 +470,78 @@
 	}
 	return 0;
 }
+
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	int ret;
+
+	if (!virtq->virtq) {
+		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
+			"was not created.", qid);
+		return -EINVAL;
+	}
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
+		return ret;
+	}
+	ret = (int)RTE_MIN(n, (unsigned int)MLX5_VDPA_STATS_MAX);
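+	/* Stat indexes are ordered 0..MLX5_VDPA_STATS_MAX-1, so once the
+	 * capped entry count 'ret' equals a given stat index, that stat and
+	 * all the following ones do not fit in the caller's array.
+	 */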
+	if (ret == MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+		.value = attr.received_desc - virtq->reset.received_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+		.value = attr.completed_desc - virtq->reset.completed_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+		.value = attr.bad_desc_errors - virtq->reset.bad_desc_errors,
+	};
+	if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN)
+		return ret;
+	stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+		.value = attr.exceed_max_chain - virtq->reset.exceed_max_chain,
+	};
+	if (ret == MLX5_VDPA_STATS_INVALID_BUFFER)
+		return ret;
+	stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_INVALID_BUFFER,
+		.value = attr.invalid_buffer - virtq->reset.invalid_buffer,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETION_ERRORS,
+		.value = attr.error_cqes - virtq->reset.error_cqes,
+	};
+	return ret;
+}
+
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	int ret;
+
+	if (!virtq->virtq) {
+		DRV_LOG(ERR, "Failed to reset virtq %d statistics - virtq "
+			"was not created.", qid);
+		return -EINVAL;
+	}
+	MLX5_ASSERT(virtq->counters);
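+	/* SW reset: snapshot the current HW counter values into virtq->reset;
+	 * mlx5_vdpa_virtq_stats_get() subtracts this baseline from each
+	 * subsequent HW query result.
+	 */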
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
+						    &virtq->reset);
+	if (ret)
+		DRV_LOG(ERR, "Failed to read virtq %d reset stats from HW.",
+			qid);
+	return ret;
+}
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
                       ` (2 preceding siblings ...)
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
@ 2020-06-02 15:47     ` Matan Azrad
  2020-06-18 12:13       ` Maxime Coquelin
  2020-06-18 16:29     ` [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
  4 siblings, 1 reply; 32+ messages in thread
From: Matan Azrad @ 2020-06-02 15:47 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Maxime Coquelin; +Cc: dev, Shahaf Shuler

A new vDPA driver feature was added to query the virtq statistics from
the HW.

Use this feature to show the HW queues statistics for the virtqs.

Command description: stats X Y.
X is the device ID.
Y is the queue ID; use Y=0xffff to show the statistics of all the virtio
queues of device X.
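
For example, assuming a vdpa port with device ID 0 was already created, the
following interactive command shows the statistics of all its virtio queues:

	stats 0 0xffff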

Signed-off-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/sample_app_ug/vdpa.rst |   3 +-
 examples/vdpa/main.c              | 119 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 745f196..d66a724 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -44,7 +44,8 @@ where
   1. help: show help message
   2. list: list all available vdpa devices
   3. create: create a new vdpa port with socket file and vdpa device address
-  4. quit: unregister vhost driver and exit the application
+  4. stats: show statistics of virtio queues
+  5. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index d9a9112..cdef715 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -18,6 +18,7 @@
 #include <cmdline_parse.h>
 #include <cmdline_socket.h>
 #include <cmdline_parse_string.h>
+#include <cmdline_parse_num.h>
 #include <cmdline.h>
 
 #define MAX_PATH_LEN 128
@@ -29,6 +30,9 @@ struct vdpa_port {
 	int did;
 	int vid;
 	uint64_t flags;
+	int stats_n;
+	struct rte_vdpa_stat_name *stats_names;
+	struct rte_vdpa_stat *stats;
 };
 
 static struct vdpa_port vports[MAX_VDPA_SAMPLE_PORTS];
@@ -199,6 +203,10 @@ struct vdpa_port {
 		RTE_LOG(ERR, VDPA,
 				"Fail to unregister vhost driver for %s.\n",
 				socket_path);
+	if (vport->stats_names) {
+		rte_free(vport->stats_names);
+		vport->stats_names = NULL;
+	}
 }
 
 static void
@@ -240,6 +248,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 		"    help                                      : Show interactive instructions.\n"
 		"    list                                      : list all available vdpa devices.\n"
 		"    create <socket file> <vdev addr>          : create a new vdpa port.\n"
+		"    stats <device ID> <virtio queue ID>       : show statistics of virtio queue, 0xffff for all.\n"
 		"    quit                                      : exit vdpa sample app.\n"
 	);
 }
@@ -363,6 +372,115 @@ static void cmd_create_vdpa_port_parsed(void *parsed_result,
 	},
 };
 
+/* *** STATS *** */
+struct cmd_stats_result {
+	cmdline_fixed_string_t stats;
+	uint16_t did;
+	uint16_t qid;
+};
+
+static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
+				    __rte_unused void *data)
+{
+	struct cmd_stats_result *res = parsed_result;
+	struct rte_vdpa_device *vdev = rte_vdpa_get_device(res->did);
+	struct vdpa_port *vport = NULL;
+	uint32_t first, last;
+	int i;
+
+	if (!vdev) {
+		RTE_LOG(ERR, VDPA, "Invalid device id %" PRIu16 ".\n",
+			res->did);
+		return;
+	}
+	for (i = 0; i < RTE_MIN(MAX_VDPA_SAMPLE_PORTS, dev_total); i++) {
+		if (vports[i].did == res->did) {
+			vport = &vports[i];
+			break;
+		}
+	}
+	if (!vport) {
+		RTE_LOG(ERR, VDPA, "Device id %" PRIu16 " was not created.\n",
+			res->did);
+		return;
+	}
+	if (res->qid == 0xFFFF) {
+		first = 0;
+		last = rte_vhost_get_vring_num(vport->vid);
+		if (last == 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get the number of virtqs"
+				" for device id %d.\n", (int)res->did);
+			return;
+		}
+		last--;
+	} else {
+		first = res->qid;
+		last = res->qid;
+	}
+	if (!vport->stats_names) {
+		vport->stats_n = rte_vdpa_get_stats_names(vport->did, NULL, 0);
+		if (vport->stats_n <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get the number of stats "
+				"names of device %d.\n", (int)res->did);
+			return;
+		}
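+		/* A single allocation holds both arrays: the stats_names
+		 * entries first, with the stats entries placed right behind
+		 * them (see the pointer arithmetic below).
+		 */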
+		vport->stats_names = rte_zmalloc(NULL,
+			(sizeof(*vport->stats_names) + sizeof(*vport->stats)) *
+							vport->stats_n, 0);
+		if (!vport->stats_names) {
+			RTE_LOG(ERR, VDPA, "Failed to allocate memory for stat"
+				" names of device %d.\n", (int)res->did);
+			return;
+		}
+		i = rte_vdpa_get_stats_names(vport->did, vport->stats_names,
+						vport->stats_n);
+		if (vport->stats_n != i) {
+			RTE_LOG(ERR, VDPA, "Failed to get names of device %d "
+				"stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats = (struct rte_vdpa_stat *)
+					(vport->stats_names + vport->stats_n);
+	}
+	cmdline_printf(cl, "\nDevice %d:\n", (int)res->did);
+	for (; first <= last; first++) {
+		memset(vport->stats, 0, sizeof(*vport->stats) * vport->stats_n);
+		if (rte_vdpa_get_stats(vport->did, (int)first, vport->stats,
+					vport->stats_n) <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get vdpa queue statistics"
+				" for device id %d qid %d.\n", (int)res->did,
+				(int)first);
+			return;
+		}
+		cmdline_printf(cl, "\tVirtq %" PRIu32 ":\n", first);
+		for (i = 0; i < vport->stats_n; ++i) {
+			cmdline_printf(cl, "\t\t%-*s %-16" PRIu64 "\n",
+				RTE_VDPA_STATS_NAME_SIZE,
+				vport->stats_names[vport->stats[i].id].name,
+				vport->stats[i].value);
+		}
+	}
+}
+
+cmdline_parse_token_string_t cmd_device_stats_ =
+	TOKEN_STRING_INITIALIZER(struct cmd_stats_result, stats, "stats");
+cmdline_parse_token_num_t cmd_device_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, did, UINT16);
+cmdline_parse_token_num_t cmd_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, qid, UINT16);
+
+cmdline_parse_inst_t cmd_device_stats = {
+	.f = cmd_device_stats_parsed,
+	.data = NULL,
+	.help_str = "stats: show device statistics",
+	.tokens = {
+		(void *)&cmd_device_stats_,
+		(void *)&cmd_device_id,
+		(void *)&cmd_queue_id,
+		NULL,
+	},
+};
+
 /* *** QUIT *** */
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
@@ -392,6 +510,7 @@ static void cmd_quit_parsed(__rte_unused void *parsed_result,
 	(cmdline_parse_inst_t *)&cmd_help,
 	(cmdline_parse_inst_t *)&cmd_list_vdpa_devices,
 	(cmdline_parse_inst_t *)&cmd_create_vdpa_port,
+	(cmdline_parse_inst_t *)&cmd_device_stats,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	NULL,
 };
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
@ 2020-06-03  8:58       ` Maxime Coquelin
  2020-06-04 10:36         ` Wang, Xiao W
  0 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-03  8:58 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler, Xiao Wang

Hi Matan,

On 6/2/20 5:47 PM, Matan Azrad wrote:
> The vDPA device offloads all the datapath of the vhost device to the HW
> device.
> 
> In order to expose to the user traffic information this patch
> introduces new 3 APIs to get traffic statistics, the device statistics
> name and to reset the statistics per virtio queue.
> 
> The statistics are taken directly from the vDPA driver managing the HW
> device and can be different for each vendor driver.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> ---
>  doc/guides/rel_notes/release_20_08.rst    |   5 ++
>  doc/guides/vdpadevs/features/default.ini  |   1 +
>  doc/guides/vdpadevs/features_overview.rst |   3 +
>  lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
>  lib/librte_vhost/rte_vhost_version.map    |   3 +
>  lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
>  6 files changed, 173 insertions(+), 1 deletion(-)
> 
> diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
> index 39064af..d6f09bd 100644
> --- a/doc/guides/rel_notes/release_20_08.rst
> +++ b/doc/guides/rel_notes/release_20_08.rst
> @@ -56,6 +56,11 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =========================================================
>  
> +   * **Added vDPA device APIs to query virtio queue statistics.**
> +
> +     A new 3 APIs has been added to query virtio queue statistics, to get their

s/A new 3 APIs/3 new APIs/

Other than this minor typo, it looks all good to me.

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

I would like Intel feedback on this new API. If no feedback by June
12th, I'll merge it as-is (no need to resend for the typo, I'll fixup
while applying).


Xiao, does the API sound right to you?
Do you have Virtio counters in your IFC device that you could expose?

Thanks in advance,
Maxime



* Re: [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats
  2020-06-03  8:58       ` Maxime Coquelin
@ 2020-06-04 10:36         ` Wang, Xiao W
  2020-06-09  9:18           ` Maxime Coquelin
  0 siblings, 1 reply; 32+ messages in thread
From: Wang, Xiao W @ 2020-06-04 10:36 UTC (permalink / raw)
  To: Maxime Coquelin, Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler

Hi,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 3, 2020 4:58 PM
> To: Matan Azrad <matan@mellanox.com>; Viacheslav Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Shahaf Shuler <shahafs@mellanox.com>; Wang, Xiao W
> <xiao.w.wang@intel.com>
> Subject: Re: [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats
> 
> Hi Matan,
> 
> On 6/2/20 5:47 PM, Matan Azrad wrote:
> > The vDPA device offloads all the datapath of the vhost device to the HW
> > device.
> >
> > In order to expose to the user traffic information this patch
> > introduces new 3 APIs to get traffic statistics, the device statistics
> > name and to reset the statistics per virtio queue.
> >
> > The statistics are taken directly from the vDPA driver managing the HW
> > device and can be different for each vendor driver.
> >
> > Signed-off-by: Matan Azrad <matan@mellanox.com>
> > ---
> >  doc/guides/rel_notes/release_20_08.rst    |   5 ++
> >  doc/guides/vdpadevs/features/default.ini  |   1 +
> >  doc/guides/vdpadevs/features_overview.rst |   3 +
> >  lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
> >  lib/librte_vhost/rte_vhost_version.map    |   3 +
> >  lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
> >  6 files changed, 173 insertions(+), 1 deletion(-)
> >
> > diff --git a/doc/guides/rel_notes/release_20_08.rst
> b/doc/guides/rel_notes/release_20_08.rst
> > index 39064af..d6f09bd 100644
> > --- a/doc/guides/rel_notes/release_20_08.rst
> > +++ b/doc/guides/rel_notes/release_20_08.rst
> > @@ -56,6 +56,11 @@ New Features
> >       Also, make sure to start the actual text at the margin.
> >       =========================================================
> >
> > +   * **Added vDPA device APIs to query virtio queue statistics.**
> > +
> > +     A new 3 APIs has been added to query virtio queue statistics, to get their
> 
> s/A new 3 APIs/3 new APIs/
> 
> Other than this minor typo, it looks all good to me.
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> I would like Intel feedback on this new API. If no feedback by June
> 12th, I'll merge it as-is (no need to resend for the typo, I'll fixup
> while applying).
> 
> 
> Xiao, does the API sound right to you?
> Do you have Virtio counters in your IFC device that you could expose?
> 
> Thanks in advance,
> Maxime

This stats API looks flexible. IFC virtio stats are not queried from the virtio VF, so I think
IFC won't expose stats via this API for now.

Acked-by: Xiao Wang <xiao.w.wang@intel.com>

BRs,
Xiao



* Re: [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats
  2020-06-04 10:36         ` Wang, Xiao W
@ 2020-06-09  9:18           ` Maxime Coquelin
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-09  9:18 UTC (permalink / raw)
  To: Wang, Xiao W, Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler



On 6/4/20 12:36 PM, Wang, Xiao W wrote:
> Hi,
> 
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, June 3, 2020 4:58 PM
>> To: Matan Azrad <matan@mellanox.com>; Viacheslav Ovsiienko
>> <viacheslavo@mellanox.com>
>> Cc: dev@dpdk.org; Shahaf Shuler <shahafs@mellanox.com>; Wang, Xiao W
>> <xiao.w.wang@intel.com>
>> Subject: Re: [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats
>>
>> Hi Matan,
>>
>> On 6/2/20 5:47 PM, Matan Azrad wrote:
>>> The vDPA device offloads all the datapath of the vhost device to the HW
>>> device.
>>>
>>> In order to expose to the user traffic information this patch
>>> introduces new 3 APIs to get traffic statistics, the device statistics
>>> name and to reset the statistics per virtio queue.
>>>
>>> The statistics are taken directly from the vDPA driver managing the HW
>>> device and can be different for each vendor driver.
>>>
>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>>> ---
>>>  doc/guides/rel_notes/release_20_08.rst    |   5 ++
>>>  doc/guides/vdpadevs/features/default.ini  |   1 +
>>>  doc/guides/vdpadevs/features_overview.rst |   3 +
>>>  lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
>>>  lib/librte_vhost/rte_vhost_version.map    |   3 +
>>>  lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
>>>  6 files changed, 173 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/doc/guides/rel_notes/release_20_08.rst
>> b/doc/guides/rel_notes/release_20_08.rst
>>> index 39064af..d6f09bd 100644
>>> --- a/doc/guides/rel_notes/release_20_08.rst
>>> +++ b/doc/guides/rel_notes/release_20_08.rst
>>> @@ -56,6 +56,11 @@ New Features
>>>       Also, make sure to start the actual text at the margin.
>>>       =========================================================
>>>
>>> +   * **Added vDPA device APIs to query virtio queue statistics.**
>>> +
>>> +     A new 3 APIs has been added to query virtio queue statistics, to get their
>>
>> s/A new 3 APIs/3 new APIs/
>>
>> Other than this minor typo, it looks all good to me.
>>
>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>
>> I would like Intel feedback on this new API. If no feedback by June
>> 12th, I'll merge it as-is (no need to resend for the typo, I'll fixup
>> while applying).
>>
>>
>> Xiao, does the API sound right to you?
>> Do you have Virtio counters in your IFC device that you could expose?
>>
>> Thanks in advance,
>> Maxime
> 
> This stats API looks flexible. IFC virtio stats are not queried from the virtio VF, so I think
> IFC won't expose stats via this API for now.
> 
> Acked-by: Xiao Wang <xiao.w.wang@intel.com>


Thanks Xiao for the feedback, so it's all good to me.

Maxime
> BRs,
> Xiao
> 



* Re: [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-06-18 10:58       ` Maxime Coquelin
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-18 10:58 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler



On 6/2/20 5:47 PM, Matan Azrad wrote:
> Add DevX API to create and query virtio queue statistics from the HW.
> The following counters are supported by the HW per virtio queue:
> 	received_desc.
> 	completed_desc.
> 	error_cqes.
> 	bad_desc_errors.
> 	exceed_max_chain.
> 	invalid_buffer.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
>  drivers/common/mlx5/mlx5_devx_cmds.h            | 38 +++++++++++++
>  drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
>  drivers/common/mlx5/rte_common_mlx5_version.map |  2 +
>  4 files changed, 138 insertions(+), 1 deletion(-)

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>



* Re: [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
@ 2020-06-18 11:05       ` Maxime Coquelin
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-18 11:05 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler



On 6/2/20 5:47 PM, Matan Azrad wrote:
> Add support for statistics operations.
> 
> A DevX counter object is allocated per virtq in order to manage the
> virtq statistics.
> 
> The counter object is allocated before the virtq is created and destroyed
> after the virtq is destroyed, so the statistics are valid only within the
> lifetime of the virtq.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> ---
>  doc/guides/rel_notes/release_20_08.rst |  8 +++
>  doc/guides/vdpadevs/features/mlx5.ini  |  1 +
>  drivers/vdpa/mlx5/mlx5_vdpa.c          | 85 ++++++++++++++++++++++++++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa.h          | 45 +++++++++++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c    | 90 ++++++++++++++++++++++++++++++++++
>  5 files changed, 229 insertions(+)
> 


Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>



* Re: [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-06-18 12:13       ` Maxime Coquelin
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-18 12:13 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler



On 6/2/20 5:47 PM, Matan Azrad wrote:
> A new vDPA driver feature was added to query the virtq statistics from
> the HW.
> 
> Use this feature to show the HW queues statistics for the virtqs.
> 
> Command description: stats X Y.
> X is the device ID.
> Y is the queue ID; use Y=0xffff to show the statistics of all the virtio
> queues of device X.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> ---
>  doc/guides/sample_app_ug/vdpa.rst |   3 +-
>  examples/vdpa/main.c              | 119 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 121 insertions(+), 1 deletion(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>



* Re: [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics
  2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
                       ` (3 preceding siblings ...)
  2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-06-18 16:29     ` Maxime Coquelin
  2020-06-19  6:01       ` Maxime Coquelin
  4 siblings, 1 reply; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-18 16:29 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko; +Cc: dev, Shahaf Shuler



On 6/2/20 5:47 PM, Matan Azrad wrote:
> The vDPA device offloads all the datapath of the vhost device to the HW device.
> 
> In order to expose traffic information to the user, this series introduces new APIs to get traffic statistics and to reset them per virtio queue.
> 
> Since there are no formal statistics suggested by the virtio specs, this API doesn't define them and lets the drivers decide which statistics are good to report.
> 
> The statistics are taken directly from the vDPA driver managing the HW device.
> 
> Added also support for it in vdpa/mlx5 driver and in vdpa example application.
> 
> 
> V2:
> Move to per driver statistics according to the discussion between me, Maxime and Shahaf on V1.
> 
> v3:
> Send again for release 20.08.
> 
> Matan Azrad (4):
>   vhost: introduce operation to get vDPA queue stats
>   common/mlx5: support DevX virtq stats operations
>   vdpa/mlx5: support virtio queue statistics get
>   examples/vdpa: add statistics show command
> 
>  doc/guides/rel_notes/release_20_08.rst          |  13 +++
>  doc/guides/sample_app_ug/vdpa.rst               |   3 +-
>  doc/guides/vdpadevs/features/default.ini        |   1 +
>  doc/guides/vdpadevs/features/mlx5.ini           |   1 +
>  doc/guides/vdpadevs/features_overview.rst       |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
>  drivers/common/mlx5/mlx5_devx_cmds.h            |  38 ++++++++
>  drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
>  drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
>  drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  90 ++++++++++++++++++
>  examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
>  lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
>  lib/librte_vhost/rte_vhost_version.map          |   3 +
>  lib/librte_vhost/vdpa.c                         |  47 ++++++++++
>  16 files changed, 661 insertions(+), 3 deletions(-)
> 


Applied to dpdk-next-virtio/master

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics
  2020-06-18 16:29     ` [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
@ 2020-06-19  6:01       ` Maxime Coquelin
  0 siblings, 0 replies; 32+ messages in thread
From: Maxime Coquelin @ 2020-06-19  6:01 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko, Ferruh Yigit; +Cc: dev, Shahaf Shuler

Hi Ferruh,

On 6/18/20 6:29 PM, Maxime Coquelin wrote:
> 
> 
> On 6/2/20 5:47 PM, Matan Azrad wrote:
>> The vDPA device offloads all the datapath of the vhost device to the HW device.
>>
>> In order to expose traffic information to the user, this series introduces new APIs to get traffic statistics and to reset them per virtio queue.
>>
>> Since there are no formal statistics suggested by the virtio specs, this API doesn't define them and lets the drivers decide which statistics are good to report.
>>
>> The statistics are taken directly from the vDPA driver managing the HW device.
>>
>> Added also support for it in vdpa/mlx5 driver and in vdpa example application.
>>
>>
>> V2:
>> Move to per driver statistics according to the discussion between me, Maxime and Shahaf on V1.
>>
>> v3:
>> Send again for release 20.08.
>>
>> Matan Azrad (4):
>>   vhost: introduce operation to get vDPA queue stats
>>   common/mlx5: support DevX virtq stats operations
>>   vdpa/mlx5: support virtio queue statistics get
>>   examples/vdpa: add statistics show command
>>
>>  doc/guides/rel_notes/release_20_08.rst          |  13 +++
>>  doc/guides/sample_app_ug/vdpa.rst               |   3 +-
>>  doc/guides/vdpadevs/features/default.ini        |   1 +
>>  doc/guides/vdpadevs/features/mlx5.ini           |   1 +
>>  doc/guides/vdpadevs/features_overview.rst       |   3 +
>>  drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
>>  drivers/common/mlx5/mlx5_devx_cmds.h            |  38 ++++++++
>>  drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
>>  drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
>>  drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
>>  drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
>>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  90 ++++++++++++++++++
>>  examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
>>  lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
>>  lib/librte_vhost/rte_vhost_version.map          |   3 +
>>  lib/librte_vhost/vdpa.c                         |  47 ++++++++++
>>  16 files changed, 661 insertions(+), 3 deletions(-)
>>
> 
> 
> Applied to dpdk-next-virtio/master

In the end it was not: this version has been superseded.
I removed it from my tree.

Maxime

> Thanks,
> Maxime
> 



end of thread

Thread overview: 32+ messages
2020-04-02 11:26 [dpdk-dev] [PATCH 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
2020-04-02 11:26 ` [dpdk-dev] [PATCH 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
2020-04-15 14:36   ` Maxime Coquelin
2020-04-16  9:06     ` Matan Azrad
2020-04-16 13:19       ` Maxime Coquelin
2020-04-19  6:18         ` Shahaf Shuler
2020-04-20  7:13           ` Maxime Coquelin
2020-04-20 15:57             ` Shahaf Shuler
2020-04-20 16:18               ` Maxime Coquelin
2020-04-21  5:02                 ` Shahaf Shuler
2020-04-02 11:26 ` [dpdk-dev] [PATCH 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
2020-04-02 11:26 ` [dpdk-dev] [PATCH 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
2020-04-02 11:26 ` [dpdk-dev] [PATCH 4/4] examples/vdpa: add statistics show command Matan Azrad
2020-05-05 15:54 ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
2020-05-05 15:54   ` [dpdk-dev] [PATCH v2 4/4] examples/vdpa: add statistics show command Matan Azrad
2020-05-07 11:35   ` [dpdk-dev] [PATCH v2 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
2020-06-02 15:47   ` [dpdk-dev] [PATCH v3 " Matan Azrad
2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
2020-06-03  8:58       ` Maxime Coquelin
2020-06-04 10:36         ` Wang, Xiao W
2020-06-09  9:18           ` Maxime Coquelin
2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
2020-06-18 10:58       ` Maxime Coquelin
2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
2020-06-18 11:05       ` Maxime Coquelin
2020-06-02 15:47     ` [dpdk-dev] [PATCH v3 4/4] examples/vdpa: add statistics show command Matan Azrad
2020-06-18 12:13       ` Maxime Coquelin
2020-06-18 16:29     ` [dpdk-dev] [PATCH v3 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
2020-06-19  6:01       ` Maxime Coquelin
