DPDK patches and discussions
* [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics
       [not found] <01591112869-78828-1-git-send-email-matan@mellanox.com>
@ 2020-06-18 18:59 ` Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
                     ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Matan Azrad @ 2020-06-18 18:59 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

The vDPA device offloads all the datapath of the vhost device to the HW device.

To expose traffic information to the user, this series introduces new APIs to get traffic statistics and to reset them per virtio queue.

Since the virtio specs do not suggest any formal statistics, this API does not define them and lets the drivers decide which statistics are worth reporting.

The statistics are taken directly from the vDPA driver managing the HW device.

Support is also added in the vdpa/mlx5 driver and in the vdpa example application.
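As a sketch of the intended calling convention (the struct definitions below are taken from the series; the mock driver callback and its counter names are hypothetical stand-ins), a caller first probes the required array size with a NULL pointer, then retrieves the names:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Definitions as added by this series in rte_vdpa.h. */
#define RTE_VDPA_STATS_NAME_SIZE 64

struct rte_vdpa_stat {
	uint64_t id;    /* index into the names array */
	uint64_t value; /* counter value */
};

struct rte_vdpa_stat_name {
	char name[RTE_VDPA_STATS_NAME_SIZE];
};

/* Hypothetical stand-in for a driver's get_stats_names operation:
 * with a NULL array it returns the required number of entries,
 * otherwise it fills up to 'size' entries and returns the number
 * written, mirroring the contract documented by the new API. */
static int
mock_get_stats_names(struct rte_vdpa_stat_name *stats_names,
		     unsigned int size)
{
	static const char * const counters[] = {
		"received_desc", "completed_desc",
	};
	unsigned int n = sizeof(counters) / sizeof(counters[0]);
	unsigned int i;

	if (stats_names == NULL)
		return (int)n;
	if (size > n)
		size = n;
	for (i = 0; i < size; i++)
		snprintf(stats_names[i].name, RTE_VDPA_STATS_NAME_SIZE,
			 "%s", counters[i]);
	return (int)size;
}
```

The NULL-probe convention lets an application size its arrays without a fixed compile-time maximum, which matters here because each vendor driver reports a different counter set.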


V2:
Moved to per-driver statistics according to the discussion between me, Maxime and Shahaf on v1.

v3:
Send again for release 20.08.

v4:
small fixes and rebase.

Matan Azrad (4):
  vhost: introduce operation to get vDPA queue stats
  common/mlx5: support DevX virtq stats operations
  vdpa/mlx5: support virtio queue statistics get
  examples/vdpa: add statistics show command

 doc/guides/rel_notes/release_20_08.rst          |  13 +++
 doc/guides/sample_app_ug/vdpa.rst               |   3 +-
 doc/guides/vdpadevs/features/default.ini        |   1 +
 doc/guides/vdpadevs/features/mlx5.ini           |   1 +
 doc/guides/vdpadevs/features_overview.rst       |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            |  38 ++++++++
 drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
 drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  94 +++++++++++++++++++
 examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
 lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map          |   3 +
 lib/librte_vhost/vdpa.c                         |  47 ++++++++++
 16 files changed, 665 insertions(+), 3 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH v4 1/4] vhost: introduce operation to get vDPA queue stats
  2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
@ 2020-06-18 18:59   ` Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Matan Azrad @ 2020-06-18 18:59 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

The vDPA device offloads all the datapath of the vhost device to the HW
device.

To expose traffic information to the user, this patch introduces 3 new
APIs: to get traffic statistics, to get the device statistics names,
and to reset the statistics per virtio queue.

The statistics are taken directly from the vDPA driver managing the HW
device and can be different for each vendor driver.
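The three APIs share one indexing contract, stated later in the header comments: stats[i].id == i, so the id of a returned statistic is an index into the names array. A small helper (hypothetical, not part of the patch) resolving one label under that contract:

```c
#include <stdint.h>
#include <string.h>

/* Definitions as added by this patch in rte_vdpa.h. */
#define RTE_VDPA_STATS_NAME_SIZE 64

struct rte_vdpa_stat {
	uint64_t id;
	uint64_t value;
};

struct rte_vdpa_stat_name {
	char name[RTE_VDPA_STATS_NAME_SIZE];
};

/* Resolve the human-readable label of one returned statistic using
 * the index contract: names[stats[i].id].name labels stats[i].value.
 * Returns NULL for an out-of-range id. */
static const char *
stat_label(const struct rte_vdpa_stat_name *names, unsigned int n_names,
	   const struct rte_vdpa_stat *stat)
{
	if (stat->id >= n_names)
		return NULL;
	return names[stat->id].name;
}
```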

Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_08.rst    |   5 ++
 doc/guides/vdpadevs/features/default.ini  |   1 +
 doc/guides/vdpadevs/features_overview.rst |   3 +
 lib/librte_vhost/rte_vdpa.h               | 115 +++++++++++++++++++++++++++++-
 lib/librte_vhost/rte_vhost_version.map    |   3 +
 lib/librte_vhost/vdpa.c                   |  47 ++++++++++++
 6 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index f5a22bc..1d9bd5c 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -68,6 +68,11 @@ New Features
 
   * Added new PMD devarg ``reclaim_mem_mode``.
 
+* **Added vDPA device APIs to query virtio queue statistics.**
+
+     Three new APIs have been added to query virtio queue statistics, to get their
+     names and to reset them by a vDPA device.
+
 
 Removed Items
 -------------
diff --git a/doc/guides/vdpadevs/features/default.ini b/doc/guides/vdpadevs/features/default.ini
index 518e4f1..2c122a3 100644
--- a/doc/guides/vdpadevs/features/default.ini
+++ b/doc/guides/vdpadevs/features/default.ini
@@ -37,6 +37,7 @@ proto rarp           =
 proto reply ack      =
 proto host notifier  =
 proto pagefault      =
+queue statistics     =
 BSD nic_uio          =
 Linux VFIO           =
 Other kdrv           =
diff --git a/doc/guides/vdpadevs/features_overview.rst b/doc/guides/vdpadevs/features_overview.rst
index eb7eb3b..930bc87 100644
--- a/doc/guides/vdpadevs/features_overview.rst
+++ b/doc/guides/vdpadevs/features_overview.rst
@@ -96,6 +96,9 @@ proto host notifier
 proto pagefault
   Slave expose page-fault FD for migration process.
 
+queue statistics
+  Support virtio queue statistics query.
+
 BSD nic_uio
   BSD ``nic_uio`` module supported.
 
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
index 3c400ee..ecb3d91 100644
--- a/lib/librte_vhost/rte_vdpa.h
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -37,6 +37,34 @@ struct rte_vdpa_dev_addr {
 	};
 };
 
+/** Maximum name length for statistics counters */
+#define RTE_VDPA_STATS_NAME_SIZE 64
+
+/**
+ * A vDPA device statistic structure
+ *
+ * This structure is used by rte_vdpa_stats_get() to provide
+ * statistics from the HW vDPA device.
+ *
+ * It maps a name id, corresponding to an index in the array returned
+ * by rte_vdpa_get_stats_names, to a statistic value.
+ */
+struct rte_vdpa_stat {
+	uint64_t id;        /**< The index in stats name array */
+	uint64_t value;     /**< The statistic counter value */
+};
+
+/**
+ * A name element for statistics
+ *
+ * An array of this structure is returned by rte_vdpa_get_stats_names.
+ * It lists the names of extended statistics for a PMD. The rte_vdpa_stat
+ * structure references these names by their array index
+ */
+struct rte_vdpa_stat_name {
+	char name[RTE_VDPA_STATS_NAME_SIZE]; /**< The statistic name */
+};
+
 /**
  * vdpa device operations
  */
@@ -73,8 +101,19 @@ struct rte_vdpa_dev_ops {
 	int (*get_notify_area)(int vid, int qid,
 			uint64_t *offset, uint64_t *size);
 
+	/** Get statistics name */
+	int (*get_stats_names)(int did, struct rte_vdpa_stat_name *stats_names,
+			       unsigned int size);
+
+	/** Get statistics of the queue */
+	int (*get_stats)(int did, int qid, struct rte_vdpa_stat *stats,
+			 unsigned int n);
+
+	/** Reset statistics of the queue */
+	int (*reset_stats)(int did, int qid);
+
 	/** Reserved for future extension */
-	void *reserved[5];
+	void *reserved[2];
 };
 
 /**
@@ -200,4 +239,78 @@ struct rte_vdpa_device *
 __rte_experimental
 int
 rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve names of statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param stats_names
+ *   array of at least size elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in stats_names array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats name array.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics of a vDPA device.
+ *
+ * There is an assumption that 'stats_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * And the array index is the same as the id field of 'struct rte_vdpa_stat':
+ * stats[i].id == i
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @param stats
+ *   A pointer to a table of structure of type rte_vdpa_stat to be filled with
+ *   device statistics ids and values.
+ * @param n
+ *   The number of elements in stats array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats table.
+ */
+__rte_experimental
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics of a vDPA device.
+ *
+ * @param did
+ *  device id
+ * @param qid
+ *  queue id
+ * @return
+ *   0 on success, a negative value on error.
+ */
+__rte_experimental
+int
+rte_vdpa_reset_stats(int did, uint16_t qid);
 #endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 051d08c..3a2f7df 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -38,6 +38,9 @@ EXPERIMENTAL {
 	rte_vdpa_find_device_id;
 	rte_vdpa_get_device;
 	rte_vdpa_get_device_num;
+	rte_vdpa_get_stats_names;
+	rte_vdpa_get_stats;
+	rte_vdpa_reset_stats;
 	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_get_vdpa_device_id;
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
index b2b2a10..1f5cdcd 100644
--- a/lib/librte_vhost/vdpa.c
+++ b/lib/librte_vhost/vdpa.c
@@ -227,3 +227,50 @@ struct rte_vdpa_device *
 		free_ind_table(idesc);
 	return -1;
 }
+
+int
+rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			 unsigned int size)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats_names, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats_names(did, stats_names, size);
+}
+
+int
+rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats,
+		   unsigned int n)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	if (!stats || !n)
+		return -EINVAL;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->get_stats(did, qid, stats, n);
+}
+
+int
+rte_vdpa_reset_stats(int did, uint16_t qid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+
+	vdpa_dev = rte_vdpa_get_device(did);
+	if (!vdpa_dev)
+		return -ENODEV;
+
+	RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->reset_stats, -ENOTSUP);
+
+	return vdpa_dev->ops->reset_stats(did, qid);
+}
-- 
1.8.3.1



* [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations
  2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
@ 2020-06-18 18:59   ` Matan Azrad
  2020-06-19  6:07     ` Maxime Coquelin
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Matan Azrad @ 2020-06-18 18:59 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

Add DevX API to create and query virtio queue statistics from the HW.
The following counters are supported by the HW per virtio queue:
	received_desc.
	completed_desc.
	error_cqes.
	bad_desc_errors.
	exceed_max_chain.
	invalid_buffer.
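Availability of these counters is gated by an HCA capability bit. A minimal sketch of the check this patch adds (the constant value is taken from the patch's mlx5_prm.h change; the helper wrapper is illustrative, since the real code reads general_obj_types from the HW via DevX):

```c
#include <stdint.h>

/* Capability bit as added to mlx5_prm.h by this patch. */
#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS (1ULL << 0x1c)

/* The driver derives its queue_counters_valid flag from the HCA
 * general_obj_types bitmask with exactly this test. */
static int
queue_counters_supported(uint64_t general_obj_types)
{
	return !!(general_obj_types &
		  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
}
```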

Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h            | 38 +++++++++++++
 drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |  2 +
 4 files changed, 138 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index ccba1c1..08e6225 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -464,6 +464,9 @@ struct mlx5_devx_obj *
 	attr->vdpa.valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
 					 general_obj_types) &
 			      MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q);
+	attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+							general_obj_types) &
+				  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
 	if (attr->qos.sup) {
 		MLX5_SET(query_hca_cap_in, in, op_mod,
 			 MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1265,6 +1268,7 @@ struct mlx5_devx_obj *
 	MLX5_SET(virtio_q, virtctx, umem_3_id, attr->umems[2].id);
 	MLX5_SET(virtio_q, virtctx, umem_3_size, attr->umems[2].size);
 	MLX5_SET64(virtio_q, virtctx, umem_3_offset, attr->umems[2].offset);
+	MLX5_SET(virtio_q, virtctx, counter_set_id, attr->counters_obj_id);
 	MLX5_SET(virtio_net_q, virtq, tisn_or_qpn, attr->tis_id);
 	virtq_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
 						    sizeof(out));
@@ -1539,3 +1543,72 @@ struct mlx5_devx_obj *
 	}
 	return ret;
 }
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_virtio_q_counters(void *ctx)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
+						       sizeof(*couners_obj), 0);
+	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
+
+	if (!couners_obj) {
+		DRV_LOG(ERR, "Failed to allocate virtio queue counters data.");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out,
+						      sizeof(out));
+	if (!couners_obj->obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
+			" DevX.");
+		rte_free(couners_obj);
+		return NULL;
+	}
+	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+	return couners_obj;
+}
+
+int
+mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				   struct mlx5_devx_virtio_q_couners_attr *attr)
+{
+	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0};
+	void *hdr = MLX5_ADDR_OF(query_virtio_q_counters_out, in, hdr);
+	void *virtio_q_counters = MLX5_ADDR_OF(query_virtio_q_counters_out, out,
+					       virtio_q_counters);
+	int ret;
+
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id);
+	ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out,
+					sizeof(out));
+	if (ret) {
+		DRV_LOG(ERR, "Failed to query virtio q counters using DevX.");
+		rte_errno = errno;
+		return -errno;
+	}
+	attr->received_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					 received_desc);
+	attr->completed_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters,
+					  completed_desc);
+	attr->error_cqes = MLX5_GET(virtio_q_counters, virtio_q_counters,
+				    error_cqes);
+	attr->bad_desc_errors = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					 bad_desc_errors);
+	attr->exceed_max_chain = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					  exceed_max_chain);
+	attr->invalid_buffer = MLX5_GET(virtio_q_counters, virtio_q_counters,
+					invalid_buffer);
+	return ret;
+}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 49b174a..59a70a0 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -64,6 +64,7 @@ struct mlx5_hca_vdpa_attr {
 	uint32_t event_mode:3;
 	uint32_t log_doorbell_stride:5;
 	uint32_t log_doorbell_bar_size:5;
+	uint32_t queue_counters_valid:1;
 	uint32_t max_num_virtio_queues;
 	struct {
 		uint32_t a;
@@ -272,6 +273,7 @@ struct mlx5_devx_virtq_attr {
 	uint32_t qp_id;
 	uint32_t queue_index;
 	uint32_t tis_id;
+	uint32_t counters_obj_id;
 	uint64_t dirty_bitmap_addr;
 	uint64_t type;
 	uint64_t desc_addr;
@@ -300,6 +302,15 @@ struct mlx5_devx_qp_attr {
 	uint64_t wq_umem_offset;
 };
 
+struct mlx5_devx_virtio_q_couners_attr {
+	uint64_t received_desc;
+	uint64_t completed_desc;
+	uint32_t error_cqes;
+	uint32_t bad_desc_errors;
+	uint32_t exceed_max_chain;
+	uint32_t invalid_buffer;
+};
+
 /* mlx5_devx_cmds.c */
 
 __rte_internal
@@ -374,4 +385,31 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
 int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			     struct mlx5_devx_rqt_attr *rqt_attr);
 
+/**
+ * Create virtio queue counters object DevX API.
+ *
+ * @param[in] ctx
+ *   Device context.
+ *
+ * @return
+ *   The DevX object created, NULL otherwise and rte_errno is set.
+ */
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx);
+
+/**
+ * Query virtio queue counters object using DevX API.
+ *
+ * @param[in] couners_obj
+ *   Pointer to virtq object structure.
+ * @param [in/out] attr
+ *   Pointer to virtio queue counters attributes structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_internal
+int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj,
+				  struct mlx5_devx_virtio_q_couners_attr *attr);
+
 #endif /* RTE_PMD_MLX5_DEVX_CMDS_H_ */
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e4ef2ac..5fc10d6 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -949,6 +949,7 @@ enum {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
+	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
 };
 
 enum {
@@ -2006,6 +2007,7 @@ struct mlx5_ifc_create_cq_in_bits {
 
 enum {
 	MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
+	MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 };
 
 struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2024,6 +2026,27 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits {
 	u8 reserved_at_60[0x20];
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_40[0x40];
+	u8 received_desc[0x40];
+	u8 completed_desc[0x40];
+	u8 error_cqes[0x20];
+	u8 bad_desc_errors[0x20];
+	u8 exceed_max_chain[0x20];
+	u8 invalid_buffer[0x20];
+	u8 reserved_at_180[0x50];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
 enum {
 	MLX5_VIRTQ_STATE_INIT = 0,
 	MLX5_VIRTQ_STATE_RDY = 1,
@@ -2064,7 +2087,8 @@ struct mlx5_ifc_virtio_q_bits {
 	u8 umem_3_id[0x20];
 	u8 umem_3_size[0x20];
 	u8 umem_3_offset[0x40];
-	u8 reserved_at_300[0x100];
+	u8 counter_set_id[0x20];
+	u8 reserved_at_320[0xe0];
 };
 
 struct mlx5_ifc_virtio_net_q_bits {
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index 68f1207..3f62400 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -18,6 +18,7 @@ INTERNAL {
 	mlx5_devx_cmd_create_tir;
 	mlx5_devx_cmd_create_td;
 	mlx5_devx_cmd_create_tis;
+	mlx5_devx_cmd_create_virtio_q_counters;
 	mlx5_devx_cmd_create_virtq;
 	mlx5_devx_cmd_destroy;
 	mlx5_devx_cmd_flow_counter_alloc;
@@ -31,6 +32,7 @@ INTERNAL {
 	mlx5_devx_cmd_modify_virtq;
 	mlx5_devx_cmd_qp_query_tis_td;
 	mlx5_devx_cmd_query_hca_attr;
+	mlx5_devx_cmd_query_virtio_q_counters;
 	mlx5_devx_cmd_query_virtq;
 	mlx5_devx_get_out_command_status;
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH v4 3/4] vdpa/mlx5: support virtio queue statistics get
  2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 1/4] vhost: introduce operation to get vDPA queue stats Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-06-18 18:59   ` Matan Azrad
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command Matan Azrad
  2020-06-19  6:39   ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
  4 siblings, 0 replies; 9+ messages in thread
From: Matan Azrad @ 2020-06-18 18:59 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

Add support for statistics operations.

A DevX counter object is allocated per virtq in order to manage the
virtq statistics.

The counter object is allocated before the virtq creation and destroyed
after it, so the statistics are valid only in the lifetime of the
virtq.
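In this patch, reset is implemented as a snapshot rather than a HW clear: mlx5_vdpa_virtq_stats_reset() re-queries the counters into virtq->reset, and a later stats query reports current minus snapshot. A reduced sketch of that delta computation (the two-field struct here is an illustrative reduction of mlx5_devx_virtio_q_couners_attr):

```c
#include <stdint.h>

/* Two of the HW counter fields; the patch's real attribute struct
 * (mlx5_devx_virtio_q_couners_attr) carries six. */
struct virtq_counters {
	uint64_t received_desc;
	uint64_t completed_desc;
};

/* Report counters relative to the last reset snapshot: the stats-get
 * path subtracts the stored snapshot from the freshly queried HW
 * values, so the HW counters never need to be cleared. */
static struct virtq_counters
counters_since_reset(const struct virtq_counters *hw,
		     const struct virtq_counters *snap)
{
	struct virtq_counters d = {
		.received_desc = hw->received_desc - snap->received_desc,
		.completed_desc = hw->completed_desc - snap->completed_desc,
	};
	return d;
}
```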

Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/rel_notes/release_20_08.rst |  8 +++
 doc/guides/vdpadevs/features/mlx5.ini  |  1 +
 drivers/vdpa/mlx5/mlx5_vdpa.c          | 85 ++++++++++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa.h          | 45 ++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c    | 94 ++++++++++++++++++++++++++++++++++
 5 files changed, 233 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 1d9bd5c..e2c93b1 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -74,6 +74,14 @@ New Features
      names and to reset them by a vDPA device.
 
 
+   * **Updated Mellanox mlx5 vDPA driver.**
+
+     Updated Mellanox mlx5 vDPA driver with new features, including:
+
+     * Added support for virtio queue statistics.
+
+
+
 Removed Items
 -------------
 
diff --git a/doc/guides/vdpadevs/features/mlx5.ini b/doc/guides/vdpadevs/features/mlx5.ini
index 1da9c1b..788d4e0 100644
--- a/doc/guides/vdpadevs/features/mlx5.ini
+++ b/doc/guides/vdpadevs/features/mlx5.ini
@@ -17,6 +17,7 @@ packed               = Y
 proto mq             = Y
 proto log shmfd      = Y
 proto host notifier  = Y
+queue statistics     = Y
 Other kdrv           = Y
 ARMv8                = Y
 Power8               = Y
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 1113d6c..a80e3f4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -8,6 +8,7 @@
 #include <rte_errno.h>
 #include <rte_bus_pci.h>
 #include <rte_pci.h>
+#include <rte_string_fns.h>
 
 #include <mlx5_glue.h>
 #include <mlx5_common.h>
@@ -274,6 +275,85 @@
 	return 0;
 }
 
+static int
+mlx5_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names,
+			  unsigned int size)
+{
+	static const char *mlx5_vdpa_stats_names[MLX5_VDPA_STATS_MAX] = {
+		"received_descriptors",
+		"completed_descriptors",
+		"bad descriptor errors",
+		"exceed max chain",
+		"invalid buffer",
+		"completion errors",
+	};
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+	unsigned int i;
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!stats_names)
+		return MLX5_VDPA_STATS_MAX;
+	size = RTE_MIN(size, (unsigned int)MLX5_VDPA_STATS_MAX);
+	for (i = 0; i < size; ++i)
+		strlcpy(stats_names[i].name, mlx5_vdpa_stats_names[i],
+			RTE_VDPA_STATS_NAME_SIZE);
+	return size;
+}
+
+static int
+mlx5_vdpa_get_stats(int did, int qid, struct rte_vdpa_stat *stats,
+		    unsigned int n)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics is not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_get(priv, qid, stats, n);
+}
+
+static int
+mlx5_vdpa_reset_stats(int did, int qid)
+{
+	struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did);
+
+	if (priv == NULL) {
+		DRV_LOG(ERR, "Invalid device id: %d.", did);
+		return -ENODEV;
+	}
+	if (!priv->configured) {
+		DRV_LOG(ERR, "Device %d was not configured.", did);
+		return -ENODATA;
+	}
+	if (qid >= (int)priv->nr_virtqs) {
+		DRV_LOG(ERR, "Too big vring id: %d.", qid);
+		return -E2BIG;
+	}
+	if (!priv->caps.queue_counters_valid) {
+		DRV_LOG(ERR, "Virtq statistics is not supported for device %d.",
+			did);
+		return -ENOTSUP;
+	}
+	return mlx5_vdpa_virtq_stats_reset(priv, qid);
+}
+
 static struct rte_vdpa_dev_ops mlx5_vdpa_ops = {
 	.get_queue_num = mlx5_vdpa_get_queue_num,
 	.get_features = mlx5_vdpa_get_vdpa_features,
@@ -286,6 +366,9 @@
 	.get_vfio_group_fd = NULL,
 	.get_vfio_device_fd = mlx5_vdpa_get_device_fd,
 	.get_notify_area = mlx5_vdpa_get_notify_area,
+	.get_stats_names = mlx5_vdpa_get_stats_names,
+	.get_stats = mlx5_vdpa_get_stats,
+	.reset_stats = mlx5_vdpa_reset_stats,
 };
 
 static struct ibv_device *
@@ -489,6 +572,8 @@
 		rte_errno = ENOTSUP;
 		goto error;
 	}
+	if (!attr.vdpa.queue_counters_valid)
+		DRV_LOG(DEBUG, "No capability to support virtq statistics.");
 	priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) +
 			   sizeof(struct mlx5_vdpa_virtq) *
 			   attr.vdpa.max_num_virtio_queues * 2,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcc216a..80b4c4b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -76,6 +76,7 @@ struct mlx5_vdpa_virtq {
 	uint16_t vq_size;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
+	struct mlx5_devx_obj *counters;
 	struct mlx5_vdpa_event_qp eqp;
 	struct {
 		struct mlx5dv_devx_umem *obj;
@@ -83,6 +84,7 @@ struct mlx5_vdpa_virtq {
 		uint32_t size;
 	} umems[3];
 	struct rte_intr_handle intr_handle;
+	struct mlx5_devx_virtio_q_couners_attr reset;
 };
 
 struct mlx5_vdpa_steer {
@@ -127,6 +129,16 @@ struct mlx5_vdpa_priv {
 	struct mlx5_vdpa_virtq virtqs[];
 };
 
+enum {
+	MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+	MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+	MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+	MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+	MLX5_VDPA_STATS_INVALID_BUFFER,
+	MLX5_VDPA_STATS_COMPLETION_ERRORS,
+	MLX5_VDPA_STATS_MAX
+};
+
 /*
  * Check whether virtq is for traffic receive.
  * According to VIRTIO_NET Spec the virtqueues index identity its type by:
@@ -352,4 +364,37 @@ int mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base,
  */
 int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index);
 
+/**
+ * Get virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ * @param stats
+ *   The virtq statistics array to fill.
+ * @param n
+ *   The number of elements in @p stats array.
+ *
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   @p stats array.
+ */
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n);
+
+/**
+ * Reset virtq statistics.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] qid
+ *   The virtq index.
+ *
+ * @return
+ *   A negative value on error, otherwise 0.
+ */
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid);
 #endif /* RTE_PMD_MLX5_VDPA_H_ */
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index bd48460..e99d82c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -72,6 +72,11 @@
 			rte_free(virtq->umems[i].buf);
 	}
 	memset(&virtq->umems, 0, sizeof(virtq->umems));
+	if (virtq->counters) {
+		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
+		virtq->counters = NULL;
+	}
+	memset(&virtq->reset, 0, sizeof(virtq->reset));
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	return 0;
@@ -205,6 +210,16 @@
 		DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no"
 			" need event QPs and event mechanism.", index);
 	}
+	if (priv->caps.queue_counters_valid) {
+		virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
+								    (priv->ctx);
+		if (!virtq->counters) {
+			DRV_LOG(ERR, "Failed to create virtq couners for virtq"
+				" %d.", index);
+			goto error;
+		}
+		attr.counters_obj_id = virtq->counters->id;
+	}
 	/* Setup 3 UMEMs for each virtq. */
 	for (i = 0; i < RTE_DIM(virtq->umems); ++i) {
 		virtq->umems[i].size = priv->caps.umems[i].a * vq.size +
@@ -455,3 +470,82 @@
 	}
 	return 0;
 }
+
+int
+mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
+			  struct rte_vdpa_stat *stats, unsigned int n)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	struct mlx5_devx_virtio_q_couners_attr attr = {0};
+	int ret;
+
+	if (!virtq->virtq || !virtq->enable) {
+		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
+			"is invalid.", qid);
+		return -EINVAL;
+	}
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
+		return ret;
+	}
+	ret = (int)RTE_MIN(n, (unsigned int)MLX5_VDPA_STATS_MAX);
+	if (ret == MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS,
+		.value = attr.received_desc - virtq->reset.received_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS,
+		.value = attr.completed_desc - virtq->reset.completed_desc,
+	};
+	if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS,
+		.value = attr.bad_desc_errors - virtq->reset.bad_desc_errors,
+	};
+	if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN)
+		return ret;
+	stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN,
+		.value = attr.exceed_max_chain - virtq->reset.exceed_max_chain,
+	};
+	if (ret == MLX5_VDPA_STATS_INVALID_BUFFER)
+		return ret;
+	stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_INVALID_BUFFER,
+		.value = attr.invalid_buffer - virtq->reset.invalid_buffer,
+	};
+	if (ret == MLX5_VDPA_STATS_COMPLETION_ERRORS)
+		return ret;
+	stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) {
+		.id = MLX5_VDPA_STATS_COMPLETION_ERRORS,
+		.value = attr.error_cqes - virtq->reset.error_cqes,
+	};
+	return ret;
+}
+
+int
+mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
+{
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
+	int ret;
+
+	if (!virtq->virtq || !virtq->enable) {
+		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
+			"is invalid.", qid);
+		return -EINVAL;
+	}
+	MLX5_ASSERT(virtq->counters);
+	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
+						    &virtq->reset);
+	if (ret)
+		DRV_LOG(ERR, "Failed to read virtq %d reset stats from HW.",
+			qid);
+	return ret;
+}
-- 
1.8.3.1
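[Editorial note] The chain of `if (ret == <next stat id>) return ret;` checks in mlx5_vdpa_virtq_stats_get above relies on the stat IDs being consecutive from 0, so that `ret = min(n, MLX5_VDPA_STATS_MAX)` truncated to the caller's buffer size can be compared against the next ID to detect a full buffer. A minimal stand-alone sketch of the same idea, written as a loop (all names here are illustrative stand-ins, not the driver's):

```c
#include <assert.h>

/* Illustrative stand-ins for the driver's consecutive stat IDs. */
enum { STAT_RECEIVED, STAT_COMPLETED, STAT_ERRORS, STAT_MAX };

struct stat_rec {
	int id;
	unsigned long value;
};

/*
 * Write at most min(n, STAT_MAX) entries into the caller's buffer and
 * return how many were written.  This is equivalent to the driver's
 * early-exit chain: each "if (ret == <next id>) return ret;" fires
 * exactly when the truncated count says the buffer is full.
 */
static int fill_stats(struct stat_rec *stats, unsigned int n,
		      const unsigned long raw[STAT_MAX])
{
	int ret = (int)(n < STAT_MAX ? n : STAT_MAX);
	int i;

	for (i = 0; i < ret; i++) {
		stats[i].id = i;
		stats[i].value = raw[i];
	}
	return ret;
}
```

With a two-entry buffer only the first two counters are reported; the return value tells the caller how many entries are valid.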


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command
  2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                     ` (2 preceding siblings ...)
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
@ 2020-06-18 18:59   ` Matan Azrad
  2020-06-19 15:02     ` Ferruh Yigit
  2020-06-19  6:39   ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
  4 siblings, 1 reply; 9+ messages in thread
From: Matan Azrad @ 2020-06-18 18:59 UTC (permalink / raw)
  To: Maxime Coquelin, Xiao Wang; +Cc: dev

A new vDPA driver feature was added to query the virtq statistics from
the HW.

Use this feature to show the HW queues statistics for the virtqs.

Command syntax: stats X Y.
X is the device ID.
Y is the queue ID; use Y=0xffff to show the statistics of all the
virtio queues of device X.
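The command follows the usual DPDK two-call pattern: query the number of
statistics with a NULL buffer, allocate names and values in one block,
then fetch per queue. A self-contained sketch with mocked driver
callbacks (mock_get_stats_names, mock_get_stats and the struct names are
illustrative stand-ins, not the real rte_vdpa API):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define STATS_NAME_SIZE 64

struct stat_name { char name[STATS_NAME_SIZE]; };
struct stat_rec  { int id; unsigned long value; };

/* Mocked driver exposing two per-queue counters. */
static int mock_get_stats_names(struct stat_name *names, unsigned int n)
{
	static const char *known[] = {"received_desc", "completed_desc"};
	unsigned int i, total = 2;

	if (names == NULL)	/* size query, like rte_vdpa_get_stats_names(did, NULL, 0) */
		return (int)total;
	for (i = 0; i < n && i < total; i++)
		snprintf(names[i].name, STATS_NAME_SIZE, "%s", known[i]);
	return (int)total;
}

static int mock_get_stats(int qid, struct stat_rec *stats, unsigned int n)
{
	unsigned int i, total = 2;

	for (i = 0; i < n && i < total; i++) {
		stats[i].id = (int)i;
		stats[i].value = 100UL * (unsigned int)(qid + 1) + i;
	}
	return (int)(n < total ? n : total);
}

/* Size query, one allocation for names + values, then per-queue fill. */
static int show_queue_stats(int qid)
{
	int n = mock_get_stats_names(NULL, 0);
	struct stat_name *names;
	struct stat_rec *stats;
	int i;

	if (n <= 0)
		return -1;
	names = calloc((size_t)n, sizeof(*names) + sizeof(*stats));
	if (names == NULL)
		return -1;
	stats = (struct stat_rec *)(names + n);	/* values live after the names */
	mock_get_stats_names(names, (unsigned int)n);
	n = mock_get_stats(qid, stats, (unsigned int)n);
	for (i = 0; i < n; i++)
		printf("%-*s %lu\n", STATS_NAME_SIZE,
		       names[stats[i].id].name, stats[i].value);
	free(names);
	return n;
}
```

The single allocation holding both the name table and the value array
mirrors what the sample application does with rte_zmalloc below.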

Signed-off-by: Matan Azrad <matan@mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/sample_app_ug/vdpa.rst |   3 +-
 examples/vdpa/main.c              | 119 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 745f196..d66a724 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -44,7 +44,8 @@ where
   1. help: show help message
   2. list: list all available vdpa devices
   3. create: create a new vdpa port with socket file and vdpa device address
-  4. quit: unregister vhost driver and exit the application
+  4. stats: show statistics of virtio queues
+  5. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index d9a9112..cdef715 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -18,6 +18,7 @@
 #include <cmdline_parse.h>
 #include <cmdline_socket.h>
 #include <cmdline_parse_string.h>
+#include <cmdline_parse_num.h>
 #include <cmdline.h>
 
 #define MAX_PATH_LEN 128
@@ -29,6 +30,9 @@ struct vdpa_port {
 	int did;
 	int vid;
 	uint64_t flags;
+	int stats_n;
+	struct rte_vdpa_stat_name *stats_names;
+	struct rte_vdpa_stat *stats;
 };
 
 static struct vdpa_port vports[MAX_VDPA_SAMPLE_PORTS];
@@ -199,6 +203,10 @@ struct vdpa_port {
 		RTE_LOG(ERR, VDPA,
 				"Fail to unregister vhost driver for %s.\n",
 				socket_path);
+	if (vport->stats_names) {
+		rte_free(vport->stats_names);
+		vport->stats_names = NULL;
+	}
 }
 
 static void
@@ -240,6 +248,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 		"    help                                      : Show interactive instructions.\n"
 		"    list                                      : list all available vdpa devices.\n"
 		"    create <socket file> <vdev addr>          : create a new vdpa port.\n"
+		"    stats <device ID> <virtio queue ID>       : show statistics of virtio queue, 0xffff for all.\n"
 		"    quit                                      : exit vdpa sample app.\n"
 	);
 }
@@ -363,6 +372,115 @@ static void cmd_create_vdpa_port_parsed(void *parsed_result,
 	},
 };
 
+/* *** STATS *** */
+struct cmd_stats_result {
+	cmdline_fixed_string_t stats;
+	uint16_t did;
+	uint16_t qid;
+};
+
+static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
+				    __attribute__((unused)) void *data)
+{
+	struct cmd_stats_result *res = parsed_result;
+	struct rte_vdpa_device *vdev = rte_vdpa_get_device(res->did);
+	struct vdpa_port *vport = NULL;
+	uint32_t first, last;
+	int i;
+
+	if (!vdev) {
+		RTE_LOG(ERR, VDPA, "Invalid device id %" PRIu16 ".\n",
+			res->did);
+		return;
+	}
+	for (i = 0; i < RTE_MIN(MAX_VDPA_SAMPLE_PORTS, dev_total); i++) {
+		if (vports[i].did == res->did) {
+			vport = &vports[i];
+			break;
+		}
+	}
+	if (!vport) {
+		RTE_LOG(ERR, VDPA, "Device id %" PRIu16 " was not created.\n",
+			res->did);
+		return;
+	}
+	if (res->qid == 0xFFFF) {
+		first = 0;
+		last = rte_vhost_get_vring_num(vport->vid);
+		if (last == 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get num of actual virtqs"
+				" for device id %d.\n", (int)res->did);
+			return;
+		}
+		last--;
+	} else {
+		first = res->qid;
+		last = res->qid;
+	}
+	if (!vport->stats_names) {
+		vport->stats_n = rte_vdpa_get_stats_names(vport->did, NULL, 0);
+		if (vport->stats_n <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get names number of "
+				"device %d stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats_names = rte_zmalloc(NULL,
+			(sizeof(*vport->stats_names) + sizeof(*vport->stats)) *
+							vport->stats_n, 0);
+		if (!vport->stats_names) {
+			RTE_LOG(ERR, VDPA, "Failed to allocate memory for stat"
+				" names of device %d.\n", (int)res->did);
+			return;
+		}
+		i = rte_vdpa_get_stats_names(vport->did, vport->stats_names,
+						vport->stats_n);
+		if (vport->stats_n != i) {
+			RTE_LOG(ERR, VDPA, "Failed to get names of device %d "
+				"stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats = (struct rte_vdpa_stat *)
+					(vport->stats_names + vport->stats_n);
+	}
+	cmdline_printf(cl, "\nDevice %d:\n", (int)res->did);
+	for (; first <= last; first++) {
+		memset(vport->stats, 0, sizeof(*vport->stats) * vport->stats_n);
+		if (rte_vdpa_get_stats(vport->did, (int)first, vport->stats,
+					vport->stats_n) <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get vdpa queue statistics"
+				" for device id %d qid %d.\n", (int)res->did,
+				(int)first);
+			return;
+		}
+		cmdline_printf(cl, "\tVirtq %" PRIu32 ":\n", first);
+		for (i = 0; i < vport->stats_n; ++i) {
+			cmdline_printf(cl, "\t\t%-*s %-16" PRIu64 "\n",
+				RTE_VDPA_STATS_NAME_SIZE,
+				vport->stats_names[vport->stats[i].id].name,
+				vport->stats[i].value);
+		}
+	}
+}
+
+cmdline_parse_token_string_t cmd_device_stats_ =
+	TOKEN_STRING_INITIALIZER(struct cmd_stats_result, stats, "stats");
+cmdline_parse_token_num_t cmd_device_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, did, UINT32);
+cmdline_parse_token_num_t cmd_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, qid, UINT32);
+
+cmdline_parse_inst_t cmd_device_stats = {
+	.f = cmd_device_stats_parsed,
+	.data = NULL,
+	.help_str = "stats: show device statistics",
+	.tokens = {
+		(void *)&cmd_device_stats_,
+		(void *)&cmd_device_id,
+		(void *)&cmd_queue_id,
+		NULL,
+	},
+};
+
 /* *** QUIT *** */
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
@@ -392,6 +510,7 @@ static void cmd_quit_parsed(__rte_unused void *parsed_result,
 	(cmdline_parse_inst_t *)&cmd_help,
 	(cmdline_parse_inst_t *)&cmd_list_vdpa_devices,
 	(cmdline_parse_inst_t *)&cmd_create_vdpa_port,
+	(cmdline_parse_inst_t *)&cmd_device_stats,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	NULL,
 };
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
@ 2020-06-19  6:07     ` Maxime Coquelin
  0 siblings, 0 replies; 9+ messages in thread
From: Maxime Coquelin @ 2020-06-19  6:07 UTC (permalink / raw)
  To: Matan Azrad, Xiao Wang; +Cc: dev



On 6/18/20 8:59 PM, Matan Azrad wrote:
> Add DevX API to create and query virtio queue statistics from the HW.
> The next counters are supported by the HW per virtio queue:
> 	received_desc.
> 	completed_desc.
> 	error_cqes.
> 	bad_desc_errors.
> 	exceed_max_chain.
> 	invalid_buffer.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  drivers/common/mlx5/mlx5_devx_cmds.c            | 73 +++++++++++++++++++++++++
>  drivers/common/mlx5/mlx5_devx_cmds.h            | 38 +++++++++++++
>  drivers/common/mlx5/mlx5_prm.h                  | 26 ++++++++-
>  drivers/common/mlx5/rte_common_mlx5_version.map |  2 +
>  4 files changed, 138 insertions(+), 1 deletion(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>



* Re: [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics
  2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
                     ` (3 preceding siblings ...)
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-06-19  6:39   ` Maxime Coquelin
  4 siblings, 0 replies; 9+ messages in thread
From: Maxime Coquelin @ 2020-06-19  6:39 UTC (permalink / raw)
  To: Matan Azrad, Xiao Wang; +Cc: dev



On 6/18/20 8:59 PM, Matan Azrad wrote:
> The vDPA device offloads all the datapath of the vhost device to the HW device.
> 
> In order to expose traffic information to the user, this series introduces new APIs to get and reset traffic statistics per virtio queue.
> 
> Since the virtio specs do not suggest any formal statistics, this API does not define them and lets each driver decide which statistics are worth reporting.
> 
> The statistics are taken directly from the vDPA driver managing the HW device.
> 
> Support was also added in the vdpa/mlx5 driver and in the vdpa example application.
> 
> 
> V2:
> Move to per driver statistics according to the discussion between me, Maxime and Shahaf on V1.
> 
> v3:
> Send again for release 20.08.
> 
> v4:
> small fixes and rebase.
> 
> Matan Azrad (4):
>   vhost: inroduce operation to get vDPA queue stats
>   common/mlx5: support DevX virtq stats operations
>   vdpa/mlx5: support virtio queue statistics get
>   examples/vdpa: add statistics show command
> 
>  doc/guides/rel_notes/release_20_08.rst          |  13 +++
>  doc/guides/sample_app_ug/vdpa.rst               |   3 +-
>  doc/guides/vdpadevs/features/default.ini        |   1 +
>  doc/guides/vdpadevs/features/mlx5.ini           |   1 +
>  doc/guides/vdpadevs/features_overview.rst       |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.c            |  73 +++++++++++++++
>  drivers/common/mlx5/mlx5_devx_cmds.h            |  38 ++++++++
>  drivers/common/mlx5/mlx5_prm.h                  |  26 +++++-
>  drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
>  drivers/vdpa/mlx5/mlx5_vdpa.c                   |  85 +++++++++++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa.h                   |  45 +++++++++
>  drivers/vdpa/mlx5/mlx5_vdpa_virtq.c             |  94 +++++++++++++++++++
>  examples/vdpa/main.c                            | 119 ++++++++++++++++++++++++
>  lib/librte_vhost/rte_vdpa.h                     | 115 ++++++++++++++++++++++-
>  lib/librte_vhost/rte_vhost_version.map          |   3 +
>  lib/librte_vhost/vdpa.c                         |  47 ++++++++++
>  16 files changed, 665 insertions(+), 3 deletions(-)
> 



Applied to dpdk-next-virtio/master

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command
  2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command Matan Azrad
@ 2020-06-19 15:02     ` Ferruh Yigit
  2020-06-19 15:04       ` Maxime Coquelin
  0 siblings, 1 reply; 9+ messages in thread
From: Ferruh Yigit @ 2020-06-19 15:02 UTC (permalink / raw)
  To: Matan Azrad, Maxime Coquelin, Xiao Wang; +Cc: dev

On 6/18/2020 7:59 PM, Matan Azrad wrote:
> A new vDPA driver feature was added to query the virtq statistics from
> the HW.
> 
> Use this feature to show the HW queues statistics for the virtqs.
> 
> Command description: stats X Y.
> X is the device ID.
> Y is the queue ID, Y=0xffff to show all the virtio queues statistics of
> the device X.
> 
> Signed-off-by: Matan Azrad <matan@mellanox.com>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

<...>

> +
> +static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
> +				    __attribute__((unused)) void *data)

replacing "__attribute__((unused))" with '__rte_unused' while merging.

Please check checkpatch output, it now has a warning for it:
http://mails.dpdk.org/archives/test-report/2020-June/137330.html
"
Warning in /examples/vdpa/main.c:
Using compiler attribute directly
"
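[Editorial note] `__rte_unused` is DPDK's portable wrapper over the GCC/Clang
`unused` attribute (defined in rte_common.h); checkpatch flags the raw
attribute so a future compiler port only has to change the macro. A
self-contained sketch (the macro is re-defined locally here under the
stand-in name `my_unused` so the example compiles without DPDK headers):

```c
#include <assert.h>

/* Stand-in for __rte_unused from rte_common.h. */
#define my_unused __attribute__((unused))

/* 'data' is intentionally unused; the attribute silences
 * -Wunused-parameter without needing a (void)data cast. */
static int handler(int value, my_unused void *data)
{
	return value * 2;
}
```

Callback tables often force a fixed signature even when a parameter is
irrelevant, which is exactly the cmdline-callback case fixed above.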


* Re: [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command
  2020-06-19 15:02     ` Ferruh Yigit
@ 2020-06-19 15:04       ` Maxime Coquelin
  0 siblings, 0 replies; 9+ messages in thread
From: Maxime Coquelin @ 2020-06-19 15:04 UTC (permalink / raw)
  To: Ferruh Yigit, Matan Azrad, Xiao Wang; +Cc: dev



On 6/19/20 5:02 PM, Ferruh Yigit wrote:
> On 6/18/2020 7:59 PM, Matan Azrad wrote:
>> A new vDPA driver feature was added to query the virtq statistics from
>> the HW.
>>
>> Use this feature to show the HW queues statistics for the virtqs.
>>
>> Command description: stats X Y.
>> X is the device ID.
>> Y is the queue ID, Y=0xffff to show all the virtio queues statistics of
>> the device X.
>>
>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> 
> <...>
> 
>> +
>> +static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
>> +				    __attribute__((unused)) void *data)
> 
> replacing "__attribute__((unused))" with '__rte_unused' while merging.
> 
> Please check checkpatch output, it now has a warning for it:
> http://mails.dpdk.org/archives/test-report/2020-June/137330.html
> "
> Warning in /examples/vdpa/main.c:
> Using compiler attribute directly
> "
> 

Thanks Ferruh!
It would be great if the script printed the line number.

Maxime



end of thread, other threads:[~2020-06-19 15:04 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <01591112869-78828-1-git-send-email-matan@mellanox.com>
2020-06-18 18:59 ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Matan Azrad
2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 1/4] vhost: inroduce operation to get vDPA queue stats Matan Azrad
2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 2/4] common/mlx5: support DevX virtq stats operations Matan Azrad
2020-06-19  6:07     ` Maxime Coquelin
2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 3/4] vdpa/mlx5: support virtio queue statistics get Matan Azrad
2020-06-18 18:59   ` [dpdk-dev] [PATCH v4 4/4] examples/vdpa: add statistics show command Matan Azrad
2020-06-19 15:02     ` Ferruh Yigit
2020-06-19 15:04       ` Maxime Coquelin
2020-06-19  6:39   ` [dpdk-dev] [PATCH v4 0/4] vhost: support vDPA virtio queue statistics Maxime Coquelin
