DPDK patches and discussions
* [PATCH 0/5] vhost: introduce per-virtqueue stats API
@ 2022-01-27 14:56 Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

This series introduces a new Vhost API that provides
per-virtqueue statistics to the application. It will be
generally useful, but the initial motivation for this
series was to be able to get virtqueue stats when the
Virtio RSS feature gets supported in the Vhost library.

The first patch is a fix that should be considered even if
the series does not make it into v22.03. The second patch
introduces the new API and generic statistics. Patch 3 makes
use of this API in the Vhost PMD. The last two patches
introduce more specific counters (syscalls in the datapath,
IOTLB cache hits and misses).
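
As an illustration, reading the new counters from the
application side could look like the sketch below. This is a
hypothetical, minimal helper (error handling trimmed), and it
assumes the vhost socket was registered with the
RTE_VHOST_USER_NET_STATS_ENABLE flag introduced in patch 2:

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

#include <rte_vhost.h>

/* Hypothetical helper: dump all statistics of one virtqueue. */
static void
dump_vring_stats(int vid, uint16_t queue_id)
{
	struct rte_vhost_stat_name *names;
	struct rte_vhost_stat *stats;
	int nb_stats, i;

	/* Passing a NULL array returns the number of counters. */
	nb_stats = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
	if (nb_stats <= 0)
		return;

	names = calloc(nb_stats, sizeof(*names));
	stats = calloc(nb_stats, sizeof(*stats));
	if (names == NULL || stats == NULL)
		goto out;

	if (rte_vhost_vring_stats_get_names(vid, queue_id, names, nb_stats) != nb_stats)
		goto out;
	if (rte_vhost_vring_stats_get(vid, queue_id, stats, nb_stats) != nb_stats)
		goto out;

	/* stats[i].id is an index into the names array. */
	for (i = 0; i < nb_stats; i++)
		printf("%s: %" PRIu64 "\n",
			names[stats[i].id].name, stats[i].value);
out:
	free(names);
	free(stats);
}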

I understand we are late in the v22.03 release cycle, and
so the series may be postponed to the next release, even
though it does not introduce much risk of regressions. At
least patch 1 has to be considered for this release.

Maxime Coquelin (5):
  vhost: fix missing virtqueue lock protection
  vhost: add per-virtqueue statistics support
  net/vhost: move to Vhost library stats API
  vhost: add statistics for guest notifications
  vhost: add statistics for IOTLB

 drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
 lib/vhost/rte_vhost.h             |  89 ++++++++
 lib/vhost/socket.c                |   4 +-
 lib/vhost/version.map             |   5 +
 lib/vhost/vhost.c                 | 124 ++++++++++-
 lib/vhost/vhost.h                 |  26 ++-
 lib/vhost/virtio_net.c            |  53 +++++
 7 files changed, 416 insertions(+), 233 deletions(-)

-- 
2.34.1


* [PATCH 1/5] vhost: fix missing virtqueue lock protection
  2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
@ 2022-01-27 14:56 ` Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin, stable

This patch ensures virtqueue metadata are not being
modified while rte_vhost_vring_call() is executed.

Fixes: 6c299bb7322f ("vhost: introduce vring call API")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index f59ca6c157..42c01abf25 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1294,11 +1294,15 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	if (!vq)
 		return -1;
 
+	rte_spinlock_lock(&vq->access_lock);
+
 	if (vq_is_packed(dev))
 		vhost_vring_call_packed(dev, vq);
 	else
 		vhost_vring_call_split(dev, vq);
 
+	rte_spinlock_unlock(&vq->access_lock);
+
 	return 0;
 }
 
-- 
2.34.1


* [PATCH 2/5] vhost: add per-virtqueue statistics support
  2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
@ 2022-01-27 14:56 ` Maxime Coquelin
  2022-04-21 14:09   ` Xia, Chenbo
  2022-01-27 14:56 ` [PATCH 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

This patch introduces new APIs for the application
to query and reset per-virtqueue statistics. The
patch also introduces generic counters.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/rte_vhost.h  |  89 +++++++++++++++++++++++++++++++++
 lib/vhost/socket.c     |   4 +-
 lib/vhost/version.map  |   5 ++
 lib/vhost/vhost.c      | 109 ++++++++++++++++++++++++++++++++++++++++-
 lib/vhost/vhost.h      |  18 ++++++-
 lib/vhost/virtio_net.c |  53 ++++++++++++++++++++
 6 files changed, 274 insertions(+), 4 deletions(-)

diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index b454c05868..e739091ca0 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -37,6 +37,7 @@ extern "C" {
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
 #define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)
 #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS	(1ULL << 8)
+#define RTE_VHOST_USER_NET_STATS_ENABLE	(1ULL << 9)
 
 /* Features. */
 #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
@@ -317,6 +318,32 @@ struct rte_vhost_power_monitor_cond {
 	uint8_t match;
 };
 
+/** Maximum name length for the statistics counters */
+#define RTE_VHOST_STATS_NAME_SIZE 64
+
+/**
+ * Vhost virtqueue statistics structure
+ *
+ * This structure is used by rte_vhost_vring_stats_get() to provide
+ * virtqueue statistics to the calling application.
+ * It maps a name ID, corresponding to an index in the array returned
+ * by rte_vhost_vring_stats_get_names(), to a statistic value.
+ */
+struct rte_vhost_stat {
+	uint64_t id;    /**< The index in xstats name array. */
+	uint64_t value; /**< The statistic counter value. */
+};
+
+/**
+ * Vhost virtqueue statistic name element
+ *
+ * This structure is used by rte_vhost_vring_stats_get_anmes() to
+ * provide virtqueue statistics names to the calling application.
+ */
+struct rte_vhost_stat_name {
+	char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
+};
+
 /**
  * Convert guest physical address to host virtual address
  *
@@ -1059,6 +1086,68 @@ __rte_experimental
 int
 rte_vhost_slave_config_change(int vid, bool need_reply);
 
+/**
+ * Retrieve names of statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that 'stat_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param stats_names
+ *   array of at least size elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in stats_names array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats name array.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+		struct rte_vhost_stat_name *name, unsigned int size);
+
+/**
+ * Retrieve statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that 'stat_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @param stats
+ *   A pointer to a table of structure of type rte_vhost_stat to be filled with
+ *   virtqueue statistics ids and values.
+ * @param n
+ *   The number of elements in stats array.
+ * @return
+ *   A negative value on error, otherwise the number of entries filled in the
+ *   stats table.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+		struct rte_vhost_stat *stats, unsigned int n);
+
+/**
+ * Reset statistics of a Vhost virtqueue.
+ *
+ * @param vid
+ *   vhost device ID
+ * @param queue_id
+ *   vhost queue index
+ * @return
+ *   0 on success, a negative value on error.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index c2f8013cd5..6020565fb6 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -43,6 +43,7 @@ struct vhost_user_socket {
 	bool linearbuf;
 	bool async_copy;
 	bool net_compliant_ol_flags;
+	bool stats_enabled;
 
 	/*
 	 * The "supported_features" indicates the feature bits the
@@ -228,7 +229,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
 	vhost_set_ifname(vid, vsocket->path, size);
 
 	vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
-		vsocket->net_compliant_ol_flags);
+		vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
 
 	vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
 
@@ -864,6 +865,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
 	vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
 	vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
+	vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
 
 	if (vsocket->async_copy &&
 		(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index a7ef7f1976..b83f79c87f 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -84,6 +84,11 @@ EXPERIMENTAL {
 
 	# added in 21.11
 	rte_vhost_get_monitor_addr;
+
+	# added in 22.03
+	rte_vhost_vring_stats_get_names;
+	rte_vhost_vring_stats_get;
+	rte_vhost_vring_stats_reset;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 42c01abf25..0c6a737aca 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -28,6 +28,28 @@
 struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
 
+struct vhost_vq_stats_name_off {
+	char name[RTE_VHOST_STATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
+	{"good_packets",           offsetof(struct vhost_virtqueue, stats.packets)},
+	{"good_bytes",             offsetof(struct vhost_virtqueue, stats.bytes)},
+	{"multicast_packets",      offsetof(struct vhost_virtqueue, stats.multicast)},
+	{"broadcast_packets",      offsetof(struct vhost_virtqueue, stats.broadcast)},
+	{"undersize_packets",      offsetof(struct vhost_virtqueue, stats.size_bins[0])},
+	{"size_64_packets",        offsetof(struct vhost_virtqueue, stats.size_bins[1])},
+	{"size_65_127_packets",    offsetof(struct vhost_virtqueue, stats.size_bins[2])},
+	{"size_128_255_packets",   offsetof(struct vhost_virtqueue, stats.size_bins[3])},
+	{"size_256_511_packets",   offsetof(struct vhost_virtqueue, stats.size_bins[4])},
+	{"size_512_1023_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[5])},
+	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
+	{"size_1519_max_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+};
+
+#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
+
 /* Called with iotlb_lock read-locked */
 uint64_t
 __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
@@ -758,7 +780,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len)
 }
 
 void
-vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
+vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled)
 {
 	struct virtio_net *dev = get_device(vid);
 
@@ -773,6 +795,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
 		dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
 	else
 		dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
+	if (stats_enabled)
+		dev->flags |= VIRTIO_DEV_STATS_ENABLED;
+	else
+		dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
 }
 
 void
@@ -1908,5 +1934,86 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
 	return 0;
 }
 
+
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+		struct rte_vhost_stat_name *name, unsigned int size)
+{
+	struct virtio_net *dev = get_device(vid);
+	unsigned int i;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return -1;
+
+	if (name == NULL || size < VHOST_NB_VQ_STATS)
+		return VHOST_NB_VQ_STATS;
+
+	for (i = 0; i < VHOST_NB_VQ_STATS; i++)
+		snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
+				(queue_id & 1) ? "rx" : "tx",
+				queue_id / 2, vhost_vq_stat_strings[i].name);
+
+	return VHOST_NB_VQ_STATS;
+}
+
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+		struct rte_vhost_stat *stats, unsigned int n)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+	unsigned int i;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return -1;
+
+	if (stats == NULL || n < VHOST_NB_VQ_STATS)
+		return VHOST_NB_VQ_STATS;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+	for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
+		stats[i].value =
+			*(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset);
+		stats[i].id = i;
+	}
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return VHOST_NB_VQ_STATS;
+}
+
+int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
+{
+	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
+
+	if (dev == NULL)
+		return -1;
+
+	if (queue_id >= dev->nr_vring)
+		return -1;
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+	memset(&vq->stats, 0, sizeof(vq->stats));
+	rte_spinlock_unlock(&vq->access_lock);
+
+	return 0;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 7085e0885c..4c151244c7 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -38,6 +38,8 @@
 #define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4)
 /* Used to indicate that the virtio_net tx code should fill TX ol_flags */
 #define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5)
+/*  Used to indicate the application has requested statistics collection */
+#define VIRTIO_DEV_STATS_ENABLED ((uint32_t)1 << 6)
 
 /* Backend value set by guest. */
 #define VIRTIO_DEV_STOPPED -1
@@ -119,6 +121,18 @@ struct vring_used_elem_packed {
 	uint32_t count;
 };
 
+/**
+ * Virtqueue statistics
+ */
+struct virtqueue_stats {
+	uint64_t packets;
+	uint64_t bytes;
+	uint64_t multicast;
+	uint64_t broadcast;
+	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+	uint64_t size_bins[8];
+};
+
 /**
  * inflight async packet information
  */
@@ -235,6 +249,7 @@ struct vhost_virtqueue {
 #define VIRTIO_UNINITIALIZED_NOTIF	(-1)
 
 	struct vhost_vring_addr ring_addrs;
+	struct virtqueue_stats	stats;
 } __rte_cache_aligned;
 
 /* Virtio device status as per Virtio specification */
@@ -696,7 +711,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
 void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev);
 
 void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
-void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags);
+void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled);
 void vhost_enable_extbuf(int vid);
 void vhost_enable_linearbuf(int vid);
 int vhost_enable_guest_notification(struct virtio_net *dev,
@@ -873,5 +888,4 @@ mbuf_is_consumed(struct rte_mbuf *m)
 
 	return true;
 }
-
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index f19713137c..550b239450 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -43,6 +43,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
+/*
+ * This function must be called with virtqueue's access_lock taken.
+ */
+static inline void
+vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtqueue_stats *stats = &vq->stats;
+	int i;
+
+	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+		return;
+
+	for (i = 0; i < count; i++) {
+		struct rte_ether_addr *ea;
+		struct rte_mbuf *pkt = pkts[i];
+		uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+		stats->packets++;
+		stats->bytes += pkt_len;
+
+		if (pkt_len == 64) {
+			stats->size_bins[1]++;
+		} else if (pkt_len > 64 && pkt_len < 1024) {
+			uint32_t bin;
+
+			/* count zeros, and offset into correct bin */
+			bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
+			stats->size_bins[bin]++;
+		} else {
+			if (pkt_len < 64)
+				stats->size_bins[0]++;
+			else if (pkt_len < 1519)
+				stats->size_bins[6]++;
+			else
+				stats->size_bins[7]++;
+		}
+
+		ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+		if (rte_is_multicast_ether_addr(ea)) {
+			if (rte_is_broadcast_ether_addr(ea))
+				stats->broadcast++;
+			else
+				stats->multicast++;
+		}
+	}
+}
+
 static inline void
 do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
@@ -1375,6 +1423,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	else
 		nb_tx = virtio_dev_rx_split(dev, vq, pkts, count);
 
+	vhost_queue_stats_update(dev, vq, pkts, nb_tx);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -2941,6 +2991,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		 * learning table will get updated first.
 		 */
 		pkts[0] = rarp_mbuf;
+		vhost_queue_stats_update(dev, vq, pkts, 1);
 		pkts++;
 		count -= 1;
 	}
@@ -2957,6 +3008,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
 	}
 
+	vhost_queue_stats_update(dev, vq, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
-- 
2.34.1


* [PATCH 3/5] net/vhost: move to Vhost library stats API
  2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
@ 2022-01-27 14:56 ` Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 4/5] vhost: add statistics for guest notifications Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 5/5] vhost: add statistics for IOTLB Maxime Coquelin
  4 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

Now that we have Vhost statistics APIs, this patch replaces
the Vhost PMD extended statistics implementation with calls
to the new API. It will enable getting more statistics, for
counters that cannot be implemented at the PMD level.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
 1 file changed, 120 insertions(+), 228 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..bac1c0acba 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
 	}
 };
 
-enum vhost_xstats_pkts {
-	VHOST_UNDERSIZE_PKT = 0,
-	VHOST_64_PKT,
-	VHOST_65_TO_127_PKT,
-	VHOST_128_TO_255_PKT,
-	VHOST_256_TO_511_PKT,
-	VHOST_512_TO_1023_PKT,
-	VHOST_1024_TO_1522_PKT,
-	VHOST_1523_TO_MAX_PKT,
-	VHOST_BROADCAST_PKT,
-	VHOST_MULTICAST_PKT,
-	VHOST_UNICAST_PKT,
-	VHOST_PKT,
-	VHOST_BYTE,
-	VHOST_MISSED_PKT,
-	VHOST_ERRORS_PKT,
-	VHOST_ERRORS_FRAGMENTED,
-	VHOST_ERRORS_JABBER,
-	VHOST_UNKNOWN_PROTOCOL,
-	VHOST_XSTATS_MAX,
-};
-
 struct vhost_stats {
 	uint64_t pkts;
 	uint64_t bytes;
 	uint64_t missed_pkts;
-	uint64_t xstats[VHOST_XSTATS_MAX];
 };
 
 struct vhost_queue {
@@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
 
 static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
 
-#define VHOST_XSTATS_NAME_SIZE 64
-
-struct vhost_xstats_name_off {
-	char name[VHOST_XSTATS_NAME_SIZE];
-	uint64_t offset;
-};
-
-/* [rx]_is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	 {"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-	{"fragmented_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
-	{"jabber_errors",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
-	{"unknown_protos_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
-};
-
-/* [tx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
-	{"good_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
-	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
-	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
-	{"broadcast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
-	{"multicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
-	{"unicast_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
-	{"undersize_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
-	{"size_64_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
-	{"size_65_to_127_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
-	{"size_128_to_255_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
-	{"size_256_to_511_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
-	{"size_512_to_1023_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
-	{"size_1024_to_1522_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
-	{"size_1523_to_max_packets",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
-	{"errors_with_bad_CRC",
-	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
-				sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
-				sizeof(vhost_txport_stat_strings[0]))
-
 static int
 vhost_dev_xstats_reset(struct rte_eth_dev *dev)
 {
-	struct vhost_queue *vq = NULL;
-	unsigned int i = 0;
+	struct vhost_queue *vq;
+	int ret, i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		vq = dev->data->rx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
+
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		vq = dev->data->tx_queues[i];
-		if (!vq)
-			continue;
-		memset(&vq->stats, 0, sizeof(vq->stats));
+		ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+		if (ret < 0)
+			return ret;
 	}
 
 	return 0;
 }
 
 static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
 			   struct rte_eth_xstat_name *xstats_names,
-			   unsigned int limit __rte_unused)
+			   unsigned int limit)
 {
-	unsigned int t = 0;
-	int count = 0;
-	int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+	struct rte_vhost_stat_name *name;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
 
-	if (!xstats_names)
+		nstats += ret;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
+	}
+
+	if (!xstats_names || limit < (unsigned int)nstats)
 		return nstats;
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "rx_%s", vhost_rxport_stat_strings[t].name);
-		count++;
-	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		snprintf(xstats_names[count].name,
-			 sizeof(xstats_names[count].name),
-			 "tx_%s", vhost_txport_stat_strings[t].name);
-		count++;
+
+	name = calloc(nstats, sizeof(*name));
+	if (!name)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
 	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+				name + count, nstats - count);
+		if (ret < 0) {
+			free(name);
+			return ret;
+		}
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++)
+		strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+	free(name);
+
 	return count;
 }
 
@@ -279,86 +210,63 @@ static int
 vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		     unsigned int n)
 {
-	unsigned int i;
-	unsigned int t;
-	unsigned int count = 0;
-	struct vhost_queue *vq = NULL;
-	unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
-	if (n < nxstats)
-		return nxstats;
-
-	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_rx_queues; i++) {
-			vq = dev->data->rx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_rxport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+	struct rte_vhost_stat *stats;
+	struct vhost_queue *vq;
+	int ret, i, count = 0, nstats = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
-		xstats[count].value = 0;
-		for (i = 0; i < dev->data->nb_tx_queues; i++) {
-			vq = dev->data->tx_queues[i];
-			if (!vq)
-				continue;
-			xstats[count].value +=
-				*(uint64_t *)(((char *)vq)
-				+ vhost_txport_stat_strings[t].offset);
-		}
-		xstats[count].id = count;
-		count++;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+		if (ret < 0)
+			return ret;
+
+		nstats += ret;
 	}
-	return count;
-}
 
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
-				struct rte_mbuf *mbuf)
-{
-	struct rte_ether_addr *ea = NULL;
-	struct vhost_stats *pstats = &vq->stats;
-
-	ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
-	if (rte_is_multicast_ether_addr(ea)) {
-		if (rte_is_broadcast_ether_addr(ea))
-			pstats->xstats[VHOST_BROADCAST_PKT]++;
-		else
-			pstats->xstats[VHOST_MULTICAST_PKT]++;
-	} else {
-		pstats->xstats[VHOST_UNICAST_PKT]++;
+	if (!xstats || n < (unsigned int)nstats)
+		return nstats;
+
+	stats = calloc(nstats, sizeof(*stats));
+	if (!stats)
+		return -1;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		vq = dev->data->rx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
 	}
-}
 
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
-	uint32_t pkt_len = 0;
-	uint64_t index;
-	struct vhost_stats *pstats = &vq->stats;
-
-	pstats->xstats[VHOST_PKT]++;
-	pkt_len = buf->pkt_len;
-	if (pkt_len == 64) {
-		pstats->xstats[VHOST_64_PKT]++;
-	} else if (pkt_len > 64 && pkt_len < 1024) {
-		index = (sizeof(pkt_len) * 8)
-			- __builtin_clz(pkt_len) - 5;
-		pstats->xstats[index]++;
-	} else {
-		if (pkt_len < 64)
-			pstats->xstats[VHOST_UNDERSIZE_PKT]++;
-		else if (pkt_len <= 1522)
-			pstats->xstats[VHOST_1024_TO_1522_PKT]++;
-		else if (pkt_len > 1522)
-			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
-	}
-	vhost_count_xcast_packets(vq, buf);
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		vq = dev->data->tx_queues[i];
+		ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+				stats + count, nstats - count);
+		if (ret < 0)
+			return ret;
+
+		count += ret;
+	}
+
+	for (i = 0; i < count; i++) {
+		xstats[i].id = stats[i].id;
+		xstats[i].value = stats[i].value;
+	}
+
+	free(stats);
+
+	return nstats;
 }
 
 static uint16_t
@@ -402,9 +310,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			rte_vlan_strip(bufs[i]);
 
 		r->stats.bytes += bufs[i]->pkt_len;
-		r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
-		vhost_update_single_packet_xstats(r, bufs[i]);
 	}
 
 out:
@@ -461,10 +366,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			break;
 	}
 
-	for (i = 0; likely(i < nb_tx); i++) {
+	for (i = 0; likely(i < nb_tx); i++)
 		nb_bytes += bufs[i]->pkt_len;
-		vhost_update_single_packet_xstats(r, bufs[i]);
-	}
 
 	nb_missed = nb_bufs - nb_tx;
 
@@ -472,17 +375,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_missed;
 
-	r->stats.xstats[VHOST_BYTE] += nb_bytes;
-	r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
-	r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
-	/* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
-	 * ifHCOutBroadcastPkts counters are increased when packets are not
-	 * transmitted successfully.
-	 */
-	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_xcast_packets(r, bufs[i]);
-
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
 out:
@@ -1555,7 +1447,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
 	int ret = 0;
 	char *iface_name;
 	uint16_t queues;
-	uint64_t flags = 0;
+	uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
 	uint64_t disable_flags = 0;
 	int client_mode = 0;
 	int iommu_support = 0;
-- 
2.34.1


* [PATCH 4/5] vhost: add statistics for guest notifications
  2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
                   ` (2 preceding siblings ...)
  2022-01-27 14:56 ` [PATCH 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
@ 2022-01-27 14:56 ` Maxime Coquelin
  2022-01-27 14:56 ` [PATCH 5/5] vhost: add statistics for IOTLB Maxime Coquelin
  4 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

This patch adds a new virtqueue statistic for guest
notifications. It is useful to deduce, from the hypervisor
side, whether the corresponding guest Virtio device is
driven by the kernel Virtio-net driver or by the DPDK
Virtio PMD.
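
For illustration, a host-side monitoring tool could sample this
counter over a time window and infer the driver type from its
growth. Below is a rough, hypothetical sketch; get_vring_stat()
is an assumed helper that looks a counter up by name using the
APIs introduced in patch 2 of this series:

	/* A steadily growing guest_notifications counter suggests an
	 * interrupt-based driver (e.g. kernel virtio-net), while a
	 * flat one suggests a polling driver (e.g. DPDK Virtio PMD).
	 */
	uint64_t before = get_vring_stat(vid, queue_id, "guest_notifications");
	sleep(1);
	uint64_t after = get_vring_stat(vid, queue_id, "guest_notifications");

	if (after > before)
		printf("queue %u: guest driver looks interrupt-based\n", queue_id);
	else
		printf("queue %u: guest driver looks like a polling one\n", queue_id);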

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.c | 1 +
 lib/vhost/vhost.h | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 0c6a737aca..2d0d9e7f51 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -46,6 +46,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_512_1023_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[5])},
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+	{"guest_notifications",    offsetof(struct vhost_virtqueue, stats.guest_notifications)},
 };
 
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 4c151244c7..0c7669e8c9 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -131,6 +131,7 @@ struct virtqueue_stats {
 	uint64_t broadcast;
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
+	uint64_t guest_notifications;
 };
 
 /**
@@ -787,6 +788,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 					(vq->callfd >= 0)) ||
 				unlikely(!signalled_used_valid)) {
 			eventfd_write(vq->callfd, (eventfd_t) 1);
+			if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+				vq->stats.guest_notifications++;
 			if (dev->notify_ops->guest_notified)
 				dev->notify_ops->guest_notified(dev->vid);
 		}
@@ -795,6 +798,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT)
 				&& (vq->callfd >= 0)) {
 			eventfd_write(vq->callfd, (eventfd_t)1);
+			if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+				vq->stats.guest_notifications++;
 			if (dev->notify_ops->guest_notified)
 				dev->notify_ops->guest_notified(dev->vid);
 		}
-- 
2.34.1


* [PATCH 5/5] vhost: add statistics for IOTLB
  2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
                   ` (3 preceding siblings ...)
  2022-01-27 14:56 ` [PATCH 4/5] vhost: add statistics for guest notifications Maxime Coquelin
@ 2022-01-27 14:56 ` Maxime Coquelin
  4 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-01-27 14:56 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand; +Cc: Maxime Coquelin

This patch adds statistics for IOTLB hits and misses.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/vhost/vhost.c | 10 +++++++++-
 lib/vhost/vhost.h |  3 +++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 2d0d9e7f51..bda974d34f 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -47,6 +47,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
 	{"size_1519_max_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[7])},
 	{"guest_notifications",    offsetof(struct vhost_virtqueue, stats.guest_notifications)},
+	{"iotlb_hits",             offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
+	{"iotlb_misses",           offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 };
 
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
@@ -64,8 +66,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	tmp_size = *size;
 
 	vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
-	if (tmp_size == *size)
+	if (tmp_size == *size) {
+		if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+			vq->stats.iotlb_hits++;
 		return vva;
+	}
+
+	if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+		vq->stats.iotlb_misses++;
 
 	iova += tmp_size;
 
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 0c7669e8c9..8c078decab 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -132,6 +132,9 @@ struct virtqueue_stats {
 	/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
 	uint64_t size_bins[8];
 	uint64_t guest_notifications;
+	uint64_t iotlb_hits;
+	uint64_t iotlb_misses;
+	uint64_t iotlb_errors;
 };
 
 /**
-- 
2.34.1


* RE: [PATCH 2/5] vhost: add per-virtqueue statistics support
  2022-01-27 14:56 ` [PATCH 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
@ 2022-04-21 14:09   ` Xia, Chenbo
  2022-04-22 12:17     ` Maxime Coquelin
  0 siblings, 1 reply; 12+ messages in thread
From: Xia, Chenbo @ 2022-04-21 14:09 UTC (permalink / raw)
  To: Maxime Coquelin, dev, david.marchand

Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, January 27, 2022 10:57 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH 2/5] vhost: add per-virtqueue statistics support
> 
> This patch introduces new APIs for the application
> to query and reset per-virtqueue statistics. The
> patch also introduces generic counters.
> 
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/vhost/rte_vhost.h  |  89 +++++++++++++++++++++++++++++++++
>  lib/vhost/socket.c     |   4 +-
>  lib/vhost/version.map  |   5 ++
>  lib/vhost/vhost.c      | 109 ++++++++++++++++++++++++++++++++++++++++-
>  lib/vhost/vhost.h      |  18 ++++++-
>  lib/vhost/virtio_net.c |  53 ++++++++++++++++++++
>  6 files changed, 274 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
> index b454c05868..e739091ca0 100644
> --- a/lib/vhost/rte_vhost.h
> +++ b/lib/vhost/rte_vhost.h
> @@ -37,6 +37,7 @@ extern "C" {
>  #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
>  #define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)
>  #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS	(1ULL << 8)
> +#define RTE_VHOST_USER_NET_STATS_ENABLE	(1ULL << 9)
> 
>  /* Features. */
>  #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
> @@ -317,6 +318,32 @@ struct rte_vhost_power_monitor_cond {
>  	uint8_t match;
>  };
> 
> +/** Maximum name length for the statistics counters */
> +#define RTE_VHOST_STATS_NAME_SIZE 64
> +
> +/**
> + * Vhost virtqueue statistics structure
> + *
> + * This structure is used by rte_vhost_vring_stats_get() to provide
> + * virtqueue statistics to the calling application.
> + * It maps a name ID, corresponding to an index in the array returned
> + * by rte_vhost_vring_stats_get_names(), to a statistic value.
> + */
> +struct rte_vhost_stat {
> +	uint64_t id;    /**< The index in xstats name array. */
> +	uint64_t value; /**< The statistic counter value. */
> +};
> +
> +/**
> + * Vhost virtqueue statistic name element
> + *
> + * This structure is used by rte_vhost_vring_stats_get_anmes() to

Anmes -> names

> + * provide virtqueue statistics names to the calling application.
> + */
> +struct rte_vhost_stat_name {
> +	char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */

Should we consider using ethdev one? Since vhost lib already depends on ethdev
lib.

> +};
> +
>  /**
>   * Convert guest physical address to host virtual address
>   *
> @@ -1059,6 +1086,68 @@ __rte_experimental
>  int
>  rte_vhost_slave_config_change(int vid, bool need_reply);
> 
> +/**
> + * Retrieve names of statistics of a Vhost virtqueue.
> + *
> + * There is an assumption that 'stat_names' and 'stats' arrays are
> matched
> + * by array index: stats_names[i].name => stats[i].value
> + *
> + * @param vid
> + *   vhost device ID
> + * @param queue_id
> + *   vhost queue index
> + * @param stats_names
> + *   array of at least size elements to be filled.
> + *   If set to NULL, the function returns the required number of elements.
> + * @param size
> + *   The number of elements in stats_names array.
> + * @return
> + *   A negative value on error, otherwise the number of entries filled in
> the
> + *   stats name array.
> + */
> +__rte_experimental
> +int
> +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
> +		struct rte_vhost_stat_name *name, unsigned int size);

'@param stats_names' and 'struct rte_vhost_stat_name *name' do not align and reports
error: 

http://mails.dpdk.org/archives/test-report/2022-March/270275.html

> +
> +/**
> + * Retrieve statistics of a Vhost virtqueue.
> + *
> + * There is an assumption that 'stat_names' and 'stats' arrays are
> matched
> + * by array index: stats_names[i].name => stats[i].value
> + *
> + * @param vid
> + *   vhost device ID
> + * @param queue_id
> + *   vhost queue index
> + * @param stats
> + *   A pointer to a table of structure of type rte_vhost_stat to be
> filled with
> + *   virtqueue statistics ids and values.
> + * @param n
> + *   The number of elements in stats array.
> + * @return
> + *   A negative value on error, otherwise the number of entries filled in
> the
> + *   stats table.
> + */
> +__rte_experimental
> +int
> +rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
> +		struct rte_vhost_stat *stats, unsigned int n);
> +
> +/**
> + * Reset statistics of a Vhost virtqueue.
> + *
> + * @param vid
> + *   vhost device ID
> + * @param queue_id
> + *   vhost queue index
> + * @return
> + *   0 on success, a negative value on error.
> + */
> +__rte_experimental
> +int
> +rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index c2f8013cd5..6020565fb6 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -43,6 +43,7 @@ struct vhost_user_socket {
>  	bool linearbuf;
>  	bool async_copy;
>  	bool net_compliant_ol_flags;
> +	bool stats_enabled;
> 
>  	/*
>  	 * The "supported_features" indicates the feature bits the
> @@ -228,7 +229,7 @@ vhost_user_add_connection(int fd, struct
> vhost_user_socket *vsocket)
>  	vhost_set_ifname(vid, vsocket->path, size);
> 
>  	vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
> -		vsocket->net_compliant_ol_flags);
> +		vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
> 
>  	vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
> 
> @@ -864,6 +865,7 @@ rte_vhost_driver_register(const char *path, uint64_t
> flags)
>  	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
>  	vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
>  	vsocket->net_compliant_ol_flags = flags &
> RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
> +	vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
> 
>  	if (vsocket->async_copy &&
>  		(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index a7ef7f1976..b83f79c87f 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -84,6 +84,11 @@ EXPERIMENTAL {
> 
>  	# added in 21.11
>  	rte_vhost_get_monitor_addr;
> +
> +	# added in 22.03
> +	rte_vhost_vring_stats_get_names;
> +	rte_vhost_vring_stats_get;
> +	rte_vhost_vring_stats_reset;
>  };
> 
>  INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 42c01abf25..0c6a737aca 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -28,6 +28,28 @@
>  struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
>  pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
> 
> +struct vhost_vq_stats_name_off {
> +	char name[RTE_VHOST_STATS_NAME_SIZE];
> +	unsigned int offset;
> +};
> +
> +static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
> +	{"good_packets",           offsetof(struct vhost_virtqueue,
> stats.packets)},
> +	{"good_bytes",             offsetof(struct vhost_virtqueue,
> stats.bytes)},
> +	{"multicast_packets",      offsetof(struct vhost_virtqueue,
> stats.multicast)},
> +	{"broadcast_packets",      offsetof(struct vhost_virtqueue,
> stats.broadcast)},
> +	{"undersize_packets",      offsetof(struct vhost_virtqueue,
> stats.size_bins[0])},
> +	{"size_64_packets",        offsetof(struct vhost_virtqueue,
> stats.size_bins[1])},
> +	{"size_65_127_packets",    offsetof(struct vhost_virtqueue,
> stats.size_bins[2])},
> +	{"size_128_255_packets",   offsetof(struct vhost_virtqueue,
> stats.size_bins[3])},
> +	{"size_256_511_packets",   offsetof(struct vhost_virtqueue,
> stats.size_bins[4])},
> +	{"size_512_1023_packets",  offsetof(struct vhost_virtqueue,
> stats.size_bins[5])},
> +	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue,
> stats.size_bins[6])},
> +	{"size_1519_max_packets",  offsetof(struct vhost_virtqueue,
> stats.size_bins[7])},
> +};
> +
> +#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
> +
>  /* Called with iotlb_lock read-locked */
>  uint64_t
>  __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
> @@ -758,7 +780,7 @@ vhost_set_ifname(int vid, const char *if_name,
> unsigned int if_len)
>  }
> 
>  void
> -vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
> +vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags,
> bool stats_enabled)
>  {
>  	struct virtio_net *dev = get_device(vid);
> 
> @@ -773,6 +795,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool
> compliant_ol_flags)
>  		dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
>  	else
>  		dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
> +	if (stats_enabled)
> +		dev->flags |= VIRTIO_DEV_STATS_ENABLED;
> +	else
> +		dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
>  }
> 
>  void
> @@ -1908,5 +1934,86 @@ rte_vhost_get_monitor_addr(int vid, uint16_t
> queue_id,
>  	return 0;
>  }
> 
> +
> +int
> +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
> +		struct rte_vhost_stat_name *name, unsigned int size)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +	unsigned int i;
> +
> +	if (dev == NULL)
> +		return -1;
> +
> +	if (queue_id >= dev->nr_vring)
> +		return -1;
> +
> +	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
> +		return -1;
> +
> +	if (name == NULL || size < VHOST_NB_VQ_STATS)
> +		return VHOST_NB_VQ_STATS;
> +
> +	for (i = 0; i < VHOST_NB_VQ_STATS; i++)
> +		snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
> +				(queue_id & 1) ? "rx" : "tx",
> +				queue_id / 2, vhost_vq_stat_strings[i].name);
> +
> +	return VHOST_NB_VQ_STATS;
> +}
> +
> +int
> +rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
> +		struct rte_vhost_stat *stats, unsigned int n)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +	struct vhost_virtqueue *vq;
> +	unsigned int i;
> +
> +	if (dev == NULL)
> +		return -1;
> +
> +	if (queue_id >= dev->nr_vring)
> +		return -1;
> +
> +	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
> +		return -1;
> +
> +	if (stats == NULL || n < VHOST_NB_VQ_STATS)
> +		return VHOST_NB_VQ_STATS;
> +
> +	vq = dev->virtqueue[queue_id];
> +
> +	rte_spinlock_lock(&vq->access_lock);
> +	for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
> +		stats[i].value =
> +			*(uint64_t *)(((char *)vq) +
> vhost_vq_stat_strings[i].offset);
> +		stats[i].id = i;
> +	}
> +	rte_spinlock_unlock(&vq->access_lock);
> +
> +	return VHOST_NB_VQ_STATS;
> +}
> +
> +int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +	struct vhost_virtqueue *vq;
> +
> +	if (dev == NULL)
> +		return -1;
> +
> +	if (queue_id >= dev->nr_vring)
> +		return -1;
> +

Guess you forgot to check VIRTIO_DEV_STATS_ENABLED?

> +	vq = dev->virtqueue[queue_id];
> +
> +	rte_spinlock_lock(&vq->access_lock);
> +	memset(&vq->stats, 0, sizeof(vq->stats));
> +	rte_spinlock_unlock(&vq->access_lock);
> +
> +	return 0;
> +}
> +
>  RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
>  RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 7085e0885c..4c151244c7 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -38,6 +38,8 @@
>  #define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4)
>  /* Used to indicate that the virtio_net tx code should fill TX ol_flags
> */
>  #define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5)
> +/*  Used to indicate the application has requested statistics collection

Should be only one space between '/*' and 'Used' :P

Thanks,
Chenbo

* Re: [PATCH 2/5] vhost: add per-virtqueue statistics support
  2022-04-21 14:09   ` Xia, Chenbo
@ 2022-04-22 12:17     ` Maxime Coquelin
  0 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-04-22 12:17 UTC (permalink / raw)
  To: Xia, Chenbo, dev, david.marchand

Hi Chenbo,

On 4/21/22 16:09, Xia, Chenbo wrote:
> Hi Maxime,
> 
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Thursday, January 27, 2022 10:57 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH 2/5] vhost: add per-virtqueue statistics support
>>
>> This patch introduces new APIs for the application
>> to query and reset per-virtqueue statistics. The
>> patch also introduces generic counters.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/vhost/rte_vhost.h  |  89 +++++++++++++++++++++++++++++++++
>>   lib/vhost/socket.c     |   4 +-
>>   lib/vhost/version.map  |   5 ++
>>   lib/vhost/vhost.c      | 109 ++++++++++++++++++++++++++++++++++++++++-
>>   lib/vhost/vhost.h      |  18 ++++++-
>>   lib/vhost/virtio_net.c |  53 ++++++++++++++++++++
>>   6 files changed, 274 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
>> index b454c05868..e739091ca0 100644
>> --- a/lib/vhost/rte_vhost.h
>> +++ b/lib/vhost/rte_vhost.h
>> @@ -37,6 +37,7 @@ extern "C" {
>>   #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
>>   #define RTE_VHOST_USER_ASYNC_COPY	(1ULL << 7)
>>   #define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS	(1ULL << 8)
>> +#define RTE_VHOST_USER_NET_STATS_ENABLE	(1ULL << 9)
>>
>>   /* Features. */
>>   #ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
>> @@ -317,6 +318,32 @@ struct rte_vhost_power_monitor_cond {
>>   	uint8_t match;
>>   };
>>
>> +/** Maximum name length for the statistics counters */
>> +#define RTE_VHOST_STATS_NAME_SIZE 64
>> +
>> +/**
>> + * Vhost virtqueue statistics structure
>> + *
>> + * This structure is used by rte_vhost_vring_stats_get() to provide
>> + * virtqueue statistics to the calling application.
>> + * It maps a name ID, corresponding to an index in the array returned
>> + * by rte_vhost_vring_stats_get_names(), to a statistic value.
>> + */
>> +struct rte_vhost_stat {
>> +	uint64_t id;    /**< The index in xstats name array. */
>> +	uint64_t value; /**< The statistic counter value. */
>> +};
>> +
>> +/**
>> + * Vhost virtqueue statistic name element
>> + *
>> + * This structure is used by rte_vhost_vring_stats_get_anmes() to
> 
> Anmes -> names
> 
>> + * provide virtqueue statistics names to the calling application.
>> + */
>> +struct rte_vhost_stat_name {
>> +	char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
> 
> Should we consider using ethdev one? Since vhost lib already depends on ethdev
> lib.

I initially thought about it, but this is not a good idea IMHO, as it
might confuse the user, who might think they could use the ethdev API to
control it.

>> +};
>> +
>>   /**
>>    * Convert guest physical address to host virtual address
>>    *
>> @@ -1059,6 +1086,68 @@ __rte_experimental
>>   int
>>   rte_vhost_slave_config_change(int vid, bool need_reply);
>>
>> +/**
>> + * Retrieve names of statistics of a Vhost virtqueue.
>> + *
>> + * There is an assumption that 'stat_names' and 'stats' arrays are
>> matched
>> + * by array index: stats_names[i].name => stats[i].value
>> + *
>> + * @param vid
>> + *   vhost device ID
>> + * @param queue_id
>> + *   vhost queue index
>> + * @param stats_names
>> + *   array of at least size elements to be filled.
>> + *   If set to NULL, the function returns the required number of elements.
>> + * @param size
>> + *   The number of elements in stats_names array.
>> + * @return
>> + *   A negative value on error, otherwise the number of entries filled in
>> the
>> + *   stats name array.
>> + */
>> +__rte_experimental
>> +int
>> +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
>> +		struct rte_vhost_stat_name *name, unsigned int size);
> 
> '@param stats_names' and 'struct rte_vhost_stat_name *name' do not align and reports
> error:
> 
> http://mails.dpdk.org/archives/test-report/2022-March/270275.html

Ha yes, it slipped through the cracks when I reworked the series, thanks
for the heads-up.

>> +
>> +/**
>> + * Retrieve statistics of a Vhost virtqueue.
>> + *
>> + * There is an assumption that 'stat_names' and 'stats' arrays are
>> matched
>> + * by array index: stats_names[i].name => stats[i].value
>> + *
>> + * @param vid
>> + *   vhost device ID
>> + * @param queue_id
>> + *   vhost queue index
>> + * @param stats
>> + *   A pointer to a table of structure of type rte_vhost_stat to be
>> filled with
>> + *   virtqueue statistics ids and values.
>> + * @param n
>> + *   The number of elements in stats array.
>> + * @return
>> + *   A negative value on error, otherwise the number of entries filled in
>> the
>> + *   stats table.
>> + */
>> +__rte_experimental
>> +int
>> +rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
>> +		struct rte_vhost_stat *stats, unsigned int n);
>> +
>> +/**
>> + * Reset statistics of a Vhost virtqueue.
>> + *
>> + * @param vid
>> + *   vhost device ID
>> + * @param queue_id
>> + *   vhost queue index
>> + * @return
>> + *   0 on success, a negative value on error.
>> + */
>> +__rte_experimental
>> +int
>> +rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
>> +
>>   #ifdef __cplusplus
>>   }
>>   #endif
>> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
>> index c2f8013cd5..6020565fb6 100644
>> --- a/lib/vhost/socket.c
>> +++ b/lib/vhost/socket.c
>> @@ -43,6 +43,7 @@ struct vhost_user_socket {
>>   	bool linearbuf;
>>   	bool async_copy;
>>   	bool net_compliant_ol_flags;
>> +	bool stats_enabled;
>>
>>   	/*
>>   	 * The "supported_features" indicates the feature bits the
>> @@ -228,7 +229,7 @@ vhost_user_add_connection(int fd, struct
>> vhost_user_socket *vsocket)
>>   	vhost_set_ifname(vid, vsocket->path, size);
>>
>>   	vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
>> -		vsocket->net_compliant_ol_flags);
>> +		vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
>>
>>   	vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
>>
>> @@ -864,6 +865,7 @@ rte_vhost_driver_register(const char *path, uint64_t
>> flags)
>>   	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
>>   	vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
>>   	vsocket->net_compliant_ol_flags = flags &
>> RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
>> +	vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
>>
>>   	if (vsocket->async_copy &&
>>   		(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
>> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
>> index a7ef7f1976..b83f79c87f 100644
>> --- a/lib/vhost/version.map
>> +++ b/lib/vhost/version.map
>> @@ -84,6 +84,11 @@ EXPERIMENTAL {
>>
>>   	# added in 21.11
>>   	rte_vhost_get_monitor_addr;
>> +
>> +	# added in 22.03
>> +	rte_vhost_vring_stats_get_names;
>> +	rte_vhost_vring_stats_get;
>> +	rte_vhost_vring_stats_reset;
>>   };
>>
>>   INTERNAL {
>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
>> index 42c01abf25..0c6a737aca 100644
>> --- a/lib/vhost/vhost.c
>> +++ b/lib/vhost/vhost.c
>> @@ -28,6 +28,28 @@
>>   struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
>>   pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
>>
>> +struct vhost_vq_stats_name_off {
>> +	char name[RTE_VHOST_STATS_NAME_SIZE];
>> +	unsigned int offset;
>> +};
>> +
>> +static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
>> +	{"good_packets",           offsetof(struct vhost_virtqueue, stats.packets)},
>> +	{"good_bytes",             offsetof(struct vhost_virtqueue, stats.bytes)},
>> +	{"multicast_packets",      offsetof(struct vhost_virtqueue, stats.multicast)},
>> +	{"broadcast_packets",      offsetof(struct vhost_virtqueue, stats.broadcast)},
>> +	{"undersize_packets",      offsetof(struct vhost_virtqueue, stats.size_bins[0])},
>> +	{"size_64_packets",        offsetof(struct vhost_virtqueue, stats.size_bins[1])},
>> +	{"size_65_127_packets",    offsetof(struct vhost_virtqueue, stats.size_bins[2])},
>> +	{"size_128_255_packets",   offsetof(struct vhost_virtqueue, stats.size_bins[3])},
>> +	{"size_256_511_packets",   offsetof(struct vhost_virtqueue, stats.size_bins[4])},
>> +	{"size_512_1023_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[5])},
>> +	{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
>> +	{"size_1519_max_packets",  offsetof(struct vhost_virtqueue, stats.size_bins[7])},
>> +};
>> +
>> +#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
>> +
>>   /* Called with iotlb_lock read-locked */
>>   uint64_t
>>   __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> @@ -758,7 +780,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len)
>>   }
>>
>>   void
>> -vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
>> +vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled)
>>   {
>>   	struct virtio_net *dev = get_device(vid);
>>
>> @@ -773,6 +795,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
>>   		dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
>>   	else
>>   		dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
>> +	if (stats_enabled)
>> +		dev->flags |= VIRTIO_DEV_STATS_ENABLED;
>> +	else
>> +		dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
>>   }
>>
>>   void
>> @@ -1908,5 +1934,86 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
>>   	return 0;
>>   }
>>
>> +
>> +int
>> +rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
>> +		struct rte_vhost_stat_name *name, unsigned int size)
>> +{
>> +	struct virtio_net *dev = get_device(vid);
>> +	unsigned int i;
>> +
>> +	if (dev == NULL)
>> +		return -1;
>> +
>> +	if (queue_id >= dev->nr_vring)
>> +		return -1;
>> +
>> +	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
>> +		return -1;
>> +
>> +	if (name == NULL || size < VHOST_NB_VQ_STATS)
>> +		return VHOST_NB_VQ_STATS;
>> +
>> +	for (i = 0; i < VHOST_NB_VQ_STATS; i++)
>> +		snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
>> +				(queue_id & 1) ? "rx" : "tx",
>> +				queue_id / 2, vhost_vq_stat_strings[i].name);
>> +
>> +	return VHOST_NB_VQ_STATS;
>> +}
>> +
>> +int
>> +rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
>> +		struct rte_vhost_stat *stats, unsigned int n)
>> +{
>> +	struct virtio_net *dev = get_device(vid);
>> +	struct vhost_virtqueue *vq;
>> +	unsigned int i;
>> +
>> +	if (dev == NULL)
>> +		return -1;
>> +
>> +	if (queue_id >= dev->nr_vring)
>> +		return -1;
>> +
>> +	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
>> +		return -1;
>> +
>> +	if (stats == NULL || n < VHOST_NB_VQ_STATS)
>> +		return VHOST_NB_VQ_STATS;
>> +
>> +	vq = dev->virtqueue[queue_id];
>> +
>> +	rte_spinlock_lock(&vq->access_lock);
>> +	for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
>> +		stats[i].value =
>> +			*(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset);
>> +		stats[i].id = i;
>> +	}
>> +	rte_spinlock_unlock(&vq->access_lock);
>> +
>> +	return VHOST_NB_VQ_STATS;
>> +}
>> +
>> +int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
>> +{
>> +	struct virtio_net *dev = get_device(vid);
>> +	struct vhost_virtqueue *vq;
>> +
>> +	if (dev == NULL)
>> +		return -1;
>> +
>> +	if (queue_id >= dev->nr_vring)
>> +		return -1;
>> +
> 
> Guess you forgot to check VIRTIO_DEV_STATS_ENABLED?

Yes, I will add it in next revision.
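
For reference, it would simply mirror the check already done in
rte_vhost_vring_stats_get() and rte_vhost_vring_stats_get_names() above;
a minimal sketch of the missing hunk (same -1 error convention, placed
right before the virtqueue pointer is fetched below):

	if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
		return -1;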

>> +	vq = dev->virtqueue[queue_id];
>> +
>> +	rte_spinlock_lock(&vq->access_lock);
>> +	memset(&vq->stats, 0, sizeof(vq->stats));
>> +	rte_spinlock_unlock(&vq->access_lock);
>> +
>> +	return 0;
>> +}
>> +
>>   RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
>>   RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
>> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
>> index 7085e0885c..4c151244c7 100644
>> --- a/lib/vhost/vhost.h
>> +++ b/lib/vhost/vhost.h
>> @@ -38,6 +38,8 @@
>>   #define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4)
>>   /* Used to indicate that the virtio_net tx code should fill TX ol_flags */
>>   #define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5)
>> +/*  Used to indicate the application has requested statistics collection
> 
> Should be only one space between '/*' and 'Used' :P
> 
> Thanks,
> Chenbo
> 

Thanks for the review,
Maxime
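
As a side note for application writers following this thread, here is a
minimal usage sketch (not part of the series) showing how the new API
could be consumed. dump_vring_stats() is a hypothetical helper name, and
the sketch assumes the vhost-user socket was registered with the new
RTE_VHOST_USER_NET_STATS_ENABLE flag; it relies on the behaviour shown
above where passing a NULL array (or a too-small size) makes the
functions return the number of counters:

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

#include <rte_vhost.h>

/* Dump all statistics of one virtqueue of a given vhost device. */
static int
dump_vring_stats(int vid, uint16_t queue_id)
{
	struct rte_vhost_stat_name *names = NULL;
	struct rte_vhost_stat *stats = NULL;
	int count, i, ret = -1;

	/* A NULL array returns the number of counters, so the buffers
	 * can be sized accordingly. */
	count = rte_vhost_vring_stats_get_names(vid, queue_id, NULL, 0);
	if (count <= 0)
		return count;

	names = calloc(count, sizeof(*names));
	stats = calloc(count, sizeof(*stats));
	if (names == NULL || stats == NULL)
		goto out;

	if (rte_vhost_vring_stats_get_names(vid, queue_id, names, count) != count)
		goto out;
	if (rte_vhost_vring_stats_get(vid, queue_id, stats, count) != count)
		goto out;

	for (i = 0; i < count; i++)
		printf("%s: %" PRIu64 "\n",
			names[stats[i].id].name, stats[i].value);
	ret = 0;
out:
	free(names);
	free(stats);
	return ret;
}

With the naming scheme implemented in rte_vhost_vring_stats_get_names(),
virtqueue 1 would for instance be reported as rx_q0_good_packets,
rx_q0_good_bytes, and so on.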


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
  2022-05-17 13:23 ` Maxime Coquelin
@ 2022-05-17 13:33   ` Maxime Coquelin
  0 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-17 13:33 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand, i.maximets



On 5/17/22 15:23, Maxime Coquelin wrote:
> 
> 
> On 5/10/22 22:17, Maxime Coquelin wrote:
>> This series introduces a new Vhost API that provides
>> per-virtqueue statistics to the application. It will be
>> generally useful, but the initial motivation for this
>> series was to be able to get virtqueue stats once the
>> Virtio RSS feature is supported in the Vhost library.
>>
>> The first patch introduces the new API and generic
>> statistics. Patch 2 makes use of this API in the Vhost
>> PMD. The last three patches introduce more specific
>> counters (syscalls in the datapath, IOTLB cache hits and
>> misses, inflight submitted and completed for Vhost async).
>>
>>
>>
>> Changes in v3:
>> ==============
>> - Remove patch1, already applied
>> - Fix memory leaks in error path (Chenbo)
>> - Fix various typos (Chenbo)
>> - Removed unused IOTLB counter (Chenbo)
>> - Add generic packets stats in async enqueue path
>> - Add Vhost async specific counters
>>
>> Changes in v2:
>> =============
>> - Rebased on top of v22.03
>> - Add documentation
>>
>> Maxime Coquelin (5):
>>    vhost: add per-virtqueue statistics support
>>    net/vhost: move to Vhost library stats API
>>    vhost: add statistics for guest notifications
>>    vhost: add statistics for IOTLB
>>    vhost: add statistics for in-flight packets
>>
>>   doc/guides/prog_guide/vhost_lib.rst |  24 ++
>>   drivers/net/vhost/rte_eth_vhost.c   | 348 ++++++++++------------------
>>   lib/vhost/rte_vhost.h               |  99 ++++++++
>>   lib/vhost/socket.c                  |   4 +-
>>   lib/vhost/version.map               |   4 +-
>>   lib/vhost/vhost.c                   | 125 +++++++++-
>>   lib/vhost/vhost.h                   |  27 ++-
>>   lib/vhost/virtio_net.c              |  61 +++++
>>   8 files changed, 460 insertions(+), 232 deletions(-)
>>


Applied to dpdk-next-virtio/main.

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
  2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
  2022-05-11  7:24 ` Maxime Coquelin
@ 2022-05-17 13:23 ` Maxime Coquelin
  2022-05-17 13:33   ` Maxime Coquelin
  1 sibling, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-17 13:23 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand, i.maximets



On 5/10/22 22:17, Maxime Coquelin wrote:
> This series introduces a new Vhost API that provides
> per-virtqueue statistics to the application. It will be
> generally useful, but the initial motivation for this
> series was to be able to get virtqueue stats once the
> Virtio RSS feature is supported in the Vhost library.
> 
> The first patch introduces the new API and generic
> statistics. Patch 2 makes use of this API in the Vhost
> PMD. The last three patches introduce more specific
> counters (syscalls in the datapath, IOTLB cache hits and
> misses, inflight submitted and completed for Vhost async).
> 
> 
> 
> Changes in v3:
> ==============
> - Remove patch1, already applied
> - Fix memory leaks in error path (Chenbo)
> - Fix various typos (Chenbo)
> - Removed unused IOTLB counter (Chenbo)
> - Add generic packets stats in async enqueue path
> - Add Vhost async specific counters
> 
> Changes in v2:
> =============
> - Rebased on top of v22.03
> - Add documentation
> 
> Maxime Coquelin (5):
>    vhost: add per-virtqueue statistics support
>    net/vhost: move to Vhost library stats API
>    vhost: add statistics for guest notifications
>    vhost: add statistics for IOTLB
>    vhost: add statistics for in-flight packets
> 
>   doc/guides/prog_guide/vhost_lib.rst |  24 ++
>   drivers/net/vhost/rte_eth_vhost.c   | 348 ++++++++++------------------
>   lib/vhost/rte_vhost.h               |  99 ++++++++
>   lib/vhost/socket.c                  |   4 +-
>   lib/vhost/version.map               |   4 +-
>   lib/vhost/vhost.c                   | 125 +++++++++-
>   lib/vhost/vhost.h                   |  27 ++-
>   lib/vhost/virtio_net.c              |  61 +++++
>   8 files changed, 460 insertions(+), 232 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
  2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
@ 2022-05-11  7:24 ` Maxime Coquelin
  2022-05-17 13:23 ` Maxime Coquelin
  1 sibling, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-11  7:24 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand, i.maximets



On 5/10/22 22:17, Maxime Coquelin wrote:
> This series introduces a new Vhost API that provides
> per-virtqueue statistics to the application. It will be
> generally useful, but the initial motivation for this
> series was to be able to get virtqueue stats once the
> Virtio RSS feature is supported in the Vhost library.
> 
> The first patch introduces the new API and generic
> statistics. Patch 2 makes use of this API in the Vhost
> PMD. The last three patches introduce more specific
> counters (syscalls in the datapath, IOTLB cache hits and
> misses, inflight submitted and completed for Vhost async).
> 
> 
> 
> Changes in v3:
> ==============
> - Remove patch1, already applied
> - Fix memory leaks in error path (Chenbo)
> - Fix various typos (Chenbo)
> - Removed unused IOTLB counter (Chenbo)
> - Add generic packets stats in async enqueue path
> - Add Vhost async specific counters
> 
> Changes in v2:
> =============
> - Rebased on top of v22.03
> - Add documentation
> 
> Maxime Coquelin (5):
>    vhost: add per-virtqueue statistics support
>    net/vhost: move to Vhost library stats API
>    vhost: add statistics for guest notifications
>    vhost: add statistics for IOTLB
>    vhost: add statistics for in-flight packets
> 
>   doc/guides/prog_guide/vhost_lib.rst |  24 ++
>   drivers/net/vhost/rte_eth_vhost.c   | 348 ++++++++++------------------
>   lib/vhost/rte_vhost.h               |  99 ++++++++
>   lib/vhost/socket.c                  |   4 +-
>   lib/vhost/version.map               |   4 +-
>   lib/vhost/vhost.c                   | 125 +++++++++-
>   lib/vhost/vhost.h                   |  27 ++-
>   lib/vhost/virtio_net.c              |  61 +++++
>   8 files changed, 460 insertions(+), 232 deletions(-)
> 

Sorry, I forgot to set the subject prefix to v3.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 0/5] vhost: introduce per-virtqueue stats API
@ 2022-05-10 20:17 Maxime Coquelin
  2022-05-11  7:24 ` Maxime Coquelin
  2022-05-17 13:23 ` Maxime Coquelin
  0 siblings, 2 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
  To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin

This series introduces a new Vhost API that provides
per-virtqueue statistics to the application. It will be
generally useful, but the initial motivation for this
series was to be able to get virtqueue stats once the
Virtio RSS feature is supported in the Vhost library.

The first patch introduces the new API and generic
statistics. Patch 2 makes use of this API in the Vhost
PMD. The last three patches introduce more specific
counters (syscalls in the datapath, IOTLB cache hits and
misses, inflight submitted and completed for Vhost async).



Changes in v3:
==============
- Remove patch1, already applied
- Fix memory leaks in error path (Chenbo)
- Fix various typos (Chenbo)
- Removed unused IOTLB counter (Chenbo)
- Add generic packets stats in async enqueue path
- Add Vhost async specific counters

Changes in v2:
=============
- Rebased on top of v22.03
- Add documentation

Maxime Coquelin (5):
  vhost: add per-virtqueue statistics support
  net/vhost: move to Vhost library stats API
  vhost: add statistics for guest notifications
  vhost: add statistics for IOTLB
  vhost: add statistics for in-flight packets

 doc/guides/prog_guide/vhost_lib.rst |  24 ++
 drivers/net/vhost/rte_eth_vhost.c   | 348 ++++++++++------------------
 lib/vhost/rte_vhost.h               |  99 ++++++++
 lib/vhost/socket.c                  |   4 +-
 lib/vhost/version.map               |   4 +-
 lib/vhost/vhost.c                   | 125 +++++++++-
 lib/vhost/vhost.h                   |  27 ++-
 lib/vhost/virtio_net.c              |  61 +++++
 8 files changed, 460 insertions(+), 232 deletions(-)

-- 
2.35.1


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2022-05-17 13:33 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-27 14:56 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-01-27 14:56 ` [PATCH 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
2022-01-27 14:56 ` [PATCH 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
2022-04-21 14:09   ` Xia, Chenbo
2022-04-22 12:17     ` Maxime Coquelin
2022-01-27 14:56 ` [PATCH 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
2022-01-27 14:56 ` [PATCH 4/5] vhost: add statistics for guest notifications Maxime Coquelin
2022-01-27 14:56 ` [PATCH 5/5] vhost: add statistics for IOTLB Maxime Coquelin
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-05-11  7:24 ` Maxime Coquelin
2022-05-17 13:23 ` Maxime Coquelin
2022-05-17 13:33   ` Maxime Coquelin
