* [PATCH 0/5] vhost: introduce per-virtqueue stats API
@ 2022-05-10 20:17 Maxime Coquelin
2022-05-10 20:17 ` [PATCH 1/5] vhost: add per-virtqueue statistics support Maxime Coquelin
` (6 more replies)
0 siblings, 7 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This series introduces a new Vhost API that provides
per-virtqueue statistics to the application. It will be
generally useful, but the initial motivation for this
series was to be able to get virtqueue stats once the
Virtio RSS feature is supported in the Vhost library.
First patch introduces the new API and generic statistics.
Patch 2 makes use of this API in Vhost PMD.
The last three patches introduce more specific counters
(syscalls in the datapath, IOTLB cache hits and misses,
in-flight submitted and completed for vhost async).
Changes in v3:
==============
- Remove patch1, already applied
- Fix memory leaks in error path (Chenbo)
- Fix various typos (Chenbo)
- Remove unused IOTLB counter (Chenbo)
- Add generic packets stats in async enqueue path
- Add Vhost async specific counters
Changes in v2:
=============
- Rebased on top of v22.03
- Add documentation
Maxime Coquelin (5):
vhost: add per-virtqueue statistics support
net/vhost: move to Vhost library stats API
vhost: add statistics for guest notifications
vhost: add statistics for IOTLB
vhost: add statistics for in-flight packets
doc/guides/prog_guide/vhost_lib.rst | 24 ++
drivers/net/vhost/rte_eth_vhost.c | 348 ++++++++++------------------
lib/vhost/rte_vhost.h | 99 ++++++++
lib/vhost/socket.c | 4 +-
lib/vhost/version.map | 4 +-
lib/vhost/vhost.c | 125 +++++++++-
lib/vhost/vhost.h | 27 ++-
lib/vhost/virtio_net.c | 61 +++++
8 files changed, 460 insertions(+), 232 deletions(-)
--
2.35.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 1/5] vhost: add per-virtqueue statistics support
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
@ 2022-05-10 20:17 ` Maxime Coquelin
2022-05-11 13:56 ` Xia, Chenbo
2022-05-10 20:17 ` [PATCH 2/5] net/vhost: move to Vhost library stats API Maxime Coquelin
` (5 subsequent siblings)
6 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch introduces new APIs for the application
to query and reset per-virtqueue statistics. The
patch also introduces generic counters.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/prog_guide/vhost_lib.rst | 24 ++++++
lib/vhost/rte_vhost.h | 99 ++++++++++++++++++++++++
lib/vhost/socket.c | 4 +-
lib/vhost/version.map | 4 +-
lib/vhost/vhost.c | 112 +++++++++++++++++++++++++++-
lib/vhost/vhost.h | 18 ++++-
lib/vhost/virtio_net.c | 55 ++++++++++++++
7 files changed, 311 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index f287b76ebf..0f91215a90 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -130,6 +130,15 @@ The following is an overview of some key Vhost API functions:
It is disabled by default.
+ - ``RTE_VHOST_USER_NET_STATS_ENABLE``
+
+ Per-virtqueue statistics collection will be enabled when this flag is set.
+   When enabled, the application may use rte_vhost_vring_stats_get_names()
+   and rte_vhost_vring_stats_get() to collect statistics, and
+   rte_vhost_vring_stats_reset() to reset them.
+
+   It is disabled by default.
+
* ``rte_vhost_driver_set_features(path, features)``
This function sets the feature bits the vhost-user driver supports. The
@@ -282,6 +291,21 @@ The following is an overview of some key Vhost API functions:
Clear inflight packets which are submitted to DMA engine in vhost async data
path. Completed packets are returned to applications through ``pkts``.
+* ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)``
+
+ This function returns the names of the queue statistics. It requires
+ statistics collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_get(int vid, uint16_t queue_id, struct rte_vhost_stat *stats, unsigned int n)``
+
+ This function returns the queue statistics. It requires statistics
+ collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)``
+
+ This function resets the queue statistics. It requires statistics
+ collection to be enabled at registration time.
+
Vhost-user Implementations
--------------------------
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index c733f857c6..266048f3e5 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -39,6 +39,7 @@ extern "C" {
#define RTE_VHOST_USER_LINEARBUF_SUPPORT (1ULL << 6)
#define RTE_VHOST_USER_ASYNC_COPY (1ULL << 7)
#define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS (1ULL << 8)
+#define RTE_VHOST_USER_NET_STATS_ENABLE (1ULL << 9)
/* Features. */
#ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
@@ -321,6 +322,32 @@ struct rte_vhost_power_monitor_cond {
uint8_t match;
};
+/** Maximum name length for the statistics counters */
+#define RTE_VHOST_STATS_NAME_SIZE 64
+
+/**
+ * Vhost virtqueue statistics structure
+ *
+ * This structure is used by rte_vhost_vring_stats_get() to provide
+ * virtqueue statistics to the calling application.
+ * It maps a name ID, corresponding to an index in the array returned
+ * by rte_vhost_vring_stats_get_names(), to a statistic value.
+ */
+struct rte_vhost_stat {
+ uint64_t id; /**< The index in xstats name array. */
+ uint64_t value; /**< The statistic counter value. */
+};
+
+/**
+ * Vhost virtqueue statistic name element
+ *
+ * This structure is used by rte_vhost_vring_stats_get_names() to
+ * provide virtqueue statistics names to the calling application.
+ */
+struct rte_vhost_stat_name {
+ char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
+};
+
/**
* Convert guest physical address to host virtual address
*
@@ -1063,6 +1090,78 @@ __rte_experimental
int
rte_vhost_slave_config_change(int vid, bool need_reply);
+/**
+ * Retrieve names of statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that the 'names' and 'stats' arrays are matched
+ * by array index: names[i].name => stats[i].value
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @param name
+ *   An array of at least *size* elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param size
+ *   The number of elements in the *name* array.
+ * @return
+ *   - Success if greater than 0 and lower or equal to *size*. The return value
+ *     indicates the number of elements filled in the *name* array.
+ *   - Failure if greater than *size*. The return value indicates the number of
+ *     elements in the *name* array that should be given to succeed.
+ *   - Failure if lower than 0. The device ID or queue ID is invalid or
+ *     statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+ struct rte_vhost_stat_name *name, unsigned int size);
+
+/**
+ * Retrieve statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that the 'names' and 'stats' arrays are matched
+ * by array index: names[i].name => stats[i].value
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @param stats
+ *   A pointer to an array of rte_vhost_stat structures to be filled with
+ *   virtqueue statistics IDs and values.
+ * @param n
+ * The number of elements in stats array.
+ * @return
+ * - Success if greater than 0 and lower or equal to *n*. The return value
+ * indicates the number of elements filled in the *stats* array.
+ *   - Failure if greater than *n*. The return value indicates the number of
+ *     elements in the *stats* array that should be given to succeed.
+ * - Failure if lower than 0. The device ID or queue ID is invalid, or
+ * statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+ struct rte_vhost_stat *stats, unsigned int n);
+
+/**
+ * Reset statistics of a Vhost virtqueue.
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @return
+ * - Success if 0. Statistics have been reset.
+ * - Failure if lower than 0. The device ID or queue ID is invalid, or
+ * statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index b304339de9..baf6c338d3 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -42,6 +42,7 @@ struct vhost_user_socket {
bool linearbuf;
bool async_copy;
bool net_compliant_ol_flags;
+ bool stats_enabled;
/*
* The "supported_features" indicates the feature bits the
@@ -227,7 +228,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
vhost_set_ifname(vid, vsocket->path, size);
vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
- vsocket->net_compliant_ol_flags);
+ vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
@@ -863,6 +864,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
+ vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
if (vsocket->async_copy &&
(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 5841315386..34df41556d 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -90,7 +90,9 @@ EXPERIMENTAL {
# added in 22.07
rte_vhost_async_get_inflight_thread_unsafe;
-
+ rte_vhost_vring_stats_get_names;
+ rte_vhost_vring_stats_get;
+ rte_vhost_vring_stats_reset;
};
INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index df0bb9d043..b22f10de72 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -24,6 +24,28 @@
struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+struct vhost_vq_stats_name_off {
+ char name[RTE_VHOST_STATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
+ {"good_packets", offsetof(struct vhost_virtqueue, stats.packets)},
+ {"good_bytes", offsetof(struct vhost_virtqueue, stats.bytes)},
+ {"multicast_packets", offsetof(struct vhost_virtqueue, stats.multicast)},
+ {"broadcast_packets", offsetof(struct vhost_virtqueue, stats.broadcast)},
+ {"undersize_packets", offsetof(struct vhost_virtqueue, stats.size_bins[0])},
+ {"size_64_packets", offsetof(struct vhost_virtqueue, stats.size_bins[1])},
+ {"size_65_127_packets", offsetof(struct vhost_virtqueue, stats.size_bins[2])},
+ {"size_128_255_packets", offsetof(struct vhost_virtqueue, stats.size_bins[3])},
+ {"size_256_511_packets", offsetof(struct vhost_virtqueue, stats.size_bins[4])},
+ {"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
+ {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
+ {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+};
+
+#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
+
/* Called with iotlb_lock read-locked */
uint64_t
__vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
@@ -755,7 +777,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len)
}
void
-vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
+vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled)
{
struct virtio_net *dev = get_device(vid);
@@ -770,6 +792,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
else
dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
+ if (stats_enabled)
+ dev->flags |= VIRTIO_DEV_STATS_ENABLED;
+ else
+ dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
}
void
@@ -1971,5 +1997,89 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
return 0;
}
+
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+ struct rte_vhost_stat_name *name, unsigned int size)
+{
+ struct virtio_net *dev = get_device(vid);
+ unsigned int i;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return -1;
+
+ if (name == NULL || size < VHOST_NB_VQ_STATS)
+ return VHOST_NB_VQ_STATS;
+
+ for (i = 0; i < VHOST_NB_VQ_STATS; i++)
+ snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
+ (queue_id & 1) ? "rx" : "tx",
+ queue_id / 2, vhost_vq_stat_strings[i].name);
+
+ return VHOST_NB_VQ_STATS;
+}
+
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+ struct rte_vhost_stat *stats, unsigned int n)
+{
+ struct virtio_net *dev = get_device(vid);
+ struct vhost_virtqueue *vq;
+ unsigned int i;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return -1;
+
+ if (stats == NULL || n < VHOST_NB_VQ_STATS)
+ return VHOST_NB_VQ_STATS;
+
+ vq = dev->virtqueue[queue_id];
+
+ rte_spinlock_lock(&vq->access_lock);
+ for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
+ stats[i].value =
+ *(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset);
+ stats[i].id = i;
+ }
+ rte_spinlock_unlock(&vq->access_lock);
+
+ return VHOST_NB_VQ_STATS;
+}
+
+int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
+{
+ struct virtio_net *dev = get_device(vid);
+ struct vhost_virtqueue *vq;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return -1;
+
+ vq = dev->virtqueue[queue_id];
+
+ rte_spinlock_lock(&vq->access_lock);
+ memset(&vq->stats, 0, sizeof(vq->stats));
+ rte_spinlock_unlock(&vq->access_lock);
+
+ return 0;
+}
+
RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index a9edc271aa..01b97011aa 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -37,6 +37,8 @@
#define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4)
/* Used to indicate that the virtio_net tx code should fill TX ol_flags */
#define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5)
+/* Used to indicate the application has requested statistics collection */
+#define VIRTIO_DEV_STATS_ENABLED ((uint32_t)1 << 6)
/* Backend value set by guest. */
#define VIRTIO_DEV_STOPPED -1
@@ -121,6 +123,18 @@ struct vring_used_elem_packed {
uint32_t count;
};
+/**
+ * Virtqueue statistics
+ */
+struct virtqueue_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t multicast;
+ uint64_t broadcast;
+ /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+ uint64_t size_bins[8];
+};
+
/**
* iovec
*/
@@ -305,6 +319,7 @@ struct vhost_virtqueue {
#define VIRTIO_UNINITIALIZED_NOTIF (-1)
struct vhost_vring_addr ring_addrs;
+ struct virtqueue_stats stats;
} __rte_cache_aligned;
/* Virtio device status as per Virtio specification */
@@ -780,7 +795,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev);
void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
-void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags);
+void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled);
void vhost_enable_extbuf(int vid);
void vhost_enable_linearbuf(int vid);
int vhost_enable_guest_notification(struct virtio_net *dev,
@@ -957,5 +972,4 @@ mbuf_is_consumed(struct rte_mbuf *m)
return true;
}
-
#endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 5f432b0d77..b1ea9fa4a5 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -47,6 +47,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
}
+/*
+ * This function must be called with virtqueue's access_lock taken.
+ */
+static inline void
+vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq,
+ struct rte_mbuf **pkts, uint16_t count)
+{
+ struct virtqueue_stats *stats = &vq->stats;
+ int i;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return;
+
+ for (i = 0; i < count; i++) {
+ struct rte_ether_addr *ea;
+ struct rte_mbuf *pkt = pkts[i];
+ uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+ stats->packets++;
+ stats->bytes += pkt_len;
+
+ if (pkt_len == 64) {
+ stats->size_bins[1]++;
+ } else if (pkt_len > 64 && pkt_len < 1024) {
+ uint32_t bin;
+
+ /* count zeros, and offset into correct bin */
+ bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
+ stats->size_bins[bin]++;
+ } else {
+ if (pkt_len < 64)
+ stats->size_bins[0]++;
+ else if (pkt_len < 1519)
+ stats->size_bins[6]++;
+ else
+ stats->size_bins[7]++;
+ }
+
+ ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+ if (rte_is_multicast_ether_addr(ea)) {
+ if (rte_is_broadcast_ether_addr(ea))
+ stats->broadcast++;
+ else
+ stats->multicast++;
+ }
+ }
+}
+
static __rte_always_inline int64_t
vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq,
int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx,
@@ -1509,6 +1557,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
else
nb_tx = virtio_dev_rx_split(dev, vq, pkts, count);
+ vhost_queue_stats_update(dev, vq, pkts, nb_tx);
+
out:
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_rd_unlock(vq);
@@ -2064,6 +2114,8 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+ vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+
out:
rte_spinlock_unlock(&vq->access_lock);
@@ -3113,6 +3165,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
* learning table will get updated first.
*/
pkts[0] = rarp_mbuf;
+ vhost_queue_stats_update(dev, vq, pkts, 1);
pkts++;
count -= 1;
}
@@ -3129,6 +3182,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
}
+ vhost_queue_stats_update(dev, vq, pkts, count);
+
out:
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_rd_unlock(vq);
--
2.35.1
* [PATCH 2/5] net/vhost: move to Vhost library stats API
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-05-10 20:17 ` [PATCH 1/5] vhost: add per-virtqueue statistics support Maxime Coquelin
@ 2022-05-10 20:17 ` Maxime Coquelin
2022-05-10 20:17 ` [PATCH 3/5] vhost: add statistics for guest notifications Maxime Coquelin
` (4 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
Now that the Vhost library provides statistics APIs, this
patch replaces the Vhost PMD's extended statistics
implementation with calls to the new API. This makes it
possible to expose counters that cannot be implemented at
the PMD level.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
1 file changed, 122 insertions(+), 226 deletions(-)
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a248a65df4..8dee629fb0 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
}
};
-enum vhost_xstats_pkts {
- VHOST_UNDERSIZE_PKT = 0,
- VHOST_64_PKT,
- VHOST_65_TO_127_PKT,
- VHOST_128_TO_255_PKT,
- VHOST_256_TO_511_PKT,
- VHOST_512_TO_1023_PKT,
- VHOST_1024_TO_1522_PKT,
- VHOST_1523_TO_MAX_PKT,
- VHOST_BROADCAST_PKT,
- VHOST_MULTICAST_PKT,
- VHOST_UNICAST_PKT,
- VHOST_PKT,
- VHOST_BYTE,
- VHOST_MISSED_PKT,
- VHOST_ERRORS_PKT,
- VHOST_ERRORS_FRAGMENTED,
- VHOST_ERRORS_JABBER,
- VHOST_UNKNOWN_PROTOCOL,
- VHOST_XSTATS_MAX,
-};
-
struct vhost_stats {
uint64_t pkts;
uint64_t bytes;
uint64_t missed_pkts;
- uint64_t xstats[VHOST_XSTATS_MAX];
};
struct vhost_queue {
@@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
-#define VHOST_XSTATS_NAME_SIZE 64
-
-struct vhost_xstats_name_off {
- char name[VHOST_XSTATS_NAME_SIZE];
- uint64_t offset;
-};
-
-/* [rx]_is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
- {"good_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
- {"total_bytes",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
- {"missed_pkts",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
- {"broadcast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
- {"multicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
- {"unicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
- {"undersize_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
- {"size_64_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
- {"size_65_to_127_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
- {"size_128_to_255_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
- {"size_256_to_511_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
- {"size_512_to_1023_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
- {"size_1024_to_1522_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
- {"size_1523_to_max_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
- {"errors_with_bad_CRC",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
- {"fragmented_errors",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
- {"jabber_errors",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
- {"unknown_protos_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
-};
-
-/* [tx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
- {"good_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
- {"total_bytes",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
- {"missed_pkts",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
- {"broadcast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
- {"multicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
- {"unicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
- {"undersize_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
- {"size_64_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
- {"size_65_to_127_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
- {"size_128_to_255_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
- {"size_256_to_511_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
- {"size_512_to_1023_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
- {"size_1024_to_1522_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
- {"size_1523_to_max_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
- {"errors_with_bad_CRC",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
- sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
- sizeof(vhost_txport_stat_strings[0]))
-
static int
vhost_dev_xstats_reset(struct rte_eth_dev *dev)
{
- struct vhost_queue *vq = NULL;
- unsigned int i = 0;
+ struct vhost_queue *vq;
+ int ret, i;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
vq = dev->data->rx_queues[i];
- if (!vq)
- continue;
- memset(&vq->stats, 0, sizeof(vq->stats));
+ ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+ if (ret < 0)
+ return ret;
}
+
for (i = 0; i < dev->data->nb_tx_queues; i++) {
vq = dev->data->tx_queues[i];
- if (!vq)
- continue;
- memset(&vq->stats, 0, sizeof(vq->stats));
+ ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+ if (ret < 0)
+ return ret;
}
return 0;
}
static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names,
- unsigned int limit __rte_unused)
+ unsigned int limit)
{
- unsigned int t = 0;
- int count = 0;
- int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+ struct rte_vhost_stat_name *name;
+ struct vhost_queue *vq;
+ int ret, i, count = 0, nstats = 0;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
- if (!xstats_names)
+ nstats += ret;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
+ }
+
+ if (!xstats_names || limit < (unsigned int)nstats)
return nstats;
- for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "rx_%s", vhost_rxport_stat_strings[t].name);
- count++;
- }
- for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "tx_%s", vhost_txport_stat_strings[t].name);
- count++;
+
+ name = calloc(nstats, sizeof(*name));
+ if (!name)
+ return -1;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+ name + count, nstats - count);
+ if (ret < 0) {
+ free(name);
+ return ret;
+ }
+
+ count += ret;
}
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+ name + count, nstats - count);
+ if (ret < 0) {
+ free(name);
+ return ret;
+ }
+
+ count += ret;
+ }
+
+ for (i = 0; i < count; i++)
+ strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+ free(name);
+
return count;
}
@@ -279,86 +210,67 @@ static int
vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i;
- unsigned int t;
- unsigned int count = 0;
- struct vhost_queue *vq = NULL;
- unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
- if (n < nxstats)
- return nxstats;
-
- for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
- xstats[count].value = 0;
- for (i = 0; i < dev->data->nb_rx_queues; i++) {
- vq = dev->data->rx_queues[i];
- if (!vq)
- continue;
- xstats[count].value +=
- *(uint64_t *)(((char *)vq)
- + vhost_rxport_stat_strings[t].offset);
- }
- xstats[count].id = count;
- count++;
+ struct rte_vhost_stat *stats;
+ struct vhost_queue *vq;
+ int ret, i, count = 0, nstats = 0;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
}
- for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
- xstats[count].value = 0;
- for (i = 0; i < dev->data->nb_tx_queues; i++) {
- vq = dev->data->tx_queues[i];
- if (!vq)
- continue;
- xstats[count].value +=
- *(uint64_t *)(((char *)vq)
- + vhost_txport_stat_strings[t].offset);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
+ }
+
+ if (!xstats || n < (unsigned int)nstats)
+ return nstats;
+
+ stats = calloc(nstats, sizeof(*stats));
+ if (!stats)
+ return -1;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+ stats + count, nstats - count);
+ if (ret < 0) {
+ free(stats);
+ return ret;
}
- xstats[count].id = count;
- count++;
+
+ count += ret;
}
- return count;
-}
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
- struct rte_mbuf *mbuf)
-{
- struct rte_ether_addr *ea = NULL;
- struct vhost_stats *pstats = &vq->stats;
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+ stats + count, nstats - count);
+ if (ret < 0) {
+ free(stats);
+ return ret;
+ }
- ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
- if (rte_is_multicast_ether_addr(ea)) {
- if (rte_is_broadcast_ether_addr(ea))
- pstats->xstats[VHOST_BROADCAST_PKT]++;
- else
- pstats->xstats[VHOST_MULTICAST_PKT]++;
- } else {
- pstats->xstats[VHOST_UNICAST_PKT]++;
+ count += ret;
}
-}
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
- uint32_t pkt_len = 0;
- uint64_t index;
- struct vhost_stats *pstats = &vq->stats;
-
- pstats->xstats[VHOST_PKT]++;
- pkt_len = buf->pkt_len;
- if (pkt_len == 64) {
- pstats->xstats[VHOST_64_PKT]++;
- } else if (pkt_len > 64 && pkt_len < 1024) {
- index = (sizeof(pkt_len) * 8)
- - __builtin_clz(pkt_len) - 5;
- pstats->xstats[index]++;
- } else {
- if (pkt_len < 64)
- pstats->xstats[VHOST_UNDERSIZE_PKT]++;
- else if (pkt_len <= 1522)
- pstats->xstats[VHOST_1024_TO_1522_PKT]++;
- else if (pkt_len > 1522)
- pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
- }
- vhost_count_xcast_packets(vq, buf);
+ for (i = 0; i < count; i++) {
+ xstats[i].id = stats[i].id;
+ xstats[i].value = stats[i].value;
+ }
+
+ free(stats);
+
+ return nstats;
}
static uint16_t
@@ -402,9 +314,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
rte_vlan_strip(bufs[i]);
r->stats.bytes += bufs[i]->pkt_len;
- r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
- vhost_update_single_packet_xstats(r, bufs[i]);
}
out:
@@ -461,10 +370,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
break;
}
- for (i = 0; likely(i < nb_tx); i++) {
+ for (i = 0; likely(i < nb_tx); i++)
nb_bytes += bufs[i]->pkt_len;
- vhost_update_single_packet_xstats(r, bufs[i]);
- }
nb_missed = nb_bufs - nb_tx;
@@ -472,17 +379,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
r->stats.bytes += nb_bytes;
r->stats.missed_pkts += nb_missed;
- r->stats.xstats[VHOST_BYTE] += nb_bytes;
- r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
- r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
- /* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
- * ifHCOutBroadcastPkts counters are increased when packets are not
- * transmitted successfully.
- */
- for (i = nb_tx; i < nb_bufs; i++)
- vhost_count_xcast_packets(r, bufs[i]);
-
for (i = 0; likely(i < nb_tx); i++)
rte_pktmbuf_free(bufs[i]);
out:
@@ -1566,7 +1462,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
int ret = 0;
char *iface_name;
uint16_t queues;
- uint64_t flags = 0;
+ uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
uint64_t disable_flags = 0;
int client_mode = 0;
int iommu_support = 0;
--
2.35.1
* [PATCH 3/5] vhost: add statistics for guest notifications
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-05-10 20:17 ` [PATCH 1/5] vhost: add per-virtqueue statistics support Maxime Coquelin
2022-05-10 20:17 ` [PATCH 2/5] net/vhost: move to Vhost library stats API Maxime Coquelin
@ 2022-05-10 20:17 ` Maxime Coquelin
2022-05-10 20:17 ` [PATCH 4/5] vhost: add statistics for IOTLB Maxime Coquelin
` (3 subsequent siblings)
6 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch adds a new virtqueue statistic counting guest
notifications. From the hypervisor side, it helps deduce
whether the corresponding guest Virtio device is using the
kernel Virtio-net driver or the DPDK Virtio PMD.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
lib/vhost/vhost.c | 1 +
lib/vhost/vhost.h | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index b22f10de72..fa708c1f9c 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -42,6 +42,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+ {"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 01b97011aa..13c5c2266d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -133,6 +133,7 @@ struct virtqueue_stats {
uint64_t broadcast;
/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
uint64_t size_bins[8];
+ uint64_t guest_notifications;
};
/**
@@ -871,6 +872,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
(vq->callfd >= 0)) ||
unlikely(!signalled_used_valid)) {
eventfd_write(vq->callfd, (eventfd_t) 1);
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.guest_notifications++;
if (dev->notify_ops->guest_notified)
dev->notify_ops->guest_notified(dev->vid);
}
@@ -879,6 +882,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT)
&& (vq->callfd >= 0)) {
eventfd_write(vq->callfd, (eventfd_t)1);
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.guest_notifications++;
if (dev->notify_ops->guest_notified)
dev->notify_ops->guest_notified(dev->vid);
}
--
2.35.1
* [PATCH 4/5] vhost: add statistics for IOTLB
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (2 preceding siblings ...)
2022-05-10 20:17 ` [PATCH 3/5] vhost: add statistics for guest notifications Maxime Coquelin
@ 2022-05-10 20:17 ` Maxime Coquelin
2022-05-11 13:56 ` Xia, Chenbo
2022-05-10 20:17 ` [PATCH 5/5] vhost: add statistics for in-flight packets Maxime Coquelin
` (2 subsequent siblings)
6 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch adds statistics for IOTLB hits and misses.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 10 +++++++++-
lib/vhost/vhost.h | 2 ++
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index fa708c1f9c..721b3a3247 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
+ {"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
+ {"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
@@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
tmp_size = *size;
vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
- if (tmp_size == *size)
+ if (tmp_size == *size) {
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.iotlb_hits++;
return vva;
+ }
+
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.iotlb_misses++;
iova += tmp_size;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 13c5c2266d..872675207e 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -134,6 +134,8 @@ struct virtqueue_stats {
/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
uint64_t size_bins[8];
uint64_t guest_notifications;
+ uint64_t iotlb_hits;
+ uint64_t iotlb_misses;
};
/**
--
2.35.1
* [PATCH 5/5] vhost: add statistics for in-flight packets
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (3 preceding siblings ...)
2022-05-10 20:17 ` [PATCH 4/5] vhost: add statistics for IOTLB Maxime Coquelin
@ 2022-05-10 20:17 ` Maxime Coquelin
2022-05-11 13:57 ` Xia, Chenbo
2022-05-11 7:24 ` [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-05-17 13:23 ` Maxime Coquelin
6 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 20:17 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch adds statistics for in-flight packet submission
and completion when Vhost async mode is used.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 2 ++
lib/vhost/vhost.h | 2 ++
lib/vhost/virtio_net.c | 6 ++++++
3 files changed, 10 insertions(+)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 721b3a3247..d9d31b2d03 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -45,6 +45,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
{"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
+ {"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
+ {"inflight_completed", offsetof(struct vhost_virtqueue, stats.inflight_completed)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 872675207e..1573d0afe9 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -136,6 +136,8 @@ struct virtqueue_stats {
uint64_t guest_notifications;
uint64_t iotlb_hits;
uint64_t iotlb_misses;
+ uint64_t inflight_submitted;
+ uint64_t inflight_completed;
};
/**
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index b1ea9fa4a5..c8905c770a 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2115,6 +2115,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+ vq->stats.inflight_completed += n_pkts_cpl;
out:
rte_spinlock_unlock(&vq->access_lock);
@@ -2158,6 +2159,9 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id);
+ vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
+ vq->stats.inflight_completed += n_pkts_cpl;
+
return n_pkts_cpl;
}
@@ -2207,6 +2211,8 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
nb_tx = virtio_dev_rx_async_submit_split(dev, vq, queue_id,
pkts, count, dma_id, vchan_id);
+ vq->stats.inflight_submitted += nb_tx;
+
out:
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_rd_unlock(vq);
--
2.35.1
* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (4 preceding siblings ...)
2022-05-10 20:17 ` [PATCH 5/5] vhost: add statistics for in-flight packets Maxime Coquelin
@ 2022-05-11 7:24 ` Maxime Coquelin
2022-05-17 13:23 ` Maxime Coquelin
6 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-11 7:24 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets
On 5/10/22 22:17, Maxime Coquelin wrote:
> This series introduces a new Vhost API that provides
> per-virtqueue statistics to the application. It will be
> generally useful, but the initial motivation for this series
> was to be able to get virtqueue stats once the Virtio
> RSS feature is supported in the Vhost library.
>
> The first patch introduces the new API and generic statistics.
> Patch 2 makes use of this API in the Vhost PMD.
> The last 3 patches introduce more specific counters
> (syscalls in the data path, IOTLB cache hits and misses,
> in-flight submissions and completions for Vhost async).
>
>
>
> Changes in v3:
> ==============
> - Remove patch1, already applied
> - Fix memory leaks in error path (Chenbo)
> - Fix various typos (Chenbo)
> - Removed unused IOTLB counter (Chenbo)
> - Add generic packets stats in async enqueue path
> - Add Vhost async specific counters
>
> Changes in v2:
> =============
> - Rebased on top of v22.03
> - Add documentation
>
> Maxime Coquelin (5):
> vhost: add per-virtqueue statistics support
> net/vhost: move to Vhost library stats API
> vhost: add statistics for guest notifications
> vhost: add statistics for IOTLB
> vhost: add statistics for in-flight packets
>
> doc/guides/prog_guide/vhost_lib.rst | 24 ++
> drivers/net/vhost/rte_eth_vhost.c | 348 ++++++++++------------------
> lib/vhost/rte_vhost.h | 99 ++++++++
> lib/vhost/socket.c | 4 +-
> lib/vhost/version.map | 4 +-
> lib/vhost/vhost.c | 125 +++++++++-
> lib/vhost/vhost.h | 27 ++-
> lib/vhost/virtio_net.c | 61 +++++
> 8 files changed, 460 insertions(+), 232 deletions(-)
>
Sorry, I forgot to set the subject prefix to v3.
* RE: [PATCH 1/5] vhost: add per-virtqueue statistics support
2022-05-10 20:17 ` [PATCH 1/5] vhost: add per-virtqueue statistics support Maxime Coquelin
@ 2022-05-11 13:56 ` Xia, Chenbo
0 siblings, 0 replies; 12+ messages in thread
From: Xia, Chenbo @ 2022-05-11 13:56 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, May 11, 2022 4:17 AM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH 1/5] vhost: add per-virtqueue statistics support
>
> This patch introduces new APIs for the application
> to query and reset per-virtqueue statistics. The
> patch also introduces generic counters.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> doc/guides/prog_guide/vhost_lib.rst | 24 ++++++
> lib/vhost/rte_vhost.h | 99 ++++++++++++++++++++++++
> lib/vhost/socket.c | 4 +-
> lib/vhost/version.map | 4 +-
> lib/vhost/vhost.c | 112 +++++++++++++++++++++++++++-
> lib/vhost/vhost.h | 18 ++++-
> lib/vhost/virtio_net.c | 55 ++++++++++++++
> 7 files changed, 311 insertions(+), 5 deletions(-)
>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH 4/5] vhost: add statistics for IOTLB
2022-05-10 20:17 ` [PATCH 4/5] vhost: add statistics for IOTLB Maxime Coquelin
@ 2022-05-11 13:56 ` Xia, Chenbo
0 siblings, 0 replies; 12+ messages in thread
From: Xia, Chenbo @ 2022-05-11 13:56 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, May 11, 2022 4:17 AM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH 4/5] vhost: add statistics for IOTLB
>
> This patch adds statistics for IOTLB hits and misses.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/vhost.c | 10 +++++++++-
> lib/vhost/vhost.h | 2 ++
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index fa708c1f9c..721b3a3247 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off
> vhost_vq_stat_strings[] = {
> {"size_1024_1518_packets", offsetof(struct vhost_virtqueue,
> stats.size_bins[6])},
> {"size_1519_max_packets", offsetof(struct vhost_virtqueue,
> stats.size_bins[7])},
> {"guest_notifications", offsetof(struct vhost_virtqueue,
> stats.guest_notifications)},
> + {"iotlb_hits", offsetof(struct vhost_virtqueue,
> stats.iotlb_hits)},
> + {"iotlb_misses", offsetof(struct vhost_virtqueue,
> stats.iotlb_misses)},
> };
>
> #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
> @@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
> tmp_size = *size;
>
> vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
> - if (tmp_size == *size)
> + if (tmp_size == *size) {
> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
> + vq->stats.iotlb_hits++;
> return vva;
> + }
> +
> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
> + vq->stats.iotlb_misses++;
>
> iova += tmp_size;
>
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 13c5c2266d..872675207e 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -134,6 +134,8 @@ struct virtqueue_stats {
> /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
> uint64_t size_bins[8];
> uint64_t guest_notifications;
> + uint64_t iotlb_hits;
> + uint64_t iotlb_misses;
> };
>
> /**
> --
> 2.35.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* RE: [PATCH 5/5] vhost: add statistics for in-flight packets
2022-05-10 20:17 ` [PATCH 5/5] vhost: add statistics for in-flight packets Maxime Coquelin
@ 2022-05-11 13:57 ` Xia, Chenbo
0 siblings, 0 replies; 12+ messages in thread
From: Xia, Chenbo @ 2022-05-11 13:57 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, May 11, 2022 4:17 AM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH 5/5] vhost: add statistics for in-flight packets
>
> This patch adds statistics for in-flight packet submission
> and completion when Vhost async mode is used.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/vhost.c | 2 ++
> lib/vhost/vhost.h | 2 ++
> lib/vhost/virtio_net.c | 6 ++++++
> 3 files changed, 10 insertions(+)
>
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 721b3a3247..d9d31b2d03 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -45,6 +45,8 @@ static const struct vhost_vq_stats_name_off
> vhost_vq_stat_strings[] = {
> {"guest_notifications", offsetof(struct vhost_virtqueue,
> stats.guest_notifications)},
> {"iotlb_hits", offsetof(struct vhost_virtqueue,
> stats.iotlb_hits)},
> {"iotlb_misses", offsetof(struct vhost_virtqueue,
> stats.iotlb_misses)},
> + {"inflight_submitted", offsetof(struct vhost_virtqueue,
> stats.inflight_submitted)},
> + {"inflight_completed", offsetof(struct vhost_virtqueue,
> stats.inflight_completed)},
> };
>
> #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 872675207e..1573d0afe9 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -136,6 +136,8 @@ struct virtqueue_stats {
> uint64_t guest_notifications;
> uint64_t iotlb_hits;
> uint64_t iotlb_misses;
> + uint64_t inflight_submitted;
> + uint64_t inflight_completed;
> };
>
> /**
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index b1ea9fa4a5..c8905c770a 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -2115,6 +2115,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t
> queue_id,
> n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
> dma_id, vchan_id);
>
> vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
> + vq->stats.inflight_completed += n_pkts_cpl;
>
> out:
> rte_spinlock_unlock(&vq->access_lock);
> @@ -2158,6 +2159,9 @@ rte_vhost_clear_queue_thread_unsafe(int vid,
> uint16_t queue_id,
>
> n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count,
> dma_id, vchan_id);
>
> + vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl);
> + vq->stats.inflight_completed += n_pkts_cpl;
> +
> return n_pkts_cpl;
> }
>
> @@ -2207,6 +2211,8 @@ virtio_dev_rx_async_submit(struct virtio_net *dev,
> uint16_t queue_id,
> nb_tx = virtio_dev_rx_async_submit_split(dev, vq, queue_id,
> pkts, count, dma_id, vchan_id);
>
> + vq->stats.inflight_submitted += nb_tx;
> +
> out:
> if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> vhost_user_iotlb_rd_unlock(vq);
> --
> 2.35.1
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
2022-05-10 20:17 [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (5 preceding siblings ...)
2022-05-11 7:24 ` [PATCH 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
@ 2022-05-17 13:23 ` Maxime Coquelin
2022-05-17 13:33 ` Maxime Coquelin
6 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-17 13:23 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets
On 5/10/22 22:17, Maxime Coquelin wrote:
> This series introduces a new Vhost API that provides
> per-virtqueue statistics to the application. It will be
> generally useful, but the initial motivation for this series
> was to be able to get virtqueue stats once the Virtio
> RSS feature is supported in the Vhost library.
>
> The first patch introduces the new API and generic statistics.
> Patch 2 makes use of this API in the Vhost PMD.
> The last 3 patches introduce more specific counters
> (syscalls in the data path, IOTLB cache hits and misses,
> in-flight submissions and completions for Vhost async).
>
>
>
> Changes in v3:
> ==============
> - Remove patch1, already applied
> - Fix memory leaks in error path (Chenbo)
> - Fix various typos (Chenbo)
> - Removed unused IOTLB counter (Chenbo)
> - Add generic packets stats in async enqueue path
> - Add Vhost async specific counters
>
> Changes in v2:
> =============
> - Rebased on top of v22.03
> - Add documentation
>
> Maxime Coquelin (5):
> vhost: add per-virtqueue statistics support
> net/vhost: move to Vhost library stats API
> vhost: add statistics for guest notifications
> vhost: add statistics for IOTLB
> vhost: add statistics for in-flight packets
>
> doc/guides/prog_guide/vhost_lib.rst | 24 ++
> drivers/net/vhost/rte_eth_vhost.c | 348 ++++++++++------------------
> lib/vhost/rte_vhost.h | 99 ++++++++
> lib/vhost/socket.c | 4 +-
> lib/vhost/version.map | 4 +-
> lib/vhost/vhost.c | 125 +++++++++-
> lib/vhost/vhost.h | 27 ++-
> lib/vhost/virtio_net.c | 61 +++++
> 8 files changed, 460 insertions(+), 232 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
* Re: [PATCH 0/5] vhost: introduce per-virtqueue stats API
2022-05-17 13:23 ` Maxime Coquelin
@ 2022-05-17 13:33 ` Maxime Coquelin
0 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-17 13:33 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets
On 5/17/22 15:23, Maxime Coquelin wrote:
>
>
> On 5/10/22 22:17, Maxime Coquelin wrote:
>> This series introduces a new Vhost API that provides
>> per-virtqueue statistics to the application. It will be
>> generally useful, but the initial motivation for this series
>> was to be able to get virtqueue stats once the Virtio
>> RSS feature is supported in the Vhost library.
>>
>> The first patch introduces the new API and generic statistics.
>> Patch 2 makes use of this API in the Vhost PMD.
>> The last 3 patches introduce more specific counters
>> (syscalls in the data path, IOTLB cache hits and misses,
>> in-flight submissions and completions for Vhost async).
>>
>>
>>
>> Changes in v3:
>> ==============
>> - Remove patch1, already applied
>> - Fix memory leaks in error path (Chenbo)
>> - Fix various typos (Chenbo)
>> - Removed unused IOTLB counter (Chenbo)
>> - Add generic packets stats in async enqueue path
>> - Add Vhost async specific counters
>>
>> Changes in v2:
>> =============
>> - Rebased on top of v22.03
>> - Add documentation
>>
>> Maxime Coquelin (5):
>> vhost: add per-virtqueue statistics support
>> net/vhost: move to Vhost library stats API
>> vhost: add statistics for guest notifications
>> vhost: add statistics for IOTLB
>> vhost: add statistics for in-flight packets
>>
>> doc/guides/prog_guide/vhost_lib.rst | 24 ++
>> drivers/net/vhost/rte_eth_vhost.c | 348 ++++++++++------------------
>> lib/vhost/rte_vhost.h | 99 ++++++++
>> lib/vhost/socket.c | 4 +-
>> lib/vhost/version.map | 4 +-
>> lib/vhost/vhost.c | 125 +++++++++-
>> lib/vhost/vhost.h | 27 ++-
>> lib/vhost/virtio_net.c | 61 +++++
>> 8 files changed, 460 insertions(+), 232 deletions(-)
>>
Applied to dpdk-next-virtio/main.
Thanks,
Maxime