* [PATCH v2 1/5] vhost: fix missing virtqueue lock protection
2022-03-24 12:46 [PATCH v2 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
@ 2022-03-24 12:46 ` Maxime Coquelin
2022-03-28 8:07 ` David Marchand
2022-03-24 12:46 ` [PATCH v2 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-24 12:46 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin, stable
This patch ensures virtqueue metadata are not being
modified while rte_vhost_vring_call() is executed.
Fixes: 6c299bb7322f ("vhost: introduce vring call API")
Cc: stable@dpdk.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index bc88148347..2f96a28dac 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -1291,11 +1291,15 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
if (!vq)
return -1;
+ rte_spinlock_lock(&vq->access_lock);
+
if (vq_is_packed(dev))
vhost_vring_call_packed(dev, vq);
else
vhost_vring_call_split(dev, vq);
+ rte_spinlock_unlock(&vq->access_lock);
+
return 0;
}
--
2.35.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v2 1/5] vhost: fix missing virtqueue lock protection
2022-03-24 12:46 ` [PATCH v2 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
@ 2022-03-28 8:07 ` David Marchand
2022-03-28 14:59 ` Maxime Coquelin
0 siblings, 1 reply; 12+ messages in thread
From: David Marchand @ 2022-03-28 8:07 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: dev, Xia, Chenbo, Ilya Maximets, dpdk stable
On Thu, Mar 24, 2022 at 1:46 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch ensures virtqueue metadata are not being
> modified while rte_vhost_vring_call() is executed.
>
> Fixes: 6c299bb7322f ("vhost: introduce vring call API")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/vhost.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index bc88148347..2f96a28dac 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -1291,11 +1291,15 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
> if (!vq)
> return -1;
>
> + rte_spinlock_lock(&vq->access_lock);
> +
> if (vq_is_packed(dev))
> vhost_vring_call_packed(dev, vq);
> else
> vhost_vring_call_split(dev, vq);
>
> + rte_spinlock_unlock(&vq->access_lock);
> +
> return 0;
> }
>
Reviewed-by: David Marchand <david.marchand@redhat.com>
You can probably merge this fix quickly, regardless of the rest of the
stats series.
It would help me with a series of mine that annotates vhost locks.
--
David Marchand
* Re: [PATCH v2 1/5] vhost: fix missing virtqueue lock protection
2022-03-28 8:07 ` David Marchand
@ 2022-03-28 14:59 ` Maxime Coquelin
0 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-28 14:59 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Xia, Chenbo, Ilya Maximets, dpdk stable
On 3/28/22 10:07, David Marchand wrote:
> On Thu, Mar 24, 2022 at 1:46 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch ensures virtqueue metadata are not being
>> modified while rte_vhost_vring_call() is executed.
>>
>> Fixes: 6c299bb7322f ("vhost: introduce vring call API")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> lib/vhost/vhost.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
>> index bc88148347..2f96a28dac 100644
>> --- a/lib/vhost/vhost.c
>> +++ b/lib/vhost/vhost.c
>> @@ -1291,11 +1291,15 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>> if (!vq)
>> return -1;
>>
>> + rte_spinlock_lock(&vq->access_lock);
>> +
>> if (vq_is_packed(dev))
>> vhost_vring_call_packed(dev, vq);
>> else
>> vhost_vring_call_split(dev, vq);
>>
>> + rte_spinlock_unlock(&vq->access_lock);
>> +
>> return 0;
>> }
>>
>
> Reviewed-by: David Marchand <david.marchand@redhat.com>
>
> You can probably merge this fix quickly, regardless of the rest of the
> stats series.
> It would help me with a series of mine that annotates vhost locks.
>
>
Thanks for the review.
Applied this single patch to next-virtio tree.
Maxime
* [PATCH v2 2/5] vhost: add per-virtqueue statistics support
2022-03-24 12:46 [PATCH v2 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-03-24 12:46 ` [PATCH v2 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
@ 2022-03-24 12:46 ` Maxime Coquelin
2022-03-24 12:46 ` [PATCH v2 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-24 12:46 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch introduces new APIs for the application
to query and reset per-virtqueue statistics. The
patch also introduces generic counters.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/prog_guide/vhost_lib.rst | 24 ++++++
lib/vhost/rte_vhost.h | 99 +++++++++++++++++++++++++
lib/vhost/socket.c | 4 +-
lib/vhost/version.map | 5 ++
lib/vhost/vhost.c | 109 +++++++++++++++++++++++++++-
lib/vhost/vhost.h | 18 ++++-
lib/vhost/virtio_net.c | 53 ++++++++++++++
7 files changed, 308 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 886f8f5e72..8cd1c3ddc7 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -130,6 +130,15 @@ The following is an overview of some key Vhost API functions:
It is disabled by default.
+ - ``RTE_VHOST_USER_NET_STATS_ENABLE``
+
+ Per-virtqueue statistics collection will be enabled when this flag is set.
+ When enabled, the application may use rte_vhost_vring_stats_get_names() and
+ rte_vhost_vring_stats_get() to collect statistics, and
+ rte_vhost_vring_stats_reset() to reset them.
+
+ It is disabled by default.
+
* ``rte_vhost_driver_set_features(path, features)``
This function sets the feature bits the vhost-user driver supports. The
@@ -276,6 +285,21 @@ The following is an overview of some key Vhost API functions:
Clear inflight packets which are submitted to DMA engine in vhost async data
path. Completed packets are returned to applications through ``pkts``.
+* ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)``
+
+ This function returns the names of the queue statistics. It requires
+ statistics collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_get(int vid, uint16_t queue_id, struct rte_vhost_stat *stats, unsigned int n)``
+
+ This function returns the queue statistics. It requires statistics
+ collection to be enabled at registration time.
+
+* ``rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)``
+
+ This function resets the queue statistics. It requires statistics
+ collection to be enabled at registration time.
+
Vhost-user Implementations
--------------------------
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index c733f857c6..65cfd90bae 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -39,6 +39,7 @@ extern "C" {
#define RTE_VHOST_USER_LINEARBUF_SUPPORT (1ULL << 6)
#define RTE_VHOST_USER_ASYNC_COPY (1ULL << 7)
#define RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS (1ULL << 8)
+#define RTE_VHOST_USER_NET_STATS_ENABLE (1ULL << 9)
/* Features. */
#ifndef VIRTIO_NET_F_GUEST_ANNOUNCE
@@ -321,6 +322,32 @@ struct rte_vhost_power_monitor_cond {
uint8_t match;
};
+/** Maximum name length for the statistics counters */
+#define RTE_VHOST_STATS_NAME_SIZE 64
+
+/**
+ * Vhost virtqueue statistics structure
+ *
+ * This structure is used by rte_vhost_vring_stats_get() to provide
+ * virtqueue statistics to the calling application.
+ * It maps a name ID, corresponding to an index in the array returned
+ * by rte_vhost_vring_stats_get_names(), to a statistic value.
+ */
+struct rte_vhost_stat {
+ uint64_t id; /**< The index in xstats name array. */
+ uint64_t value; /**< The statistic counter value. */
+};
+
+/**
+ * Vhost virtqueue statistic name element
+ *
+ * This structure is used by rte_vhost_vring_stats_get_names() to
+ * provide virtqueue statistics names to the calling application.
+ */
+struct rte_vhost_stat_name {
+ char name[RTE_VHOST_STATS_NAME_SIZE]; /**< The statistic name. */
+};
+
/**
* Convert guest physical address to host virtual address
*
@@ -1063,6 +1090,78 @@ __rte_experimental
int
rte_vhost_slave_config_change(int vid, bool need_reply);
+/**
+ * Retrieve names of statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that 'stat_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @param stats_names
+ * array of at least size elements to be filled.
+ * If set to NULL, the function returns the required number of elements.
+ * @param size
+ * The number of elements in stats_names array.
+ * @return
+ * - Success if greater than 0 and less than or equal to *size*. The return
+ * value indicates the number of elements filled in the *names* array.
+ * - Failure if greater than *size*. The return value indicates the number of
+ * elements in the *names* array that should be given to succeed.
+ * - Failure if lower than 0. The device ID or queue ID is invalid, or
+ * statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+ struct rte_vhost_stat_name *name, unsigned int size);
+
+/**
+ * Retrieve statistics of a Vhost virtqueue.
+ *
+ * There is an assumption that 'stat_names' and 'stats' arrays are matched
+ * by array index: stats_names[i].name => stats[i].value
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @param stats
+ * A pointer to a table of structure of type rte_vhost_stat to be filled with
+ * virtqueue statistics ids and values.
+ * @param n
+ * The number of elements in stats array.
+ * @return
+ * - Success if greater than 0 and less than or equal to *n*. The return value
+ * indicates the number of elements filled in the *stats* array.
+ * - Failure if greater than *n*. The return value indicates the number of
+ * elements in the *stats* array that should be given to succeed.
+ * - Failure if lower than 0. The device ID or queue ID is invalid, or
+ * statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+ struct rte_vhost_stat *stats, unsigned int n);
+
+/**
+ * Reset statistics of a Vhost virtqueue.
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * vhost queue index
+ * @return
+ * - Success if 0. Statistics have been reset.
+ * - Failure if lower than 0. The device ID or queue ID is invalid, or
+ * statistics collection is not enabled.
+ */
+__rte_experimental
+int
+rte_vhost_vring_stats_reset(int vid, uint16_t queue_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index b304339de9..baf6c338d3 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -42,6 +42,7 @@ struct vhost_user_socket {
bool linearbuf;
bool async_copy;
bool net_compliant_ol_flags;
+ bool stats_enabled;
/*
* The "supported_features" indicates the feature bits the
@@ -227,7 +228,7 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
vhost_set_ifname(vid, vsocket->path, size);
vhost_setup_virtio_net(vid, vsocket->use_builtin_virtio_net,
- vsocket->net_compliant_ol_flags);
+ vsocket->net_compliant_ol_flags, vsocket->stats_enabled);
vhost_attach_vdpa_device(vid, vsocket->vdpa_dev);
@@ -863,6 +864,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
vsocket->async_copy = flags & RTE_VHOST_USER_ASYNC_COPY;
vsocket->net_compliant_ol_flags = flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;
+ vsocket->stats_enabled = flags & RTE_VHOST_USER_NET_STATS_ENABLE;
if (vsocket->async_copy &&
(flags & (RTE_VHOST_USER_IOMMU_SUPPORT |
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 0a66c5840c..e33a215a58 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -87,6 +87,11 @@ EXPERIMENTAL {
# added in 22.03
rte_vhost_async_dma_configure;
+
+ # added in 22.07
+ rte_vhost_vring_stats_get_names;
+ rte_vhost_vring_stats_get;
+ rte_vhost_vring_stats_reset;
};
INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 2f96a28dac..80ed024019 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -24,6 +24,28 @@
struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+struct vhost_vq_stats_name_off {
+ char name[RTE_VHOST_STATS_NAME_SIZE];
+ unsigned int offset;
+};
+
+static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
+ {"good_packets", offsetof(struct vhost_virtqueue, stats.packets)},
+ {"good_bytes", offsetof(struct vhost_virtqueue, stats.bytes)},
+ {"multicast_packets", offsetof(struct vhost_virtqueue, stats.multicast)},
+ {"broadcast_packets", offsetof(struct vhost_virtqueue, stats.broadcast)},
+ {"undersize_packets", offsetof(struct vhost_virtqueue, stats.size_bins[0])},
+ {"size_64_packets", offsetof(struct vhost_virtqueue, stats.size_bins[1])},
+ {"size_65_127_packets", offsetof(struct vhost_virtqueue, stats.size_bins[2])},
+ {"size_128_255_packets", offsetof(struct vhost_virtqueue, stats.size_bins[3])},
+ {"size_256_511_packets", offsetof(struct vhost_virtqueue, stats.size_bins[4])},
+ {"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
+ {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
+ {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+};
+
+#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
+
/* Called with iotlb_lock read-locked */
uint64_t
__vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
@@ -755,7 +777,7 @@ vhost_set_ifname(int vid, const char *if_name, unsigned int if_len)
}
void
-vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
+vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats_enabled)
{
struct virtio_net *dev = get_device(vid);
@@ -770,6 +792,10 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags)
dev->flags |= VIRTIO_DEV_LEGACY_OL_FLAGS;
else
dev->flags &= ~VIRTIO_DEV_LEGACY_OL_FLAGS;
+ if (stats_enabled)
+ dev->flags |= VIRTIO_DEV_STATS_ENABLED;
+ else
+ dev->flags &= ~VIRTIO_DEV_STATS_ENABLED;
}
void
@@ -1945,5 +1971,86 @@ rte_vhost_get_monitor_addr(int vid, uint16_t queue_id,
return 0;
}
+
+int
+rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id,
+ struct rte_vhost_stat_name *name, unsigned int size)
+{
+ struct virtio_net *dev = get_device(vid);
+ unsigned int i;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return -1;
+
+ if (name == NULL || size < VHOST_NB_VQ_STATS)
+ return VHOST_NB_VQ_STATS;
+
+ for (i = 0; i < VHOST_NB_VQ_STATS; i++)
+ snprintf(name[i].name, sizeof(name[i].name), "%s_q%u_%s",
+ (queue_id & 1) ? "rx" : "tx",
+ queue_id / 2, vhost_vq_stat_strings[i].name);
+
+ return VHOST_NB_VQ_STATS;
+}
+
+int
+rte_vhost_vring_stats_get(int vid, uint16_t queue_id,
+ struct rte_vhost_stat *stats, unsigned int n)
+{
+ struct virtio_net *dev = get_device(vid);
+ struct vhost_virtqueue *vq;
+ unsigned int i;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return -1;
+
+ if (stats == NULL || n < VHOST_NB_VQ_STATS)
+ return VHOST_NB_VQ_STATS;
+
+ vq = dev->virtqueue[queue_id];
+
+ rte_spinlock_lock(&vq->access_lock);
+ for (i = 0; i < VHOST_NB_VQ_STATS; i++) {
+ stats[i].value =
+ *(uint64_t *)(((char *)vq) + vhost_vq_stat_strings[i].offset);
+ stats[i].id = i;
+ }
+ rte_spinlock_unlock(&vq->access_lock);
+
+ return VHOST_NB_VQ_STATS;
+}
+
+int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
+{
+ struct virtio_net *dev = get_device(vid);
+ struct vhost_virtqueue *vq;
+
+ if (dev == NULL)
+ return -1;
+
+ if (queue_id >= dev->nr_vring)
+ return -1;
+
+ vq = dev->virtqueue[queue_id];
+
+ rte_spinlock_lock(&vq->access_lock);
+ memset(&vq->stats, 0, sizeof(vq->stats));
+ rte_spinlock_unlock(&vq->access_lock);
+
+ return 0;
+}
+
RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index a9edc271aa..01b97011aa 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -37,6 +37,8 @@
#define VIRTIO_DEV_FEATURES_FAILED ((uint32_t)1 << 4)
/* Used to indicate that the virtio_net tx code should fill TX ol_flags */
#define VIRTIO_DEV_LEGACY_OL_FLAGS ((uint32_t)1 << 5)
+/* Used to indicate the application has requested statistics collection */
+#define VIRTIO_DEV_STATS_ENABLED ((uint32_t)1 << 6)
/* Backend value set by guest. */
#define VIRTIO_DEV_STOPPED -1
@@ -121,6 +123,18 @@ struct vring_used_elem_packed {
uint32_t count;
};
+/**
+ * Virtqueue statistics
+ */
+struct virtqueue_stats {
+ uint64_t packets;
+ uint64_t bytes;
+ uint64_t multicast;
+ uint64_t broadcast;
+ /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
+ uint64_t size_bins[8];
+};
+
/**
* iovec
*/
@@ -305,6 +319,7 @@ struct vhost_virtqueue {
#define VIRTIO_UNINITIALIZED_NOTIF (-1)
struct vhost_vring_addr ring_addrs;
+ struct virtqueue_stats stats;
} __rte_cache_aligned;
/* Virtio device status as per Virtio specification */
@@ -780,7 +795,7 @@ int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
void vhost_attach_vdpa_device(int vid, struct rte_vdpa_device *dev);
void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
-void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags);
+void vhost_setup_virtio_net(int vid, bool enable, bool legacy_ol_flags, bool stats_enabled);
void vhost_enable_extbuf(int vid);
void vhost_enable_linearbuf(int vid);
int vhost_enable_guest_notification(struct virtio_net *dev,
@@ -957,5 +972,4 @@ mbuf_is_consumed(struct rte_mbuf *m)
return true;
}
-
#endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 5f432b0d77..96b894dffe 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -47,6 +47,54 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
}
+/*
+ * This function must be called with virtqueue's access_lock taken.
+ */
+static inline void
+vhost_queue_stats_update(struct virtio_net *dev, struct vhost_virtqueue *vq,
+ struct rte_mbuf **pkts, uint16_t count)
+{
+ struct virtqueue_stats *stats = &vq->stats;
+ int i;
+
+ if (!(dev->flags & VIRTIO_DEV_STATS_ENABLED))
+ return;
+
+ for (i = 0; i < count; i++) {
+ struct rte_ether_addr *ea;
+ struct rte_mbuf *pkt = pkts[i];
+ uint32_t pkt_len = rte_pktmbuf_pkt_len(pkt);
+
+ stats->packets++;
+ stats->bytes += pkt_len;
+
+ if (pkt_len == 64) {
+ stats->size_bins[1]++;
+ } else if (pkt_len > 64 && pkt_len < 1024) {
+ uint32_t bin;
+
+ /* count zeros, and offset into correct bin */
+ bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
+ stats->size_bins[bin]++;
+ } else {
+ if (pkt_len < 64)
+ stats->size_bins[0]++;
+ else if (pkt_len < 1519)
+ stats->size_bins[6]++;
+ else
+ stats->size_bins[7]++;
+ }
+
+ ea = rte_pktmbuf_mtod(pkt, struct rte_ether_addr *);
+ if (rte_is_multicast_ether_addr(ea)) {
+ if (rte_is_broadcast_ether_addr(ea))
+ stats->broadcast++;
+ else
+ stats->multicast++;
+ }
+ }
+}
+
static __rte_always_inline int64_t
vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq,
int16_t dma_id, uint16_t vchan_id, uint16_t flag_idx,
@@ -1509,6 +1557,8 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
else
nb_tx = virtio_dev_rx_split(dev, vq, pkts, count);
+ vhost_queue_stats_update(dev, vq, pkts, nb_tx);
+
out:
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_rd_unlock(vq);
@@ -3113,6 +3163,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
* learning table will get updated first.
*/
pkts[0] = rarp_mbuf;
+ vhost_queue_stats_update(dev, vq, pkts, 1);
pkts++;
count -= 1;
}
@@ -3129,6 +3180,8 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
count = virtio_dev_tx_split_compliant(dev, vq, mbuf_pool, pkts, count);
}
+ vhost_queue_stats_update(dev, vq, pkts, count);
+
out:
if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
vhost_user_iotlb_rd_unlock(vq);
--
2.35.1
* [PATCH v2 3/5] net/vhost: move to Vhost library stats API
2022-03-24 12:46 [PATCH v2 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
2022-03-24 12:46 ` [PATCH v2 1/5] vhost: fix missing virtqueue lock protection Maxime Coquelin
2022-03-24 12:46 ` [PATCH v2 2/5] vhost: add per-virtqueue statistics support Maxime Coquelin
@ 2022-03-24 12:46 ` Maxime Coquelin
2022-04-25 12:06 ` Xia, Chenbo
2022-03-24 12:46 ` [PATCH v2 4/5] vhost: add statistics for guest notifications Maxime Coquelin
2022-03-24 12:46 ` [PATCH v2 5/5] vhost: add statistics for IOTLB Maxime Coquelin
4 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-24 12:46 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
Now that we have Vhost statistics APIs, this patch replaces
the Vhost PMD extended statistics implementation with calls
to the new API. It will enable getting more statistics for
counters that cannot be implemented at the PMD level.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
1 file changed, 120 insertions(+), 228 deletions(-)
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 070f0e6dfd..bac1c0acba 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
}
};
-enum vhost_xstats_pkts {
- VHOST_UNDERSIZE_PKT = 0,
- VHOST_64_PKT,
- VHOST_65_TO_127_PKT,
- VHOST_128_TO_255_PKT,
- VHOST_256_TO_511_PKT,
- VHOST_512_TO_1023_PKT,
- VHOST_1024_TO_1522_PKT,
- VHOST_1523_TO_MAX_PKT,
- VHOST_BROADCAST_PKT,
- VHOST_MULTICAST_PKT,
- VHOST_UNICAST_PKT,
- VHOST_PKT,
- VHOST_BYTE,
- VHOST_MISSED_PKT,
- VHOST_ERRORS_PKT,
- VHOST_ERRORS_FRAGMENTED,
- VHOST_ERRORS_JABBER,
- VHOST_UNKNOWN_PROTOCOL,
- VHOST_XSTATS_MAX,
-};
-
struct vhost_stats {
uint64_t pkts;
uint64_t bytes;
uint64_t missed_pkts;
- uint64_t xstats[VHOST_XSTATS_MAX];
};
struct vhost_queue {
@@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
-#define VHOST_XSTATS_NAME_SIZE 64
-
-struct vhost_xstats_name_off {
- char name[VHOST_XSTATS_NAME_SIZE];
- uint64_t offset;
-};
-
-/* [rx]_is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
- {"good_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
- {"total_bytes",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
- {"missed_pkts",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
- {"broadcast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
- {"multicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
- {"unicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
- {"undersize_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
- {"size_64_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
- {"size_65_to_127_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
- {"size_128_to_255_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
- {"size_256_to_511_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
- {"size_512_to_1023_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
- {"size_1024_to_1522_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
- {"size_1523_to_max_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
- {"errors_with_bad_CRC",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
- {"fragmented_errors",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
- {"jabber_errors",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
- {"unknown_protos_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
-};
-
-/* [tx]_ is prepended to the name string here */
-static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
- {"good_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
- {"total_bytes",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
- {"missed_pkts",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
- {"broadcast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
- {"multicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
- {"unicast_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
- {"undersize_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
- {"size_64_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
- {"size_65_to_127_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
- {"size_128_to_255_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
- {"size_256_to_511_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
- {"size_512_to_1023_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
- {"size_1024_to_1522_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
- {"size_1523_to_max_packets",
- offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
- {"errors_with_bad_CRC",
- offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
-};
-
-#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
- sizeof(vhost_rxport_stat_strings[0]))
-
-#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
- sizeof(vhost_txport_stat_strings[0]))
-
static int
vhost_dev_xstats_reset(struct rte_eth_dev *dev)
{
- struct vhost_queue *vq = NULL;
- unsigned int i = 0;
+ struct vhost_queue *vq;
+ int ret, i;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
vq = dev->data->rx_queues[i];
- if (!vq)
- continue;
- memset(&vq->stats, 0, sizeof(vq->stats));
+ ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+ if (ret < 0)
+ return ret;
}
+
for (i = 0; i < dev->data->nb_tx_queues; i++) {
vq = dev->data->tx_queues[i];
- if (!vq)
- continue;
- memset(&vq->stats, 0, sizeof(vq->stats));
+ ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
+ if (ret < 0)
+ return ret;
}
return 0;
}
static int
-vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
+vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names,
- unsigned int limit __rte_unused)
+ unsigned int limit)
{
- unsigned int t = 0;
- int count = 0;
- int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
+ struct rte_vhost_stat_name *name;
+ struct vhost_queue *vq;
+ int ret, i, count = 0, nstats = 0;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
- if (!xstats_names)
+ nstats += ret;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
+ }
+
+ if (!xstats_names || limit < (unsigned int)nstats)
return nstats;
- for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "rx_%s", vhost_rxport_stat_strings[t].name);
- count++;
- }
- for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "tx_%s", vhost_txport_stat_strings[t].name);
- count++;
+
+ name = calloc(nstats, sizeof(*name));
+ if (!name)
+ return -1;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+ name + count, nstats - count);
+ if (ret < 0) {
+ free(name);
+ return ret;
+ }
+
+ count += ret;
}
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
+ name + count, nstats - count);
+ if (ret < 0) {
+ free(name);
+ return ret;
+ }
+
+ count += ret;
+ }
+
+ for (i = 0; i < count; i++)
+ strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
+
+ free(name);
+
return count;
}
@@ -279,86 +210,63 @@ static int
vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i;
- unsigned int t;
- unsigned int count = 0;
- struct vhost_queue *vq = NULL;
- unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
-
- if (n < nxstats)
- return nxstats;
-
- for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
- xstats[count].value = 0;
- for (i = 0; i < dev->data->nb_rx_queues; i++) {
- vq = dev->data->rx_queues[i];
- if (!vq)
- continue;
- xstats[count].value +=
- *(uint64_t *)(((char *)vq)
- + vhost_rxport_stat_strings[t].offset);
- }
- xstats[count].id = count;
- count++;
+ struct rte_vhost_stat *stats;
+ struct vhost_queue *vq;
+ int ret, i, count = 0, nstats = 0;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
}
- for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
- xstats[count].value = 0;
- for (i = 0; i < dev->data->nb_tx_queues; i++) {
- vq = dev->data->tx_queues[i];
- if (!vq)
- continue;
- xstats[count].value +=
- *(uint64_t *)(((char *)vq)
- + vhost_txport_stat_strings[t].offset);
- }
- xstats[count].id = count;
- count++;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
+ if (ret < 0)
+ return ret;
+
+ nstats += ret;
}
- return count;
-}
-static inline void
-vhost_count_xcast_packets(struct vhost_queue *vq,
- struct rte_mbuf *mbuf)
-{
- struct rte_ether_addr *ea = NULL;
- struct vhost_stats *pstats = &vq->stats;
-
- ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
- if (rte_is_multicast_ether_addr(ea)) {
- if (rte_is_broadcast_ether_addr(ea))
- pstats->xstats[VHOST_BROADCAST_PKT]++;
- else
- pstats->xstats[VHOST_MULTICAST_PKT]++;
- } else {
- pstats->xstats[VHOST_UNICAST_PKT]++;
+ if (!xstats || n < (unsigned int)nstats)
+ return nstats;
+
+ stats = calloc(nstats, sizeof(*stats));
+ if (!stats)
+ return -1;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ vq = dev->data->rx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+ stats + count, nstats - count);
+ if (ret < 0)
+ return ret;
+
+ count += ret;
}
-}
-static __rte_always_inline void
-vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
-{
- uint32_t pkt_len = 0;
- uint64_t index;
- struct vhost_stats *pstats = &vq->stats;
-
- pstats->xstats[VHOST_PKT]++;
- pkt_len = buf->pkt_len;
- if (pkt_len == 64) {
- pstats->xstats[VHOST_64_PKT]++;
- } else if (pkt_len > 64 && pkt_len < 1024) {
- index = (sizeof(pkt_len) * 8)
- - __builtin_clz(pkt_len) - 5;
- pstats->xstats[index]++;
- } else {
- if (pkt_len < 64)
- pstats->xstats[VHOST_UNDERSIZE_PKT]++;
- else if (pkt_len <= 1522)
- pstats->xstats[VHOST_1024_TO_1522_PKT]++;
- else if (pkt_len > 1522)
- pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
- }
- vhost_count_xcast_packets(vq, buf);
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ vq = dev->data->tx_queues[i];
+ ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
+ stats + count, nstats - count);
+ if (ret < 0)
+ return ret;
+
+ count += ret;
+ }
+
+ for (i = 0; i < count; i++) {
+ xstats[i].id = stats[i].id;
+ xstats[i].value = stats[i].value;
+ }
+
+ free(stats);
+
+ return nstats;
}
static uint16_t
@@ -402,9 +310,6 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
rte_vlan_strip(bufs[i]);
r->stats.bytes += bufs[i]->pkt_len;
- r->stats.xstats[VHOST_BYTE] += bufs[i]->pkt_len;
-
- vhost_update_single_packet_xstats(r, bufs[i]);
}
out:
@@ -461,10 +366,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
break;
}
- for (i = 0; likely(i < nb_tx); i++) {
+ for (i = 0; likely(i < nb_tx); i++)
nb_bytes += bufs[i]->pkt_len;
- vhost_update_single_packet_xstats(r, bufs[i]);
- }
nb_missed = nb_bufs - nb_tx;
@@ -472,17 +375,6 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
r->stats.bytes += nb_bytes;
r->stats.missed_pkts += nb_missed;
- r->stats.xstats[VHOST_BYTE] += nb_bytes;
- r->stats.xstats[VHOST_MISSED_PKT] += nb_missed;
- r->stats.xstats[VHOST_UNICAST_PKT] += nb_missed;
-
- /* According to RFC2863, ifHCOutUcastPkts, ifHCOutMulticastPkts and
- * ifHCOutBroadcastPkts counters are increased when packets are not
- * transmitted successfully.
- */
- for (i = nb_tx; i < nb_bufs; i++)
- vhost_count_xcast_packets(r, bufs[i]);
-
for (i = 0; likely(i < nb_tx); i++)
rte_pktmbuf_free(bufs[i]);
out:
@@ -1555,7 +1447,7 @@ rte_pmd_vhost_probe(struct rte_vdev_device *dev)
int ret = 0;
char *iface_name;
uint16_t queues;
- uint64_t flags = 0;
+ uint64_t flags = RTE_VHOST_USER_NET_STATS_ENABLE;
uint64_t disable_flags = 0;
int client_mode = 0;
int iommu_support = 0;
--
2.35.1
^ permalink raw reply [flat|nested] 12+ messages in thread
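The count-then-fill convention the PMD adopts in the patch above (call the stats getter with a NULL buffer to learn the entry count, then call again to fill) can be sketched outside DPDK with a mocked per-queue getter. All names below are illustrative stand-ins, not the real rte_vhost API:

```c
#include <assert.h>
#include <stdlib.h>

/* Mocked stand-in for rte_vhost_stat (illustrative only). */
struct stat_entry { unsigned id; unsigned long value; };

#define NB_QUEUES 2
#define STATS_PER_QUEUE 3

/* Mock of rte_vhost_vring_stats_get(): with buf == NULL it only reports
 * how many entries the queue exposes; otherwise it fills up to 'n'. */
static int
queue_stats_get(int queue, struct stat_entry *buf, unsigned n)
{
	unsigned i;

	if (buf == NULL)
		return STATS_PER_QUEUE;
	if (n < STATS_PER_QUEUE)
		return -1;
	for (i = 0; i < STATS_PER_QUEUE; i++) {
		buf[i].id = queue * STATS_PER_QUEUE + i;
		buf[i].value = 100 * queue + i;
	}
	return STATS_PER_QUEUE;
}

/* The count-then-fill pattern from the patch: a first pass sizes the
 * result, a second pass fills it, offset by what is already collected. */
static int
xstats_get(struct stat_entry *out, unsigned n)
{
	int ret, q, count = 0, nstats = 0;

	for (q = 0; q < NB_QUEUES; q++) {
		ret = queue_stats_get(q, NULL, 0);
		if (ret < 0)
			return ret;
		nstats += ret;
	}

	if (out == NULL || n < (unsigned)nstats)
		return nstats;	/* caller must retry with a bigger buffer */

	for (q = 0; q < NB_QUEUES; q++) {
		ret = queue_stats_get(q, out + count, nstats - count);
		if (ret < 0)
			return ret;
		count += ret;
	}
	return count;
}
```

A caller that passes a too-small buffer just gets the required count back and can retry, which is how the reworked vhost_dev_xstats_get() reports its size requirement to ethdev.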
* RE: [PATCH v2 3/5] net/vhost: move to Vhost library stats API
2022-03-24 12:46 ` [PATCH v2 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
@ 2022-04-25 12:06 ` Xia, Chenbo
0 siblings, 0 replies; 12+ messages in thread
From: Xia, Chenbo @ 2022-04-25 12:06 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, March 24, 2022 8:47 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v2 3/5] net/vhost: move to Vhost library stats API
>
> Now that we have Vhost statistics APIs, this patch replaces
> Vhost PMD extented statistics implementation with calls
extented -> extended
> to the new API. It will enable getting more statistics for
> counters that cannot be implmented at the PMD level.
implmented -> implemented
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> drivers/net/vhost/rte_eth_vhost.c | 348 +++++++++++-------------------
> 1 file changed, 120 insertions(+), 228 deletions(-)
>
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 070f0e6dfd..bac1c0acba 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -59,33 +59,10 @@ static struct rte_ether_addr base_eth_addr = {
> }
> };
>
> -enum vhost_xstats_pkts {
> - VHOST_UNDERSIZE_PKT = 0,
> - VHOST_64_PKT,
> - VHOST_65_TO_127_PKT,
> - VHOST_128_TO_255_PKT,
> - VHOST_256_TO_511_PKT,
> - VHOST_512_TO_1023_PKT,
> - VHOST_1024_TO_1522_PKT,
> - VHOST_1523_TO_MAX_PKT,
> - VHOST_BROADCAST_PKT,
> - VHOST_MULTICAST_PKT,
> - VHOST_UNICAST_PKT,
> - VHOST_PKT,
> - VHOST_BYTE,
> - VHOST_MISSED_PKT,
> - VHOST_ERRORS_PKT,
> - VHOST_ERRORS_FRAGMENTED,
> - VHOST_ERRORS_JABBER,
> - VHOST_UNKNOWN_PROTOCOL,
> - VHOST_XSTATS_MAX,
> -};
> -
> struct vhost_stats {
> uint64_t pkts;
> uint64_t bytes;
> uint64_t missed_pkts;
> - uint64_t xstats[VHOST_XSTATS_MAX];
> };
>
> struct vhost_queue {
> @@ -140,138 +117,92 @@ struct rte_vhost_vring_state {
>
> static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
>
> -#define VHOST_XSTATS_NAME_SIZE 64
> -
> -struct vhost_xstats_name_off {
> - char name[VHOST_XSTATS_NAME_SIZE];
> - uint64_t offset;
> -};
> -
> -/* [rx]_is prepended to the name string here */
> -static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
> - {"good_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
> - {"total_bytes",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
> - {"missed_pkts",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
> - {"broadcast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> - {"multicast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> - {"unicast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
> - {"undersize_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
> - {"size_64_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> - {"size_65_to_127_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> - {"size_128_to_255_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> - {"size_256_to_511_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> - {"size_512_to_1023_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> - {"size_1024_to_1522_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> - {"size_1523_to_max_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> - {"errors_with_bad_CRC",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> - {"fragmented_errors",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
> - {"jabber_errors",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
> - {"unknown_protos_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
> -};
> -
> -/* [tx]_ is prepended to the name string here */
> -static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
> - {"good_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
> - {"total_bytes",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
> - {"missed_pkts",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
> - {"broadcast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> - {"multicast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> - {"unicast_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_UNICAST_PKT])},
> - {"undersize_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
> - {"size_64_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> - {"size_65_to_127_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> - {"size_128_to_255_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> - {"size_256_to_511_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> - {"size_512_to_1023_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> - {"size_1024_to_1522_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> - {"size_1523_to_max_packets",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> - {"errors_with_bad_CRC",
> - offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> -};
> -
> -#define VHOST_NB_XSTATS_RXPORT (sizeof(vhost_rxport_stat_strings) / \
> - sizeof(vhost_rxport_stat_strings[0]))
> -
> -#define VHOST_NB_XSTATS_TXPORT (sizeof(vhost_txport_stat_strings) / \
> - sizeof(vhost_txport_stat_strings[0]))
> -
> static int
> vhost_dev_xstats_reset(struct rte_eth_dev *dev)
> {
> - struct vhost_queue *vq = NULL;
> - unsigned int i = 0;
> + struct vhost_queue *vq;
> + int ret, i;
>
> for (i = 0; i < dev->data->nb_rx_queues; i++) {
> vq = dev->data->rx_queues[i];
> - if (!vq)
> - continue;
> - memset(&vq->stats, 0, sizeof(vq->stats));
> + ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
> + if (ret < 0)
> + return ret;
> }
> +
> for (i = 0; i < dev->data->nb_tx_queues; i++) {
> vq = dev->data->tx_queues[i];
> - if (!vq)
> - continue;
> - memset(&vq->stats, 0, sizeof(vq->stats));
> + ret = rte_vhost_vring_stats_reset(vq->vid, vq->virtqueue_id);
> + if (ret < 0)
> + return ret;
> }
>
> return 0;
> }
>
> static int
> -vhost_dev_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
> +vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
> struct rte_eth_xstat_name *xstats_names,
> - unsigned int limit __rte_unused)
> + unsigned int limit)
> {
> - unsigned int t = 0;
> - int count = 0;
> - int nstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
> + struct rte_vhost_stat_name *name;
> + struct vhost_queue *vq;
> + int ret, i, count = 0, nstats = 0;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> + vq = dev->data->rx_queues[i];
> + ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
> + if (ret < 0)
> + return ret;
>
> - if (!xstats_names)
> + nstats += ret;
> + }
> +
> + for (i = 0; i < dev->data->nb_tx_queues; i++) {
> + vq = dev->data->tx_queues[i];
> + ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id, NULL, 0);
> + if (ret < 0)
> + return ret;
> +
> + nstats += ret;
> + }
> +
> + if (!xstats_names || limit < (unsigned int)nstats)
> return nstats;
> - for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
> - snprintf(xstats_names[count].name,
> - sizeof(xstats_names[count].name),
> - "rx_%s", vhost_rxport_stat_strings[t].name);
> - count++;
> - }
> - for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
> - snprintf(xstats_names[count].name,
> - sizeof(xstats_names[count].name),
> - "tx_%s", vhost_txport_stat_strings[t].name);
> - count++;
> +
> + name = calloc(nstats, sizeof(*name));
> + if (!name)
> + return -1;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> + vq = dev->data->rx_queues[i];
> + ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
> + name + count, nstats - count);
> + if (ret < 0) {
> + free(name);
> + return ret;
> + }
> +
> + count += ret;
> }
> +
> + for (i = 0; i < dev->data->nb_tx_queues; i++) {
> + vq = dev->data->tx_queues[i];
> + ret = rte_vhost_vring_stats_get_names(vq->vid, vq->virtqueue_id,
> + name + count, nstats - count);
> + if (ret < 0) {
> + free(name);
> + return ret;
> + }
> +
> + count += ret;
> + }
> +
> + for (i = 0; i < count; i++)
> + strncpy(xstats_names[i].name, name[i].name, RTE_ETH_XSTATS_NAME_SIZE);
> +
> + free(name);
> +
> return count;
> }
>
> @@ -279,86 +210,63 @@ static int
> vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
> unsigned int n)
> {
> - unsigned int i;
> - unsigned int t;
> - unsigned int count = 0;
> - struct vhost_queue *vq = NULL;
> - unsigned int nxstats = VHOST_NB_XSTATS_RXPORT + VHOST_NB_XSTATS_TXPORT;
> -
> - if (n < nxstats)
> - return nxstats;
> -
> - for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
> - xstats[count].value = 0;
> - for (i = 0; i < dev->data->nb_rx_queues; i++) {
> - vq = dev->data->rx_queues[i];
> - if (!vq)
> - continue;
> - xstats[count].value +=
> - *(uint64_t *)(((char *)vq)
> - + vhost_rxport_stat_strings[t].offset);
> - }
> - xstats[count].id = count;
> - count++;
> + struct rte_vhost_stat *stats;
> + struct vhost_queue *vq;
> + int ret, i, count = 0, nstats = 0;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> + vq = dev->data->rx_queues[i];
> + ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
> + if (ret < 0)
> + return ret;
> +
> + nstats += ret;
> }
> - for (t = 0; t < VHOST_NB_XSTATS_TXPORT; t++) {
> - xstats[count].value = 0;
> - for (i = 0; i < dev->data->nb_tx_queues; i++) {
> - vq = dev->data->tx_queues[i];
> - if (!vq)
> - continue;
> - xstats[count].value +=
> - *(uint64_t *)(((char *)vq)
> - + vhost_txport_stat_strings[t].offset);
> - }
> - xstats[count].id = count;
> - count++;
> +
> + for (i = 0; i < dev->data->nb_tx_queues; i++) {
> + vq = dev->data->tx_queues[i];
> + ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id, NULL, 0);
> + if (ret < 0)
> + return ret;
> +
> + nstats += ret;
> }
> - return count;
> -}
>
> -static inline void
> -vhost_count_xcast_packets(struct vhost_queue *vq,
> - struct rte_mbuf *mbuf)
> -{
> - struct rte_ether_addr *ea = NULL;
> - struct vhost_stats *pstats = &vq->stats;
> -
> - ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *);
> - if (rte_is_multicast_ether_addr(ea)) {
> - if (rte_is_broadcast_ether_addr(ea))
> - pstats->xstats[VHOST_BROADCAST_PKT]++;
> - else
> - pstats->xstats[VHOST_MULTICAST_PKT]++;
> - } else {
> - pstats->xstats[VHOST_UNICAST_PKT]++;
> + if (!xstats || n < (unsigned int)nstats)
> + return nstats;
> +
> + stats = calloc(nstats, sizeof(*stats));
> + if (!stats)
> + return -1;
> +
> + for (i = 0; i < dev->data->nb_rx_queues; i++) {
> + vq = dev->data->rx_queues[i];
> + ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
> + stats + count, nstats - count);
> + if (ret < 0)
> + return ret;
Should free stats here.
> +
> + count += ret;
> }
> -}
>
> -static __rte_always_inline void
> -vhost_update_single_packet_xstats(struct vhost_queue *vq, struct rte_mbuf *buf)
> -{
> - uint32_t pkt_len = 0;
> - uint64_t index;
> - struct vhost_stats *pstats = &vq->stats;
> -
> - pstats->xstats[VHOST_PKT]++;
> - pkt_len = buf->pkt_len;
> - if (pkt_len == 64) {
> - pstats->xstats[VHOST_64_PKT]++;
> - } else if (pkt_len > 64 && pkt_len < 1024) {
> - index = (sizeof(pkt_len) * 8)
> - - __builtin_clz(pkt_len) - 5;
> - pstats->xstats[index]++;
> - } else {
> - if (pkt_len < 64)
> - pstats->xstats[VHOST_UNDERSIZE_PKT]++;
> - else if (pkt_len <= 1522)
> - pstats->xstats[VHOST_1024_TO_1522_PKT]++;
> - else if (pkt_len > 1522)
> - pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
> - }
> - vhost_count_xcast_packets(vq, buf);
> + for (i = 0; i < dev->data->nb_tx_queues; i++) {
> + vq = dev->data->tx_queues[i];
> + ret = rte_vhost_vring_stats_get(vq->vid, vq->virtqueue_id,
> + stats + count, nstats - count);
> + if (ret < 0)
> + return ret;
Ditto
Btw, when replying, I noticed I replied to v1 for the 2nd patch... sorry for that...
With above fixed:
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
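The leak Chenbo points out (an early "return ret" after "stats" was allocated) is typically fixed with a single-exit error path. A minimal sketch of that shape, using mocked types rather than the actual vhost calls:

```c
#include <assert.h>
#include <stdlib.h>

struct stat_entry { unsigned long value; };

/* Mock per-queue getter; a negative queue id simulates a failure. */
static int
queue_stats_get(int queue, struct stat_entry *buf, int n)
{
	if (queue < 0 || n < 1)
		return -1;
	buf[0].value = queue;
	return 1;
}

/* Single exit point: every error after the allocation funnels through
 * the 'out' label, so 'stats' is freed on both success and failure. */
static int
collect_stats(const int *queues, int nb_queues, unsigned long *sum)
{
	struct stat_entry *stats;
	int ret, i, count = 0;

	stats = calloc(nb_queues, sizeof(*stats));
	if (stats == NULL)
		return -1;

	for (i = 0; i < nb_queues; i++) {
		ret = queue_stats_get(queues[i], stats + count, nb_queues - count);
		if (ret < 0)
			goto out;	/* no leak: falls through to free() */
		count += ret;
	}

	*sum = 0;
	for (i = 0; i < count; i++)
		*sum += stats[i].value;
	ret = count;
out:
	free(stats);
	return ret;
}
```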
* [PATCH v2 4/5] vhost: add statistics for guest notifications
2022-03-24 12:46 [PATCH v2 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (2 preceding siblings ...)
2022-03-24 12:46 ` [PATCH v2 3/5] net/vhost: move to Vhost library stats API Maxime Coquelin
@ 2022-03-24 12:46 ` Maxime Coquelin
2022-04-25 12:09 ` Xia, Chenbo
2022-03-24 12:46 ` [PATCH v2 5/5] vhost: add statistics for IOTLB Maxime Coquelin
4 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-24 12:46 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch adds a new virtqueue statistic for guest
notifications. It is useful to deduce from the hypervisor side
whether the corresponding guest Virtio device is using the
kernel virtio-net driver or the DPDK Virtio PMD.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 1 +
lib/vhost/vhost.h | 5 +++++
2 files changed, 6 insertions(+)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 80ed024019..58b58fc40e 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -42,6 +42,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"size_512_1023_packets", offsetof(struct vhost_virtqueue, stats.size_bins[5])},
{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
+ {"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 01b97011aa..13c5c2266d 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -133,6 +133,7 @@ struct virtqueue_stats {
uint64_t broadcast;
/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
uint64_t size_bins[8];
+ uint64_t guest_notifications;
};
/**
@@ -871,6 +872,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
(vq->callfd >= 0)) ||
unlikely(!signalled_used_valid)) {
eventfd_write(vq->callfd, (eventfd_t) 1);
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.guest_notifications++;
if (dev->notify_ops->guest_notified)
dev->notify_ops->guest_notified(dev->vid);
}
@@ -879,6 +882,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT)
&& (vq->callfd >= 0)) {
eventfd_write(vq->callfd, (eventfd_t)1);
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.guest_notifications++;
if (dev->notify_ops->guest_notified)
dev->notify_ops->guest_notified(dev->vid);
}
--
2.35.1
* RE: [PATCH v2 4/5] vhost: add statistics for guest notifications
2022-03-24 12:46 ` [PATCH v2 4/5] vhost: add statistics for guest notifications Maxime Coquelin
@ 2022-04-25 12:09 ` Xia, Chenbo
0 siblings, 0 replies; 12+ messages in thread
From: Xia, Chenbo @ 2022-04-25 12:09 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, March 24, 2022 8:47 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v2 4/5] vhost: add statistics for guest notifications
>
> This patch adds a new virtqueue statistic for guest
> notifications. It is useful to deduce from hypervisor side
> whether the corresponding guest Virtio device is using
> Kernel Virtio-net driver or DPDK Virtio PMD.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/vhost.c | 1 +
> lib/vhost/vhost.h | 5 +++++
> 2 files changed, 6 insertions(+)
>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
* [PATCH v2 5/5] vhost: add statistics for IOTLB
2022-03-24 12:46 [PATCH v2 0/5] vhost: introduce per-virtqueue stats API Maxime Coquelin
` (3 preceding siblings ...)
2022-03-24 12:46 ` [PATCH v2 4/5] vhost: add statistics for guest notifications Maxime Coquelin
@ 2022-03-24 12:46 ` Maxime Coquelin
2022-04-25 12:10 ` Xia, Chenbo
4 siblings, 1 reply; 12+ messages in thread
From: Maxime Coquelin @ 2022-03-24 12:46 UTC (permalink / raw)
To: dev, chenbo.xia, david.marchand, i.maximets; +Cc: Maxime Coquelin
This patch adds statistics for IOTLB hits and misses.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 10 +++++++++-
lib/vhost/vhost.h | 3 +++
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 58b58fc40e..7f4fafdcb0 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
{"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
{"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
+ {"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
+ {"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
@@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
tmp_size = *size;
vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
- if (tmp_size == *size)
+ if (tmp_size == *size) {
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.iotlb_hits++;
return vva;
+ }
+
+ if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
+ vq->stats.iotlb_misses++;
iova += tmp_size;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 13c5c2266d..e876fc157b 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -134,6 +134,9 @@ struct virtqueue_stats {
/* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
uint64_t size_bins[8];
uint64_t guest_notifications;
+ uint64_t iotlb_hits;
+ uint64_t iotlb_misses;
+ uint64_t iotlb_errors;
};
/**
--
2.35.1
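The hit/miss accounting the patch adds around vhost_user_iotlb_cache_find() follows a simple guarded-counter shape; a self-contained sketch with a toy cache (all names below are hypothetical, only the pattern mirrors the patch):

```c
#include <assert.h>
#include <stdint.h>

#define STATS_ENABLED 0x1	/* stands in for VIRTIO_DEV_STATS_ENABLED */

struct vq_stats { uint64_t iotlb_hits; uint64_t iotlb_misses; };

struct toy_dev { unsigned flags; struct vq_stats stats; };

/* Toy "cache": addresses below a watermark are resident. */
static int
cache_find(uint64_t iova)
{
	return iova < 0x1000;
}

/* Count a hit or a miss only when stats collection is enabled,
 * mirroring the flag guard around the counters in the patch. */
static int
translate(struct toy_dev *d, uint64_t iova)
{
	if (cache_find(iova)) {
		if (d->flags & STATS_ENABLED)
			d->stats.iotlb_hits++;
		return 1;
	}

	if (d->flags & STATS_ENABLED)
		d->stats.iotlb_misses++;
	return 0;
}
```

When the flag is clear, translation behaves identically but the counters stay at zero, so the fast path pays no bookkeeping cost for users who did not opt in.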
* RE: [PATCH v2 5/5] vhost: add statistics for IOTLB
2022-03-24 12:46 ` [PATCH v2 5/5] vhost: add statistics for IOTLB Maxime Coquelin
@ 2022-04-25 12:10 ` Xia, Chenbo
2022-05-10 14:15 ` Maxime Coquelin
0 siblings, 1 reply; 12+ messages in thread
From: Xia, Chenbo @ 2022-04-25 12:10 UTC (permalink / raw)
To: Maxime Coquelin, dev, david.marchand, i.maximets
Hi Maxime,
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Thursday, March 24, 2022 8:47 PM
> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
> david.marchand@redhat.com; i.maximets@ovn.org
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Subject: [PATCH v2 5/5] vhost: add statistics for IOTLB
>
> This patch adds statistics for IOTLB hits and misses.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
> lib/vhost/vhost.c | 10 +++++++++-
> lib/vhost/vhost.h | 3 +++
> 2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 58b58fc40e..7f4fafdcb0 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
> {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
> {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
> {"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
> + {"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
> + {"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
> };
>
> #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
> @@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
> tmp_size = *size;
>
> vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
> - if (tmp_size == *size)
> + if (tmp_size == *size) {
> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
> + vq->stats.iotlb_hits++;
> return vva;
> + }
> +
> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
> + vq->stats.iotlb_misses++;
>
> iova += tmp_size;
>
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 13c5c2266d..e876fc157b 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -134,6 +134,9 @@ struct virtqueue_stats {
> /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
> uint64_t size_bins[8];
> uint64_t guest_notifications;
> + uint64_t iotlb_hits;
> + uint64_t iotlb_misses;
> + uint64_t iotlb_errors;
This seems defined but never used.
Thanks,
Chenbo
> };
>
> /**
> --
> 2.35.1
* Re: [PATCH v2 5/5] vhost: add statistics for IOTLB
2022-04-25 12:10 ` Xia, Chenbo
@ 2022-05-10 14:15 ` Maxime Coquelin
0 siblings, 0 replies; 12+ messages in thread
From: Maxime Coquelin @ 2022-05-10 14:15 UTC (permalink / raw)
To: Xia, Chenbo, dev, david.marchand, i.maximets
On 4/25/22 14:10, Xia, Chenbo wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Thursday, March 24, 2022 8:47 PM
>> To: dev@dpdk.org; Xia, Chenbo <chenbo.xia@intel.com>;
>> david.marchand@redhat.com; i.maximets@ovn.org
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Subject: [PATCH v2 5/5] vhost: add statistics for IOTLB
>>
>> This patch adds statistics for IOTLB hits and misses.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>> lib/vhost/vhost.c | 10 +++++++++-
>> lib/vhost/vhost.h | 3 +++
>> 2 files changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
>> index 58b58fc40e..7f4fafdcb0 100644
>> --- a/lib/vhost/vhost.c
>> +++ b/lib/vhost/vhost.c
>> @@ -43,6 +43,8 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
>> {"size_1024_1518_packets", offsetof(struct vhost_virtqueue, stats.size_bins[6])},
>> {"size_1519_max_packets", offsetof(struct vhost_virtqueue, stats.size_bins[7])},
>> {"guest_notifications", offsetof(struct vhost_virtqueue, stats.guest_notifications)},
>> + {"iotlb_hits", offsetof(struct vhost_virtqueue, stats.iotlb_hits)},
>> + {"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
>> };
>>
>> #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
>> @@ -60,8 +62,14 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> tmp_size = *size;
>>
>> vva = vhost_user_iotlb_cache_find(vq, iova, &tmp_size, perm);
>> - if (tmp_size == *size)
>> + if (tmp_size == *size) {
>> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
>> + vq->stats.iotlb_hits++;
>> return vva;
>> + }
>> +
>> + if (dev->flags & VIRTIO_DEV_STATS_ENABLED)
>> + vq->stats.iotlb_misses++;
>>
>> iova += tmp_size;
>>
>> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
>> index 13c5c2266d..e876fc157b 100644
>> --- a/lib/vhost/vhost.h
>> +++ b/lib/vhost/vhost.h
>> @@ -134,6 +134,9 @@ struct virtqueue_stats {
>> /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */
>> uint64_t size_bins[8];
>> uint64_t guest_notifications;
>> + uint64_t iotlb_hits;
>> + uint64_t iotlb_misses;
>> + uint64_t iotlb_errors;
>
> This seems defined but never used.
Good catch, I'll remove it in next revision.
Regards,
Maxime
> Thanks,
> Chenbo
>
>> };
>>
>> /**
>> --
>> 2.35.1
>