* [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init Tiwei Bie
` (10 subsequent siblings)
11 siblings, 0 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Tiwei Bie (10):
net/virtio: fix typo in packed ring init
net/virtio: fix interrupt helper for packed ring
net/virtio: add missing barrier in interrupt enable
net/virtio: optimize flags update for packed ring
net/virtio: refactor virtqueue structure
net/virtio: drop redundant suffix in packed ring structure
net/virtio: drop unused field in Tx region structure
net/virtio: add interrupt helper for split ring
net/virtio: add ctrl vq helper for split ring
net/virtio: improve batching in standard Rx path
drivers/net/virtio/virtio_ethdev.c | 172 +++++++++---------
drivers/net/virtio/virtio_ring.h | 15 +-
drivers/net/virtio/virtio_rxtx.c | 139 +++++++-------
drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 22 +--
drivers/net/virtio/virtio_user_ethdev.c | 11 +-
drivers/net/virtio/virtqueue.c | 6 +-
drivers/net/virtio/virtqueue.h | 100 +++++-----
10 files changed, 241 insertions(+), 230 deletions(-)
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring Tiwei Bie
` (9 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev; +Cc: stable
The pointer to the event structure should be cast to uintptr_t first.
Otherwise the added size is scaled by the element size due to pointer
arithmetic, and the device event area ends up at the wrong address.
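As an illustration only (the local variables below are made up; the
structure and field names are the ones used by vring_init_packed(),
and RTE_ALIGN_CEIL is left out for brevity):

    struct vring_packed_desc_event *ev = vr->driver_event;

    /* before: pointer arithmetic, so the offset is scaled by the element
     * size -- this advances sizeof(struct vring_packed_desc_event)
     * elements (16 bytes for this 4-byte struct) instead of 4 bytes */
    uintptr_t wrong = (uintptr_t)(ev + sizeof(struct vring_packed_desc_event));

    /* after: integer arithmetic on the casted pointer, advancing exactly
     * sizeof(struct vring_packed_desc_event) bytes, as intended */
    uintptr_t right = (uintptr_t)ev + sizeof(struct vring_packed_desc_event);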
Fixes: f803734b0f2e ("net/virtio: vring init for packed queues")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ring.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 1760823c6..5a37629fe 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -165,7 +165,7 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
vr->driver_event = (struct vring_packed_desc_event *)(p +
vr->num * sizeof(struct vring_packed_desc));
vr->device_event = (struct vring_packed_desc_event *)
- RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
+ RTE_ALIGN_CEIL(((uintptr_t)vr->driver_event +
sizeof(struct vring_packed_desc_event)), align);
}
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init
2019-03-19 6:43 ` [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 8:39 ` Jens Freimann
2019-03-19 8:39 ` Jens Freimann
2019-03-19 12:46 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 8:39 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev, stable
On Tue, Mar 19, 2019 at 02:43:03PM +0800, Tiwei Bie wrote:
>The pointer to event structure should be cast to uintptr_t first.
>
>Fixes: f803734b0f2e ("net/virtio: vring init for packed queues")
>Cc: stable@dpdk.org
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ring.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init
2019-03-19 6:43 ` [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 8:39 ` Jens Freimann
@ 2019-03-19 12:46 ` Maxime Coquelin
2019-03-19 12:46 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 12:46 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev; +Cc: stable
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> The pointer to event structure should be cast to uintptr_t first.
>
> Fixes: f803734b0f2e ("net/virtio: vring init for packed queues")
> Cc: stable@dpdk.org
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ring.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` [dpdk-dev] [PATCH 01/10] net/virtio: fix typo in packed ring init Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 12:48 ` Maxime Coquelin
2019-03-19 6:43 ` [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable Tiwei Bie
` (8 subsequent siblings)
11 siblings, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev; +Cc: stable
When disabling the interrupt, the shadow event flags should also be
updated accordingly; otherwise virtqueue_enable_intr_packed(), which
only writes the ring when the shadow says the interrupt is disabled,
would skip re-enabling it. The unnecessary wmb is also dropped.
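To spell out the failure mode, a simplified, self-contained model (the
flag names come from virtqueue.h, their values are assumed to match the
virtio 1.1 spec; this is not the driver code itself):

    #include <stdint.h>

    #define RING_EVENT_FLAGS_ENABLE  0x0   /* assumed values */
    #define RING_EVENT_FLAGS_DISABLE 0x1

    int main(void)
    {
            uint16_t ring   = RING_EVENT_FLAGS_ENABLE; /* device-visible flags */
            uint16_t shadow = RING_EVENT_FLAGS_ENABLE; /* driver-side copy     */

            ring = RING_EVENT_FLAGS_DISABLE;   /* old disable: shadow not updated */

            if (shadow == RING_EVENT_FLAGS_DISABLE) { /* enable: skipped, since   */
                    shadow = RING_EVENT_FLAGS_ENABLE; /* the shadow is stale, so  */
                    ring = shadow;                    /* interrupts stay disabled */
            }

            return ring == RING_EVENT_FLAGS_DISABLE;  /* returns 1: still off */
    }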
Fixes: e9f4feb7e622 ("net/virtio: add packed virtqueue helpers")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtqueue.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index ca9d8e6e3..24fa873c3 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -321,12 +321,13 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
static inline void
virtqueue_disable_intr_packed(struct virtqueue *vq)
{
- uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
-
- *event_flags = RING_EVENT_FLAGS_DISABLE;
+ if (vq->event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->ring_packed.driver_event->desc_event_flags =
+ vq->event_flags_shadow;
+ }
}
-
/**
* Tell the backend not to interrupt us.
*/
@@ -348,7 +349,6 @@ virtqueue_enable_intr_packed(struct virtqueue *vq)
uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
- virtio_wmb(vq->hw->weak_barriers);
vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
*event_flags = vq->event_flags_shadow;
}
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 12:48 ` Maxime Coquelin
2019-03-19 12:48 ` Maxime Coquelin
1 sibling, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 12:48 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev; +Cc: stable
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> When disabling interrupt, the shadow event flags should also be
> updated accordingly. The unnecessary wmb is also dropped.
>
> Fixes: e9f4feb7e622 ("net/virtio: add packed virtqueue helpers")
> Cc: stable@dpdk.org
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtqueue.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (2 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 02/10] net/virtio: fix interrupt helper for packed ring Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 12:49 ` Maxime Coquelin
2019-03-19 6:43 ` [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring Tiwei Bie
` (7 subsequent siblings)
11 siblings, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev; +Cc: stable
Typically, after enabling the Rx interrupt, a check should be done
to make sure that there are no new incoming packets before going
to sleep. So a barrier is needed to make sure that any following
check won't happen before the interrupt is actually enabled.
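As a sketch of the application pattern this protects (illustrative
only: the helper and its names are hypothetical, while the two
rte_eth_* calls are the regular ethdev API):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* hypothetical poll-then-sleep helper an application might use */
    static int
    rx_idle_example(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[32];

            /* enable the Rx interrupt; with this patch the virtio PMD
             * callback behind this call ends with virtio_mb() */
            rte_eth_dev_rx_intr_enable(port_id, queue_id);

            /* re-check the ring once before sleeping; the barrier keeps
             * this read from being observed before the enable above */
            if (rte_eth_rx_burst(port_id, queue_id, pkts, 32) != 0)
                    return 0;  /* packets arrived in between, keep polling */

            return 1;          /* safe to sleep now (e.g. epoll wait) */
    }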
Fixes: c056be239db5 ("net/virtio: add Rx interrupt enable/disable functions")
Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 78ba7bd29..ff16fb63e 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -850,10 +850,12 @@ virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
static int
virtio_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
+ struct virtio_hw *hw = dev->data->dev_private;
struct virtnet_rx *rxvq = dev->data->rx_queues[queue_id];
struct virtqueue *vq = rxvq->vq;
virtqueue_enable_intr(vq);
+ virtio_mb(hw->weak_barriers);
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable
2019-03-19 6:43 ` [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 12:49 ` Maxime Coquelin
2019-03-19 12:49 ` Maxime Coquelin
1 sibling, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 12:49 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev; +Cc: stable
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Typically, after enabling Rx interrupt, a check should be done
> to make sure that there is no new incoming packets before going
> to sleep. So a barrier is needed to make sure that any following
> check won't happen before the interrupt is actually enabled.
>
> Fixes: c056be239db5 ("net/virtio: add Rx interrupt enable/disable functions")
> Cc: stable@dpdk.org
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 2 ++
> 1 file changed, 2 insertions(+)
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (3 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 03/10] net/virtio: add missing barrier in interrupt enable Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure Tiwei Bie
` (6 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Cache the AVAIL, USED and WRITE bits to avoid recalculating
them as much as possible. Note that the WRITE bit isn't
cached for the control queue.
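To make the XOR trick explicit, a small self-contained sketch (the
macro definitions below are assumptions mirroring the driver's
virtio_ring.h, not taken from this patch):

    #include <assert.h>
    #include <stdint.h>

    #define VRING_DESC_F_WRITE    2                    /* assumed values */
    #define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
    #define VRING_DESC_F_USED(b)  ((uint16_t)(b) << 15)

    int main(void)
    {
            uint16_t wrap = 1;                         /* avail wrap counter */
            uint16_t cached = VRING_DESC_F_AVAIL(wrap) |
                              VRING_DESC_F_USED(!wrap) |
                              VRING_DESC_F_WRITE;      /* Rx queue case */

            /* ring wrap-around: flip the counter ... */
            wrap ^= 1;
            /* ... and just toggle both bits in the cached copy */
            cached ^= VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);

            /* still identical to a full recomputation; WRITE untouched */
            assert(cached == (VRING_DESC_F_AVAIL(wrap) |
                              VRING_DESC_F_USED(!wrap) |
                              VRING_DESC_F_WRITE));
            return 0;
    }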
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
drivers/net/virtio/virtqueue.h | 8 +++----
3 files changed, 32 insertions(+), 42 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index ff16fb63e..9060b6b33 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -149,7 +149,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
int head;
struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
struct virtio_pmd_ctrl *result;
- bool avail_wrap_counter;
+ uint16_t flags;
int sum = 0;
int nb_descs = 0;
int k;
@@ -161,14 +161,15 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
* One RX packet for ACK.
*/
head = vq->vq_avail_idx;
- avail_wrap_counter = vq->avail_wrap_counter;
+ flags = vq->cached_flags;
desc[head].addr = cvq->virtio_net_hdr_mem;
desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
for (k = 0; k < pkt_num; k++) {
@@ -177,34 +178,31 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
desc[vq->vq_avail_idx].len = dlen[k];
desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ vq->cached_flags;
sum += dlen[k];
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
- desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
virtio_wmb(vq->hw->weak_barriers);
- desc[head].flags = VRING_DESC_F_NEXT |
- VRING_DESC_F_AVAIL(avail_wrap_counter) |
- VRING_DESC_F_USED(!avail_wrap_counter);
+ desc[head].flags = VRING_DESC_F_NEXT | flags;
virtio_wmb(vq->hw->weak_barriers);
virtqueue_notify(vq);
@@ -226,12 +224,12 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
"vq->vq_avail_idx=%d\n"
"vq->vq_used_cons_idx=%d\n"
- "vq->avail_wrap_counter=%d\n"
+ "vq->cached_flags=0x%x\n"
"vq->used_wrap_counter=%d\n",
vq->vq_free_cnt,
vq->vq_avail_idx,
vq->vq_used_cons_idx,
- vq->avail_wrap_counter,
+ vq->cached_flags,
vq->used_wrap_counter);
result = cvq->virtio_net_hdr_mz->addr;
@@ -491,11 +489,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->vq_nentries = vq_size;
vq->event_flags_shadow = 0;
if (vtpci_packed_queue(hw)) {
- vq->avail_wrap_counter = 1;
vq->used_wrap_counter = 1;
- vq->avail_used_flags =
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ vq->cached_flags = VRING_DESC_F_AVAIL(1);
+ if (queue_type == VTNET_RQ)
+ vq->cached_flags |= VRING_DESC_F_WRITE;
}
/*
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 771d3c3f6..3c354baef 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -431,7 +431,7 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
struct rte_mbuf **cookie, uint16_t num)
{
struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
- uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+ uint16_t flags = vq->cached_flags;
struct virtio_hw *hw = vq->hw;
struct vq_desc_extra *dxp;
uint16_t idx;
@@ -460,11 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
start_dp[idx].flags = flags;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
- vq->avail_used_flags =
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
- flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
+ flags = vq->cached_flags;
}
}
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
@@ -643,7 +641,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
dxp->ndescs = 1;
dxp->cookie = cookie;
- flags = vq->avail_used_flags;
+ flags = vq->cached_flags;
/* prepend cannot fail, checked by caller */
hdr = (struct virtio_net_hdr *)
@@ -662,8 +660,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
- vq->avail_used_flags ^=
+ vq->cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -705,7 +702,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
head_dp = &vq->ring_packed.desc_packed[idx];
head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- head_flags |= vq->avail_used_flags;
+ head_flags |= vq->cached_flags;
if (can_push) {
/* prepend cannot fail, checked by caller */
@@ -730,10 +727,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
- vq->avail_used_flags =
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
@@ -746,17 +741,15 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
start_dp[idx].len = cookie->data_len;
if (likely(idx != head_idx)) {
flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- flags |= vq->avail_used_flags;
+ flags |= vq->cached_flags;
start_dp[idx].flags = flags;
}
prev = idx;
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->avail_wrap_counter ^= 1;
- vq->avail_used_flags =
- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
- VRING_DESC_F_USED(!vq->avail_wrap_counter);
+ vq->cached_flags ^=
+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
} while ((cookie = cookie->next) != NULL);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 24fa873c3..80c0c43c3 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -193,10 +193,10 @@ struct virtqueue {
struct virtio_hw *hw; /**< virtio_hw structure pointer. */
struct vring vq_ring; /**< vring keeping desc, used and avail */
struct vring_packed ring_packed; /**< vring keeping descs */
- bool avail_wrap_counter;
bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
uint16_t event_flags_shadow;
- uint16_t avail_used_flags;
+
/**
* Last consumed descriptor in the used table,
* trails vq_ring.used->idx.
@@ -478,9 +478,9 @@ virtqueue_notify(struct virtqueue *vq)
if (vtpci_packed_queue((vq)->hw)) { \
PMD_INIT_LOG(DEBUG, \
"VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \
- "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \
+ " cached_flags=0x%x; used_wrap_counter=%d", \
(vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
- (vq)->vq_avail_idx, (vq)->avail_wrap_counter, \
+ (vq)->vq_avail_idx, (vq)->cached_flags, \
(vq)->used_wrap_counter); \
break; \
} \
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 8:54 ` Jens Freimann
2019-03-19 8:54 ` Jens Freimann
2019-03-19 9:37 ` Tiwei Bie
2019-03-19 12:58 ` Maxime Coquelin
2 siblings, 2 replies; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 8:54 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:06PM +0800, Tiwei Bie wrote:
>Cache the AVAIL, USED and WRITE bits to avoid calculating
>them as much as possible. Note that, the WRITE bit isn't
>cached for control queue.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
> drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
> drivers/net/virtio/virtqueue.h | 8 +++----
> 3 files changed, 32 insertions(+), 42 deletions(-)
>
>diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>index ff16fb63e..9060b6b33 100644
>--- a/drivers/net/virtio/virtio_ethdev.c
>+++ b/drivers/net/virtio/virtio_ethdev.c
>@@ -149,7 +149,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> int head;
> struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
> struct virtio_pmd_ctrl *result;
>- bool avail_wrap_counter;
>+ uint16_t flags;
> int sum = 0;
> int nb_descs = 0;
> int k;
>@@ -161,14 +161,15 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> * One RX packet for ACK.
> */
> head = vq->vq_avail_idx;
>- avail_wrap_counter = vq->avail_wrap_counter;
>+ flags = vq->cached_flags;
> desc[head].addr = cvq->virtio_net_hdr_mem;
> desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
> vq->vq_free_cnt--;
> nb_descs++;
> if (++vq->vq_avail_idx >= vq->vq_nentries) {
> vq->vq_avail_idx -= vq->vq_nentries;
>- vq->avail_wrap_counter ^= 1;
>+ vq->cached_flags ^=
>+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
Maybe name it avail_used_flags instead of cached_flags. Also, we could
use a constant value.
> }
>
> for (k = 0; k < pkt_num; k++) {
>@@ -177,34 +178,31 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
> desc[vq->vq_avail_idx].len = dlen[k];
> desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
>- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>- VRING_DESC_F_USED(!vq->avail_wrap_counter);
>+ vq->cached_flags;
> sum += dlen[k];
> vq->vq_free_cnt--;
> nb_descs++;
> if (++vq->vq_avail_idx >= vq->vq_nentries) {
> vq->vq_avail_idx -= vq->vq_nentries;
>- vq->avail_wrap_counter ^= 1;
>+ vq->cached_flags ^=
>+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> }
> }
>
> desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
> + sizeof(struct virtio_net_ctrl_hdr);
> desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
>- desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
>- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>- VRING_DESC_F_USED(!vq->avail_wrap_counter);
>+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
> vq->vq_free_cnt--;
> nb_descs++;
> if (++vq->vq_avail_idx >= vq->vq_nentries) {
> vq->vq_avail_idx -= vq->vq_nentries;
>- vq->avail_wrap_counter ^= 1;
>+ vq->cached_flags ^=
>+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> }
>
> virtio_wmb(vq->hw->weak_barriers);
>- desc[head].flags = VRING_DESC_F_NEXT |
>- VRING_DESC_F_AVAIL(avail_wrap_counter) |
>- VRING_DESC_F_USED(!avail_wrap_counter);
>+ desc[head].flags = VRING_DESC_F_NEXT | flags;
>
> virtio_wmb(vq->hw->weak_barriers);
> virtqueue_notify(vq);
>@@ -226,12 +224,12 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
> "vq->vq_avail_idx=%d\n"
> "vq->vq_used_cons_idx=%d\n"
>- "vq->avail_wrap_counter=%d\n"
>+ "vq->cached_flags=0x%x\n"
> "vq->used_wrap_counter=%d\n",
> vq->vq_free_cnt,
> vq->vq_avail_idx,
> vq->vq_used_cons_idx,
>- vq->avail_wrap_counter,
>+ vq->cached_flags,
> vq->used_wrap_counter);
>
> result = cvq->virtio_net_hdr_mz->addr;
>@@ -491,11 +489,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> vq->vq_nentries = vq_size;
> vq->event_flags_shadow = 0;
> if (vtpci_packed_queue(hw)) {
>- vq->avail_wrap_counter = 1;
> vq->used_wrap_counter = 1;
>- vq->avail_used_flags =
>- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>- VRING_DESC_F_USED(!vq->avail_wrap_counter);
>+ vq->cached_flags = VRING_DESC_F_AVAIL(1);
>+ if (queue_type == VTNET_RQ)
>+ vq->cached_flags |= VRING_DESC_F_WRITE;
> }
>
> /*
>diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>index 771d3c3f6..3c354baef 100644
>--- a/drivers/net/virtio/virtio_rxtx.c
>+++ b/drivers/net/virtio/virtio_rxtx.c
>@@ -431,7 +431,7 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> struct rte_mbuf **cookie, uint16_t num)
> {
> struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
>- uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
>+ uint16_t flags = vq->cached_flags;
> struct virtio_hw *hw = vq->hw;
> struct vq_desc_extra *dxp;
> uint16_t idx;
>@@ -460,11 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> start_dp[idx].flags = flags;
> if (++vq->vq_avail_idx >= vq->vq_nentries) {
> vq->vq_avail_idx -= vq->vq_nentries;
>- vq->avail_wrap_counter ^= 1;
>- vq->avail_used_flags =
>- VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>- VRING_DESC_F_USED(!vq->avail_wrap_counter);
>- flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
>+ vq->cached_flags ^=
>+ VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>+ flags = vq->cached_flags;
Same here: it's not really cached, it's pre-calculated. And here we
could also use a pre-calculated constant/define.
Otherwise looks good! Did you see any performance improvements?
regards,
Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 8:54 ` Jens Freimann
2019-03-19 8:54 ` Jens Freimann
@ 2019-03-19 9:37 ` Tiwei Bie
2019-03-19 9:37 ` Tiwei Bie
2019-03-19 10:11 ` Jens Freimann
1 sibling, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 9:37 UTC (permalink / raw)
To: Jens Freimann; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 09:54:03AM +0100, Jens Freimann wrote:
> On Tue, Mar 19, 2019 at 02:43:06PM +0800, Tiwei Bie wrote:
> > Cache the AVAIL, USED and WRITE bits to avoid calculating
> > them as much as possible. Note that, the WRITE bit isn't
> > cached for control queue.
> >
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> > drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
> > drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
> > drivers/net/virtio/virtqueue.h | 8 +++----
> > 3 files changed, 32 insertions(+), 42 deletions(-)
> >
> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > index ff16fb63e..9060b6b33 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -149,7 +149,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> > int head;
> > struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
> > struct virtio_pmd_ctrl *result;
> > - bool avail_wrap_counter;
> > + uint16_t flags;
> > int sum = 0;
> > int nb_descs = 0;
> > int k;
> > @@ -161,14 +161,15 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> > * One RX packet for ACK.
> > */
> > head = vq->vq_avail_idx;
> > - avail_wrap_counter = vq->avail_wrap_counter;
> > + flags = vq->cached_flags;
> > desc[head].addr = cvq->virtio_net_hdr_mem;
> > desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
> > vq->vq_free_cnt--;
> > nb_descs++;
> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
> > vq->vq_avail_idx -= vq->vq_nentries;
> > - vq->avail_wrap_counter ^= 1;
> > + vq->cached_flags ^=
> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>
> Maybe name it avail_used_flags instead of cached flags. Also we could
> use a constant value.
It also contains the WRITE bit (not just the AVAIL and USED bits)
in the Rx path. That's why I didn't name it avail_used_flags.
>
> > }
> >
> > for (k = 0; k < pkt_num; k++) {
> > @@ -177,34 +178,31 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> > + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
> > desc[vq->vq_avail_idx].len = dlen[k];
> > desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
> > + vq->cached_flags;
> > sum += dlen[k];
> > vq->vq_free_cnt--;
> > nb_descs++;
> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
> > vq->vq_avail_idx -= vq->vq_nentries;
> > - vq->avail_wrap_counter ^= 1;
> > + vq->cached_flags ^=
> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> > }
> > }
> >
> > desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
> > + sizeof(struct virtio_net_ctrl_hdr);
> > desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
> > - desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
> > + desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
> > vq->vq_free_cnt--;
> > nb_descs++;
> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
> > vq->vq_avail_idx -= vq->vq_nentries;
> > - vq->avail_wrap_counter ^= 1;
> > + vq->cached_flags ^=
> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> > }
> >
> > virtio_wmb(vq->hw->weak_barriers);
> > - desc[head].flags = VRING_DESC_F_NEXT |
> > - VRING_DESC_F_AVAIL(avail_wrap_counter) |
> > - VRING_DESC_F_USED(!avail_wrap_counter);
> > + desc[head].flags = VRING_DESC_F_NEXT | flags;
> >
> > virtio_wmb(vq->hw->weak_barriers);
> > virtqueue_notify(vq);
> > @@ -226,12 +224,12 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
> > PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
> > "vq->vq_avail_idx=%d\n"
> > "vq->vq_used_cons_idx=%d\n"
> > - "vq->avail_wrap_counter=%d\n"
> > + "vq->cached_flags=0x%x\n"
> > "vq->used_wrap_counter=%d\n",
> > vq->vq_free_cnt,
> > vq->vq_avail_idx,
> > vq->vq_used_cons_idx,
> > - vq->avail_wrap_counter,
> > + vq->cached_flags,
> > vq->used_wrap_counter);
> >
> > result = cvq->virtio_net_hdr_mz->addr;
> > @@ -491,11 +489,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
> > vq->vq_nentries = vq_size;
> > vq->event_flags_shadow = 0;
> > if (vtpci_packed_queue(hw)) {
> > - vq->avail_wrap_counter = 1;
> > vq->used_wrap_counter = 1;
> > - vq->avail_used_flags =
> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
> > + vq->cached_flags = VRING_DESC_F_AVAIL(1);
> > + if (queue_type == VTNET_RQ)
> > + vq->cached_flags |= VRING_DESC_F_WRITE;
> > }
> >
> > /*
> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> > index 771d3c3f6..3c354baef 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -431,7 +431,7 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> > struct rte_mbuf **cookie, uint16_t num)
> > {
> > struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
> > - uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
> > + uint16_t flags = vq->cached_flags;
> > struct virtio_hw *hw = vq->hw;
> > struct vq_desc_extra *dxp;
> > uint16_t idx;
> > @@ -460,11 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
> > start_dp[idx].flags = flags;
> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
> > vq->vq_avail_idx -= vq->vq_nentries;
> > - vq->avail_wrap_counter ^= 1;
> > - vq->avail_used_flags =
> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
> > - flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
> > + vq->cached_flags ^=
> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> > + flags = vq->cached_flags;
>
> same here. it's not really cached, it's pre-calculated. And here we
> could also use a pre-calculated constand/define.
For pre-calculated constant/define, do you mean VRING_DESC_F_AVAIL(1)
and VRING_DESC_F_USED(1)? There is still a little code in virtio-user
using them without constantly passing 1. I planned to fully get rid
of them in a separate patch later (but I can do it in this series if
anyone wants).
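For illustration only, such a define (purely hypothetical here, it
doesn't exist in the tree at this point) could look like:

    #define VRING_PACKED_DESC_F_AVAIL_USED \
            (VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1))

    /* ... so the wrap-around updates would become */
    vq->cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;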
>
> Otherwise looks good! Did you see any performance improvements?
Yeah, I saw slightly better performance in a quick test.
Thanks,
Tiwei
>
>
> regards,
> Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 9:37 ` Tiwei Bie
2019-03-19 9:37 ` Tiwei Bie
@ 2019-03-19 10:11 ` Jens Freimann
2019-03-19 10:11 ` Jens Freimann
2019-03-19 12:50 ` Maxime Coquelin
1 sibling, 2 replies; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 10:11 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 05:37:34PM +0800, Tiwei Bie wrote:
>On Tue, Mar 19, 2019 at 09:54:03AM +0100, Jens Freimann wrote:
>> On Tue, Mar 19, 2019 at 02:43:06PM +0800, Tiwei Bie wrote:
>> > Cache the AVAIL, USED and WRITE bits to avoid calculating
>> > them as much as possible. Note that, the WRITE bit isn't
>> > cached for control queue.
>> >
>> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>> > ---
>> > drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
>> > drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
>> > drivers/net/virtio/virtqueue.h | 8 +++----
>> > 3 files changed, 32 insertions(+), 42 deletions(-)
>> >
>> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>> > index ff16fb63e..9060b6b33 100644
>> > --- a/drivers/net/virtio/virtio_ethdev.c
>> > +++ b/drivers/net/virtio/virtio_ethdev.c
>> > @@ -149,7 +149,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
>> > int head;
>> > struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
>> > struct virtio_pmd_ctrl *result;
>> > - bool avail_wrap_counter;
>> > + uint16_t flags;
>> > int sum = 0;
>> > int nb_descs = 0;
>> > int k;
>> > @@ -161,14 +161,15 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
>> > * One RX packet for ACK.
>> > */
>> > head = vq->vq_avail_idx;
>> > - avail_wrap_counter = vq->avail_wrap_counter;
>> > + flags = vq->cached_flags;
>> > desc[head].addr = cvq->virtio_net_hdr_mem;
>> > desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
>> > vq->vq_free_cnt--;
>> > nb_descs++;
>> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
>> > vq->vq_avail_idx -= vq->vq_nentries;
>> > - vq->avail_wrap_counter ^= 1;
>> > + vq->cached_flags ^=
>> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>>
>> Maybe name it avail_used_flags instead of cached flags. Also we could
>> use a constant value.
>
>It also contains the WRITE bit (not just AVAIL and USED bits)
>for Rx path. That's why I didn't name it as avail_used_flags.
ok
>>
>> > }
>> >
>> > for (k = 0; k < pkt_num; k++) {
>> > @@ -177,34 +178,31 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
>> > + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
>> > desc[vq->vq_avail_idx].len = dlen[k];
>> > desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
>> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>> > + vq->cached_flags;
>> > sum += dlen[k];
>> > vq->vq_free_cnt--;
>> > nb_descs++;
>> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
>> > vq->vq_avail_idx -= vq->vq_nentries;
>> > - vq->avail_wrap_counter ^= 1;
>> > + vq->cached_flags ^=
>> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>> > }
>> > }
>> >
>> > desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
>> > + sizeof(struct virtio_net_ctrl_hdr);
>> > desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
>> > - desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
>> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>> > + desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
>> > vq->vq_free_cnt--;
>> > nb_descs++;
>> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
>> > vq->vq_avail_idx -= vq->vq_nentries;
>> > - vq->avail_wrap_counter ^= 1;
>> > + vq->cached_flags ^=
>> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>> > }
>> >
>> > virtio_wmb(vq->hw->weak_barriers);
>> > - desc[head].flags = VRING_DESC_F_NEXT |
>> > - VRING_DESC_F_AVAIL(avail_wrap_counter) |
>> > - VRING_DESC_F_USED(!avail_wrap_counter);
>> > + desc[head].flags = VRING_DESC_F_NEXT | flags;
>> >
>> > virtio_wmb(vq->hw->weak_barriers);
>> > virtqueue_notify(vq);
>> > @@ -226,12 +224,12 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
>> > PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
>> > "vq->vq_avail_idx=%d\n"
>> > "vq->vq_used_cons_idx=%d\n"
>> > - "vq->avail_wrap_counter=%d\n"
>> > + "vq->cached_flags=0x%x\n"
>> > "vq->used_wrap_counter=%d\n",
>> > vq->vq_free_cnt,
>> > vq->vq_avail_idx,
>> > vq->vq_used_cons_idx,
>> > - vq->avail_wrap_counter,
>> > + vq->cached_flags,
>> > vq->used_wrap_counter);
>> >
>> > result = cvq->virtio_net_hdr_mz->addr;
>> > @@ -491,11 +489,10 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
>> > vq->vq_nentries = vq_size;
>> > vq->event_flags_shadow = 0;
>> > if (vtpci_packed_queue(hw)) {
>> > - vq->avail_wrap_counter = 1;
>> > vq->used_wrap_counter = 1;
>> > - vq->avail_used_flags =
>> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>> > + vq->cached_flags = VRING_DESC_F_AVAIL(1);
>> > + if (queue_type == VTNET_RQ)
>> > + vq->cached_flags |= VRING_DESC_F_WRITE;
>> > }
>> >
>> > /*
>> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> > index 771d3c3f6..3c354baef 100644
>> > --- a/drivers/net/virtio/virtio_rxtx.c
>> > +++ b/drivers/net/virtio/virtio_rxtx.c
>> > @@ -431,7 +431,7 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
>> > struct rte_mbuf **cookie, uint16_t num)
>> > {
>> > struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
>> > - uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
>> > + uint16_t flags = vq->cached_flags;
>> > struct virtio_hw *hw = vq->hw;
>> > struct vq_desc_extra *dxp;
>> > uint16_t idx;
>> > @@ -460,11 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
>> > start_dp[idx].flags = flags;
>> > if (++vq->vq_avail_idx >= vq->vq_nentries) {
>> > vq->vq_avail_idx -= vq->vq_nentries;
>> > - vq->avail_wrap_counter ^= 1;
>> > - vq->avail_used_flags =
>> > - VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>> > - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>> > - flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
>> > + vq->cached_flags ^=
>> > + VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
>> > + flags = vq->cached_flags;
>>
>> same here. it's not really cached, it's pre-calculated. And here we
>> could also use a pre-calculated constant/define.
>
>For pre-calculated constant/define, do you mean VRING_DESC_F_AVAIL(1)
>and VRING_DESC_F_USED(1)? There is still a little code in virtio-user
>that uses them without constantly passing 1. I planned to fully get rid
>of them in a separate patch later (but I can do it in this series if
>anyone wants).
Yes, that's what I meant. And it's fine by me if you do it in a
follow-up.
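For reference, a minimal sketch of what such a pre-calculated define could look like (the names are hypothetical, not taken from the tree; bit positions per the virtio 1.1 packed-ring layout):

#define VRING_PACKED_DESC_F_AVAIL  (1 << 7)   /* descriptor is available */
#define VRING_PACKED_DESC_F_USED   (1 << 15)  /* descriptor has been used */

/* Flipped together on every ring wrap-around, e.g.:
 *     vq->cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
 */
#define VRING_PACKED_DESC_F_AVAIL_USED \
	(VRING_PACKED_DESC_F_AVAIL | VRING_PACKED_DESC_F_USED)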
>
>>
>> Otherwise looks good! Did you see any performance improvements?
>
>Yeah, I saw slightly better performance in a quick test.
Nice.
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
regards,
Jens
>
>>
>>
>> regards,
>> Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 10:11 ` Jens Freimann
2019-03-19 10:11 ` Jens Freimann
@ 2019-03-19 12:50 ` Maxime Coquelin
2019-03-19 12:50 ` Maxime Coquelin
1 sibling, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 12:50 UTC (permalink / raw)
To: Jens Freimann, Tiwei Bie; +Cc: zhihong.wang, dev
On 3/19/19 11:11 AM, Jens Freimann wrote:
>>>
>>> same here. it's not really cached, it's pre-calculated. And here we
>>> could also use a pre-calculated constant/define.
>>
>> For pre-calculated constant/define, do you mean VRING_DESC_F_AVAIL(1)
>> and VRING_DESC_F_USED(1)? There is still a little code in virtio-user
>> that uses them without constantly passing 1. I planned to fully get rid
>> of them in a separate patch later (but I can do it in this series if
>> anyone wants).
>
> Yes, that's what I meant. And it's fine by me if you do it in a
> follow-up.
Agree, it can be done in a separate patch.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 8:54 ` Jens Freimann
@ 2019-03-19 12:58 ` Maxime Coquelin
2019-03-19 12:58 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 12:58 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Cache the AVAIL, USED and WRITE bits to avoid calculating
> them as much as possible. Note that, the WRITE bit isn't
> cached for control queue.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 35 ++++++++++++++----------------
> drivers/net/virtio/virtio_rxtx.c | 31 ++++++++++----------------
> drivers/net/virtio/virtqueue.h | 8 +++----
> 3 files changed, 32 insertions(+), 42 deletions(-)
>
Nice patch!
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (4 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 04/10] net/virtio: optimize flags update for packed ring Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure Tiwei Bie
` (5 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Put split ring and packed ring specific fields into separate
sub-structures, and also union them as they won't be available
at the same time.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
drivers/net/virtio/virtqueue.c | 6 +-
drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
7 files changed, 117 insertions(+), 109 deletions(-)
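Condensed sketch of the resulting virtqueue layout (extracted from the virtqueue.h hunk below; the stub ring types only make the snippet self-contained and are not the driver's definitions):

#include <stdbool.h>
#include <stdint.h>

/* Stubs standing in for the driver's struct vring / struct vring_packed. */
struct vring { void *desc, *avail, *used; };
struct vring_packed { void *desc_packed, *driver_event, *device_event; };

struct virtqueue_sketch {
	union {
		struct {
			struct vring ring;         /* split: desc, avail, used */
		} vq_split;

		struct {
			struct vring_packed ring;  /* packed: descs and events */
			bool used_wrap_counter;
			uint16_t cached_flags;     /* pre-set AVAIL/USED (+ WRITE) */
			uint16_t event_flags_shadow;
		} vq_packed;
	};

	uint16_t vq_used_cons_idx;  /* common fields continue unchanged */
	uint16_t vq_nentries;
	/* ... */
};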
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9060b6b33..bc91ad493 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -147,7 +147,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
{
struct virtqueue *vq = cvq->vq;
int head;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct virtio_pmd_ctrl *result;
uint16_t flags;
int sum = 0;
@@ -161,14 +161,14 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
* One RX packet for ACK.
*/
head = vq->vq_avail_idx;
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
desc[head].addr = cvq->virtio_net_hdr_mem;
desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -178,13 +178,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
desc[vq->vq_avail_idx].len = dlen[k];
desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
- vq->cached_flags;
+ vq->vq_packed.cached_flags;
sum += dlen[k];
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
@@ -192,12 +192,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
- desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -218,19 +219,19 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
vq->vq_used_cons_idx += nb_descs;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
"vq->vq_avail_idx=%d\n"
"vq->vq_used_cons_idx=%d\n"
- "vq->cached_flags=0x%x\n"
- "vq->used_wrap_counter=%d\n",
+ "vq->vq_packed.cached_flags=0x%x\n"
+ "vq->vq_packed.used_wrap_counter=%d\n",
vq->vq_free_cnt,
vq->vq_avail_idx,
vq->vq_used_cons_idx,
- vq->cached_flags,
- vq->used_wrap_counter);
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
result = cvq->virtio_net_hdr_mz->addr;
return result;
@@ -280,30 +281,30 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
* At least one TX packet per argument;
* One RX packet for ACK.
*/
- vq->vq_ring.desc[head].flags = VRING_DESC_F_NEXT;
- vq->vq_ring.desc[head].addr = cvq->virtio_net_hdr_mem;
- vq->vq_ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
- i = vq->vq_ring.desc[head].next;
+ i = vq->vq_split.ring.desc[head].next;
for (k = 0; k < pkt_num; k++) {
- vq->vq_ring.desc[i].flags = VRING_DESC_F_NEXT;
- vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr)
+ sizeof(ctrl->status) + sizeof(uint8_t)*sum;
- vq->vq_ring.desc[i].len = dlen[k];
+ vq->vq_split.ring.desc[i].len = dlen[k];
sum += dlen[k];
vq->vq_free_cnt--;
- i = vq->vq_ring.desc[i].next;
+ i = vq->vq_split.ring.desc[i].next;
}
- vq->vq_ring.desc[i].flags = VRING_DESC_F_WRITE;
- vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
- vq->vq_ring.desc[i].len = sizeof(ctrl->status);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->status);
vq->vq_free_cnt--;
- vq->vq_desc_head_idx = vq->vq_ring.desc[i].next;
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
vq_update_avail_ring(vq, head);
vq_update_avail_idx(vq);
@@ -324,16 +325,17 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
used_idx = (uint32_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
idx = (uint32_t) uep->id;
desc_idx = idx;
- while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
vq->vq_free_cnt++;
}
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
vq->vq_desc_head_idx = idx;
vq->vq_used_cons_idx++;
@@ -395,7 +397,6 @@ static void
virtio_init_vring(struct virtqueue *vq)
{
int size = vq->vq_nentries;
- struct vring *vr = &vq->vq_ring;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
PMD_INIT_FUNC_TRACE();
@@ -409,10 +410,12 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
if (vtpci_packed_queue(vq->hw)) {
- vring_init_packed(&vq->ring_packed, ring_mem,
+ vring_init_packed(&vq->vq_packed.ring, ring_mem,
VIRTIO_PCI_VRING_ALIGN, size);
vring_desc_init_packed(vq, size);
} else {
+ struct vring *vr = &vq->vq_split.ring;
+
vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
vring_desc_init_split(vr->desc, size);
}
@@ -487,12 +490,12 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->hw = hw;
vq->vq_queue_index = vtpci_queue_idx;
vq->vq_nentries = vq_size;
- vq->event_flags_shadow = 0;
if (vtpci_packed_queue(hw)) {
- vq->used_wrap_counter = 1;
- vq->cached_flags = VRING_DESC_F_AVAIL(1);
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_DESC_F_AVAIL(1);
+ vq->vq_packed.event_flags_shadow = 0;
if (queue_type == VTNET_RQ)
- vq->cached_flags |= VRING_DESC_F_WRITE;
+ vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
}
/*
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 3c354baef..02f8d9451 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -62,13 +62,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
struct vq_desc_extra *dxp;
uint16_t desc_idx_last = desc_idx;
- dp = &vq->vq_ring.desc[desc_idx];
+ dp = &vq->vq_split.ring.desc[desc_idx];
dxp = &vq->vq_descx[desc_idx];
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
while (dp->flags & VRING_DESC_F_NEXT) {
desc_idx_last = dp->next;
- dp = &vq->vq_ring.desc[dp->next];
+ dp = &vq->vq_split.ring.desc[dp->next];
}
}
dxp->ndescs = 0;
@@ -81,7 +81,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
vq->vq_desc_head_idx = desc_idx;
} else {
- dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+ dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx];
dp_tail->next = desc_idx;
}
@@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
struct vring_packed_desc *desc;
uint16_t i;
- desc = vq->ring_packed.desc_packed;
+ desc = vq->vq_packed.ring.desc_packed;
for (i = 0; i < num; i++) {
used_idx = vq->vq_used_cons_idx;
@@ -141,7 +141,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
vq->vq_used_cons_idx++;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
}
@@ -160,7 +160,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
/* Caller does the check */
for (i = 0; i < num ; i++) {
used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t) uep->id;
len[i] = uep->len;
cookie = (struct rte_mbuf *)vq->vq_descx[desc_idx].cookie;
@@ -199,7 +199,7 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
for (i = 0; i < num; i++) {
used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
/* Desc idx same as used idx */
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
len[i] = uep->len;
cookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie;
@@ -229,7 +229,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id, curr_id, free_cnt = 0;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -244,7 +244,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
num -= dxp->ndescs;
if (used_idx >= size) {
used_idx -= size;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
if (dxp->cookie != NULL) {
rte_pktmbuf_free(dxp->cookie);
@@ -261,7 +261,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -272,7 +272,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
vq->vq_used_cons_idx += dxp->ndescs;
if (vq->vq_used_cons_idx >= size) {
vq->vq_used_cons_idx -= size;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
vq_ring_free_id_packed(vq, id);
if (dxp->cookie != NULL) {
@@ -302,7 +302,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
struct vq_desc_extra *dxp;
used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t) uep->id;
dxp = &vq->vq_descx[desc_idx];
@@ -356,7 +356,7 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
return -EMSGSIZE;
head_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
while (i < num) {
idx = head_idx & (vq->vq_nentries - 1);
@@ -389,7 +389,7 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
{
struct vq_desc_extra *dxp;
struct virtio_hw *hw = vq->hw;
- struct vring_desc *start_dp = vq->vq_ring.desc;
+ struct vring_desc *start_dp = vq->vq_split.ring.desc;
uint16_t idx, i;
if (unlikely(vq->vq_free_cnt == 0))
@@ -430,8 +430,8 @@ static inline int
virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
struct rte_mbuf **cookie, uint16_t num)
{
- struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
- uint16_t flags = vq->cached_flags;
+ struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc_packed;
+ uint16_t flags = vq->vq_packed.cached_flags;
struct virtio_hw *hw = vq->hw;
struct vq_desc_extra *dxp;
uint16_t idx;
@@ -460,9 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
start_dp[idx].flags = flags;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
}
}
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
@@ -589,7 +589,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
uint16_t i = 0;
idx = vq->vq_desc_head_idx;
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
while (i < num) {
idx = idx & (vq->vq_nentries - 1);
@@ -635,13 +635,13 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
idx = vq->vq_avail_idx;
- dp = &vq->ring_packed.desc_packed[idx];
+ dp = &vq->vq_packed.ring.desc_packed[idx];
dxp = &vq->vq_descx[id];
dxp->ndescs = 1;
dxp->cookie = cookie;
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
/* prepend cannot fail, checked by caller */
hdr = (struct virtio_net_hdr *)
@@ -660,7 +660,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -698,11 +698,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
head_idx = vq->vq_avail_idx;
idx = head_idx;
prev = head_idx;
- start_dp = vq->ring_packed.desc_packed;
+ start_dp = vq->vq_packed.ring.desc_packed;
- head_dp = &vq->ring_packed.desc_packed[idx];
+ head_dp = &vq->vq_packed.ring.desc_packed[idx];
head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- head_flags |= vq->cached_flags;
+ head_flags |= vq->vq_packed.cached_flags;
if (can_push) {
/* prepend cannot fail, checked by caller */
@@ -727,7 +727,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
@@ -741,14 +741,14 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
start_dp[idx].len = cookie->data_len;
if (likely(idx != head_idx)) {
flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- flags |= vq->cached_flags;
+ flags |= vq->vq_packed.cached_flags;
start_dp[idx].flags = flags;
}
prev = idx;
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
} while ((cookie = cookie->next) != NULL);
@@ -791,7 +791,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
dxp->cookie = (void *)cookie;
dxp->ndescs = needed;
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
if (can_push) {
/* prepend cannot fail, checked by caller */
@@ -844,7 +844,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
} while ((cookie = cookie->next) != NULL);
if (use_indirect)
- idx = vq->vq_ring.desc[head_idx].next;
+ idx = vq->vq_split.ring.desc[head_idx].next;
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
@@ -919,8 +919,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
if (hw->use_simple_rx) {
for (desc_idx = 0; desc_idx < vq->vq_nentries;
desc_idx++) {
- vq->vq_ring.avail->ring[desc_idx] = desc_idx;
- vq->vq_ring.desc[desc_idx].flags =
+ vq->vq_split.ring.avail->ring[desc_idx] = desc_idx;
+ vq->vq_split.ring.desc[desc_idx].flags =
VRING_DESC_F_WRITE;
}
@@ -1050,7 +1050,7 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
if (!vtpci_packed_queue(hw)) {
if (hw->use_inorder_tx)
- vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+ vq->vq_split.ring.desc[vq->vq_nentries - 1].next = 0;
}
VIRTQUEUE_DUMP(vq);
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index dc97e4ccf..3d1296a23 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -27,7 +27,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1);
sw_ring = &vq->sw_ring[desc_idx];
- start_dp = &vq->vq_ring.desc[desc_idx];
+ start_dp = &vq->vq_split.ring.desc[desc_idx];
ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
RTE_VIRTIO_VPMD_RX_REARM_THRESH);
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index d6207d7bb..cdc2a4d28 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -93,7 +93,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_used = RTE_MIN(nb_used, nb_pkts);
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- rused = &vq->vq_ring.used->ring[desc_idx];
+ rused = &vq->vq_split.ring.used->ring[desc_idx];
sw_ring = &vq->sw_ring[desc_idx];
sw_ring_end = &vq->sw_ring[vq->vq_nentries];
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index d768d0757..af76708d6 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -95,7 +95,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_used = RTE_MIN(nb_used, nb_pkts);
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- rused = &vq->vq_ring.used->ring[desc_idx];
+ rused = &vq->vq_split.ring.used->ring[desc_idx];
sw_ring = &vq->sw_ring[desc_idx];
sw_ring_end = &vq->sw_ring[vq->vq_nentries];
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 5b03f7a27..79491db32 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -61,7 +61,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq)
struct vq_desc_extra *dxp;
uint16_t i;
- struct vring_packed_desc *descs = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *descs = vq->vq_packed.ring.desc_packed;
int cnt = 0;
i = vq->vq_used_cons_idx;
@@ -75,7 +75,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq)
vq->vq_used_cons_idx++;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
i = vq->vq_used_cons_idx;
}
@@ -96,7 +96,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq)
for (i = 0; i < nb_used; i++) {
used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
if (hw->use_simple_rx) {
desc_idx = used_idx;
rte_pktmbuf_free(vq->sw_ring[desc_idx]);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 80c0c43c3..48b3912e6 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -191,17 +191,22 @@ struct vq_desc_extra {
struct virtqueue {
struct virtio_hw *hw; /**< virtio_hw structure pointer. */
- struct vring vq_ring; /**< vring keeping desc, used and avail */
- struct vring_packed ring_packed; /**< vring keeping descs */
- bool used_wrap_counter;
- uint16_t cached_flags; /**< cached flags for descs */
- uint16_t event_flags_shadow;
+ union {
+ struct {
+ /**< vring keeping desc, used and avail */
+ struct vring ring;
+ } vq_split;
- /**
- * Last consumed descriptor in the used table,
- * trails vq_ring.used->idx.
- */
- uint16_t vq_used_cons_idx;
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ } vq_packed;
+ };
+
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
uint16_t vq_nentries; /**< vring desc numbers */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_avail_idx; /**< sync until needed */
@@ -289,7 +294,7 @@ desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
used = !!(flags & VRING_DESC_F_USED(1));
avail = !!(flags & VRING_DESC_F_AVAIL(1));
- return avail == used && used == vq->used_wrap_counter;
+ return avail == used && used == vq->vq_packed.used_wrap_counter;
}
static inline void
@@ -297,10 +302,10 @@ vring_desc_init_packed(struct virtqueue *vq, int n)
{
int i;
for (i = 0; i < n - 1; i++) {
- vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc_packed[i].id = i;
vq->vq_descx[i].next = i + 1;
}
- vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc_packed[i].id = i;
vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
}
@@ -321,10 +326,10 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
static inline void
virtqueue_disable_intr_packed(struct virtqueue *vq)
{
- if (vq->event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
- vq->event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
- vq->ring_packed.driver_event->desc_event_flags =
- vq->event_flags_shadow;
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
}
}
@@ -337,7 +342,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
if (vtpci_packed_queue(vq->hw))
virtqueue_disable_intr_packed(vq);
else
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
/**
@@ -346,11 +351,10 @@ virtqueue_disable_intr(struct virtqueue *vq)
static inline void
virtqueue_enable_intr_packed(struct virtqueue *vq)
{
- uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
-
- if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
- vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
- *event_flags = vq->event_flags_shadow;
+ if (vq->vq_packed.event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
+ vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
}
}
@@ -360,7 +364,7 @@ virtqueue_enable_intr_packed(struct virtqueue *vq)
static inline void
virtqueue_enable_intr_split(struct virtqueue *vq)
{
- vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
+ vq->vq_split.ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
}
/**
@@ -404,7 +408,8 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
return VTNET_TQ;
}
-#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_split.ring.used->idx - \
+ (vq)->vq_used_cons_idx))
void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t used_idx);
@@ -415,7 +420,7 @@ static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
virtio_wmb(vq->hw->weak_barriers);
- vq->vq_ring.avail->idx = vq->vq_avail_idx;
+ vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
static inline void
@@ -430,8 +435,8 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
* descriptor.
*/
avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
- if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
- vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+ if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx))
+ vq->vq_split.ring.avail->ring[avail_idx] = desc_idx;
vq->vq_avail_idx++;
}
@@ -443,7 +448,7 @@ virtqueue_kick_prepare(struct virtqueue *vq)
* the used->flags.
*/
virtio_mb(vq->hw->weak_barriers);
- return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+ return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
static inline int
@@ -455,7 +460,7 @@ virtqueue_kick_prepare_packed(struct virtqueue *vq)
* Ensure updated data is visible to vhost before reading the flags.
*/
virtio_mb(vq->hw->weak_barriers);
- flags = vq->ring_packed.device_event->desc_event_flags;
+ flags = vq->vq_packed.ring.device_event->desc_event_flags;
return flags != RING_EVENT_FLAGS_DISABLE;
}
@@ -473,15 +478,15 @@ virtqueue_notify(struct virtqueue *vq)
#ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
#define VIRTQUEUE_DUMP(vq) do { \
uint16_t used_idx, nused; \
- used_idx = (vq)->vq_ring.used->idx; \
+ used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
if (vtpci_packed_queue((vq)->hw)) { \
PMD_INIT_LOG(DEBUG, \
"VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \
" cached_flags=0x%x; used_wrap_counter=%d", \
(vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
- (vq)->vq_avail_idx, (vq)->cached_flags, \
- (vq)->used_wrap_counter); \
+ (vq)->vq_avail_idx, (vq)->vq_packed.cached_flags, \
+ (vq)->vq_packed.used_wrap_counter); \
break; \
} \
PMD_INIT_LOG(DEBUG, \
@@ -489,9 +494,9 @@ virtqueue_notify(struct virtqueue *vq)
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
" avail.flags=0x%x; used.flags=0x%x", \
(vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
- (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
- (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
- (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+ (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \
+ (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
#else
#define VIRTQUEUE_DUMP(vq) do { } while (0)
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:44 ` Jens Freimann
2019-03-19 13:28 ` Maxime Coquelin
2 siblings, 0 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Put split ring and packed ring specific fields into separate
sub-structures, and also union them as they won't be available
at the same time.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
drivers/net/virtio/virtqueue.c | 6 +-
drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
7 files changed, 117 insertions(+), 109 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9060b6b33..bc91ad493 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -147,7 +147,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
{
struct virtqueue *vq = cvq->vq;
int head;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct virtio_pmd_ctrl *result;
uint16_t flags;
int sum = 0;
@@ -161,14 +161,14 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
* One RX packet for ACK.
*/
head = vq->vq_avail_idx;
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
desc[head].addr = cvq->virtio_net_hdr_mem;
desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -178,13 +178,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
desc[vq->vq_avail_idx].len = dlen[k];
desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
- vq->cached_flags;
+ vq->vq_packed.cached_flags;
sum += dlen[k];
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
@@ -192,12 +192,13 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
- desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE | vq->cached_flags;
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
vq->vq_free_cnt--;
nb_descs++;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -218,19 +219,19 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
vq->vq_used_cons_idx += nb_descs;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\n"
"vq->vq_avail_idx=%d\n"
"vq->vq_used_cons_idx=%d\n"
- "vq->cached_flags=0x%x\n"
- "vq->used_wrap_counter=%d\n",
+ "vq->vq_packed.cached_flags=0x%x\n"
+ "vq->vq_packed.used_wrap_counter=%d\n",
vq->vq_free_cnt,
vq->vq_avail_idx,
vq->vq_used_cons_idx,
- vq->cached_flags,
- vq->used_wrap_counter);
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
result = cvq->virtio_net_hdr_mz->addr;
return result;
@@ -280,30 +281,30 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
* At least one TX packet per argument;
* One RX packet for ACK.
*/
- vq->vq_ring.desc[head].flags = VRING_DESC_F_NEXT;
- vq->vq_ring.desc[head].addr = cvq->virtio_net_hdr_mem;
- vq->vq_ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->virtio_net_hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
vq->vq_free_cnt--;
- i = vq->vq_ring.desc[head].next;
+ i = vq->vq_split.ring.desc[head].next;
for (k = 0; k < pkt_num; k++) {
- vq->vq_ring.desc[i].flags = VRING_DESC_F_NEXT;
- vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr)
+ sizeof(ctrl->status) + sizeof(uint8_t)*sum;
- vq->vq_ring.desc[i].len = dlen[k];
+ vq->vq_split.ring.desc[i].len = dlen[k];
sum += dlen[k];
vq->vq_free_cnt--;
- i = vq->vq_ring.desc[i].next;
+ i = vq->vq_split.ring.desc[i].next;
}
- vq->vq_ring.desc[i].flags = VRING_DESC_F_WRITE;
- vq->vq_ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->virtio_net_hdr_mem
+ sizeof(struct virtio_net_ctrl_hdr);
- vq->vq_ring.desc[i].len = sizeof(ctrl->status);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->status);
vq->vq_free_cnt--;
- vq->vq_desc_head_idx = vq->vq_ring.desc[i].next;
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
vq_update_avail_ring(vq, head);
vq_update_avail_idx(vq);
@@ -324,16 +325,17 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
used_idx = (uint32_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
idx = (uint32_t) uep->id;
desc_idx = idx;
- while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
vq->vq_free_cnt++;
}
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
vq->vq_desc_head_idx = idx;
vq->vq_used_cons_idx++;
@@ -395,7 +397,6 @@ static void
virtio_init_vring(struct virtqueue *vq)
{
int size = vq->vq_nentries;
- struct vring *vr = &vq->vq_ring;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
PMD_INIT_FUNC_TRACE();
@@ -409,10 +410,12 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
if (vtpci_packed_queue(vq->hw)) {
- vring_init_packed(&vq->ring_packed, ring_mem,
+ vring_init_packed(&vq->vq_packed.ring, ring_mem,
VIRTIO_PCI_VRING_ALIGN, size);
vring_desc_init_packed(vq, size);
} else {
+ struct vring *vr = &vq->vq_split.ring;
+
vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
vring_desc_init_split(vr->desc, size);
}
@@ -487,12 +490,12 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
vq->hw = hw;
vq->vq_queue_index = vtpci_queue_idx;
vq->vq_nentries = vq_size;
- vq->event_flags_shadow = 0;
if (vtpci_packed_queue(hw)) {
- vq->used_wrap_counter = 1;
- vq->cached_flags = VRING_DESC_F_AVAIL(1);
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_DESC_F_AVAIL(1);
+ vq->vq_packed.event_flags_shadow = 0;
if (queue_type == VTNET_RQ)
- vq->cached_flags |= VRING_DESC_F_WRITE;
+ vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
}
/*
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 3c354baef..02f8d9451 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -62,13 +62,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
struct vq_desc_extra *dxp;
uint16_t desc_idx_last = desc_idx;
- dp = &vq->vq_ring.desc[desc_idx];
+ dp = &vq->vq_split.ring.desc[desc_idx];
dxp = &vq->vq_descx[desc_idx];
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
while (dp->flags & VRING_DESC_F_NEXT) {
desc_idx_last = dp->next;
- dp = &vq->vq_ring.desc[dp->next];
+ dp = &vq->vq_split.ring.desc[dp->next];
}
}
dxp->ndescs = 0;
@@ -81,7 +81,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
vq->vq_desc_head_idx = desc_idx;
} else {
- dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+ dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx];
dp_tail->next = desc_idx;
}
@@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
struct vring_packed_desc *desc;
uint16_t i;
- desc = vq->ring_packed.desc_packed;
+ desc = vq->vq_packed.ring.desc_packed;
for (i = 0; i < num; i++) {
used_idx = vq->vq_used_cons_idx;
@@ -141,7 +141,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
vq->vq_used_cons_idx++;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
}
@@ -160,7 +160,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
/* Caller does the check */
for (i = 0; i < num ; i++) {
used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t) uep->id;
len[i] = uep->len;
cookie = (struct rte_mbuf *)vq->vq_descx[desc_idx].cookie;
@@ -199,7 +199,7 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
for (i = 0; i < num; i++) {
used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
/* Desc idx same as used idx */
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
len[i] = uep->len;
cookie = (struct rte_mbuf *)vq->vq_descx[used_idx].cookie;
@@ -229,7 +229,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id, curr_id, free_cnt = 0;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -244,7 +244,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
num -= dxp->ndescs;
if (used_idx >= size) {
used_idx -= size;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
if (dxp->cookie != NULL) {
rte_pktmbuf_free(dxp->cookie);
@@ -261,7 +261,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -272,7 +272,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
vq->vq_used_cons_idx += dxp->ndescs;
if (vq->vq_used_cons_idx >= size) {
vq->vq_used_cons_idx -= size;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
vq_ring_free_id_packed(vq, id);
if (dxp->cookie != NULL) {
@@ -302,7 +302,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
struct vq_desc_extra *dxp;
used_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t) uep->id;
dxp = &vq->vq_descx[desc_idx];
@@ -356,7 +356,7 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
return -EMSGSIZE;
head_idx = vq->vq_desc_head_idx & (vq->vq_nentries - 1);
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
while (i < num) {
idx = head_idx & (vq->vq_nentries - 1);
@@ -389,7 +389,7 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
{
struct vq_desc_extra *dxp;
struct virtio_hw *hw = vq->hw;
- struct vring_desc *start_dp = vq->vq_ring.desc;
+ struct vring_desc *start_dp = vq->vq_split.ring.desc;
uint16_t idx, i;
if (unlikely(vq->vq_free_cnt == 0))
@@ -430,8 +430,8 @@ static inline int
virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
struct rte_mbuf **cookie, uint16_t num)
{
- struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
- uint16_t flags = vq->cached_flags;
+ struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc_packed;
+ uint16_t flags = vq->vq_packed.cached_flags;
struct virtio_hw *hw = vq->hw;
struct vq_desc_extra *dxp;
uint16_t idx;
@@ -460,9 +460,9 @@ virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
start_dp[idx].flags = flags;
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
}
}
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
@@ -589,7 +589,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
uint16_t i = 0;
idx = vq->vq_desc_head_idx;
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
while (i < num) {
idx = idx & (vq->vq_nentries - 1);
@@ -635,13 +635,13 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
idx = vq->vq_avail_idx;
- dp = &vq->ring_packed.desc_packed[idx];
+ dp = &vq->vq_packed.ring.desc_packed[idx];
dxp = &vq->vq_descx[id];
dxp->ndescs = 1;
dxp->cookie = cookie;
- flags = vq->cached_flags;
+ flags = vq->vq_packed.cached_flags;
/* prepend cannot fail, checked by caller */
hdr = (struct virtio_net_hdr *)
@@ -660,7 +660,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
if (++vq->vq_avail_idx >= vq->vq_nentries) {
vq->vq_avail_idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
@@ -698,11 +698,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
head_idx = vq->vq_avail_idx;
idx = head_idx;
prev = head_idx;
- start_dp = vq->ring_packed.desc_packed;
+ start_dp = vq->vq_packed.ring.desc_packed;
- head_dp = &vq->ring_packed.desc_packed[idx];
+ head_dp = &vq->vq_packed.ring.desc_packed[idx];
head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- head_flags |= vq->cached_flags;
+ head_flags |= vq->vq_packed.cached_flags;
if (can_push) {
/* prepend cannot fail, checked by caller */
@@ -727,7 +727,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
}
@@ -741,14 +741,14 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
start_dp[idx].len = cookie->data_len;
if (likely(idx != head_idx)) {
flags = cookie->next ? VRING_DESC_F_NEXT : 0;
- flags |= vq->cached_flags;
+ flags |= vq->vq_packed.cached_flags;
start_dp[idx].flags = flags;
}
prev = idx;
idx++;
if (idx >= vq->vq_nentries) {
idx -= vq->vq_nentries;
- vq->cached_flags ^=
+ vq->vq_packed.cached_flags ^=
VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
}
} while ((cookie = cookie->next) != NULL);
@@ -791,7 +791,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
dxp->cookie = (void *)cookie;
dxp->ndescs = needed;
- start_dp = vq->vq_ring.desc;
+ start_dp = vq->vq_split.ring.desc;
if (can_push) {
/* prepend cannot fail, checked by caller */
@@ -844,7 +844,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
} while ((cookie = cookie->next) != NULL);
if (use_indirect)
- idx = vq->vq_ring.desc[head_idx].next;
+ idx = vq->vq_split.ring.desc[head_idx].next;
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
@@ -919,8 +919,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
if (hw->use_simple_rx) {
for (desc_idx = 0; desc_idx < vq->vq_nentries;
desc_idx++) {
- vq->vq_ring.avail->ring[desc_idx] = desc_idx;
- vq->vq_ring.desc[desc_idx].flags =
+ vq->vq_split.ring.avail->ring[desc_idx] = desc_idx;
+ vq->vq_split.ring.desc[desc_idx].flags =
VRING_DESC_F_WRITE;
}
@@ -1050,7 +1050,7 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
if (!vtpci_packed_queue(hw)) {
if (hw->use_inorder_tx)
- vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+ vq->vq_split.ring.desc[vq->vq_nentries - 1].next = 0;
}
VIRTQUEUE_DUMP(vq);
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index dc97e4ccf..3d1296a23 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -27,7 +27,7 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
desc_idx = vq->vq_avail_idx & (vq->vq_nentries - 1);
sw_ring = &vq->sw_ring[desc_idx];
- start_dp = &vq->vq_ring.desc[desc_idx];
+ start_dp = &vq->vq_split.ring.desc[desc_idx];
ret = rte_mempool_get_bulk(rxvq->mpool, (void **)sw_ring,
RTE_VIRTIO_VPMD_RX_REARM_THRESH);
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index d6207d7bb..cdc2a4d28 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -93,7 +93,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_used = RTE_MIN(nb_used, nb_pkts);
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- rused = &vq->vq_ring.used->ring[desc_idx];
+ rused = &vq->vq_split.ring.used->ring[desc_idx];
sw_ring = &vq->sw_ring[desc_idx];
sw_ring_end = &vq->sw_ring[vq->vq_nentries];
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index d768d0757..af76708d6 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -95,7 +95,7 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_used = RTE_MIN(nb_used, nb_pkts);
desc_idx = (uint16_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
- rused = &vq->vq_ring.used->ring[desc_idx];
+ rused = &vq->vq_split.ring.used->ring[desc_idx];
sw_ring = &vq->sw_ring[desc_idx];
sw_ring_end = &vq->sw_ring[vq->vq_nentries];
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 5b03f7a27..79491db32 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -61,7 +61,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq)
struct vq_desc_extra *dxp;
uint16_t i;
- struct vring_packed_desc *descs = vq->ring_packed.desc_packed;
+ struct vring_packed_desc *descs = vq->vq_packed.ring.desc_packed;
int cnt = 0;
i = vq->vq_used_cons_idx;
@@ -75,7 +75,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq)
vq->vq_used_cons_idx++;
if (vq->vq_used_cons_idx >= vq->vq_nentries) {
vq->vq_used_cons_idx -= vq->vq_nentries;
- vq->used_wrap_counter ^= 1;
+ vq->vq_packed.used_wrap_counter ^= 1;
}
i = vq->vq_used_cons_idx;
}
@@ -96,7 +96,7 @@ virtqueue_rxvq_flush_split(struct virtqueue *vq)
for (i = 0; i < nb_used; i++) {
used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
if (hw->use_simple_rx) {
desc_idx = used_idx;
rte_pktmbuf_free(vq->sw_ring[desc_idx]);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 80c0c43c3..48b3912e6 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -191,17 +191,22 @@ struct vq_desc_extra {
struct virtqueue {
struct virtio_hw *hw; /**< virtio_hw structure pointer. */
- struct vring vq_ring; /**< vring keeping desc, used and avail */
- struct vring_packed ring_packed; /**< vring keeping descs */
- bool used_wrap_counter;
- uint16_t cached_flags; /**< cached flags for descs */
- uint16_t event_flags_shadow;
+ union {
+ struct {
+ /**< vring keeping desc, used and avail */
+ struct vring ring;
+ } vq_split;
- /**
- * Last consumed descriptor in the used table,
- * trails vq_ring.used->idx.
- */
- uint16_t vq_used_cons_idx;
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ } vq_packed;
+ };
+
+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
uint16_t vq_nentries; /**< vring desc numbers */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_avail_idx; /**< sync until needed */
@@ -289,7 +294,7 @@ desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
used = !!(flags & VRING_DESC_F_USED(1));
avail = !!(flags & VRING_DESC_F_AVAIL(1));
- return avail == used && used == vq->used_wrap_counter;
+ return avail == used && used == vq->vq_packed.used_wrap_counter;
}
static inline void
@@ -297,10 +302,10 @@ vring_desc_init_packed(struct virtqueue *vq, int n)
{
int i;
for (i = 0; i < n - 1; i++) {
- vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc_packed[i].id = i;
vq->vq_descx[i].next = i + 1;
}
- vq->ring_packed.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc_packed[i].id = i;
vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
}
@@ -321,10 +326,10 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
static inline void
virtqueue_disable_intr_packed(struct virtqueue *vq)
{
- if (vq->event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
- vq->event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
- vq->ring_packed.driver_event->desc_event_flags =
- vq->event_flags_shadow;
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
}
}
@@ -337,7 +342,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
if (vtpci_packed_queue(vq->hw))
virtqueue_disable_intr_packed(vq);
else
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
/**
@@ -346,11 +351,10 @@ virtqueue_disable_intr(struct virtqueue *vq)
static inline void
virtqueue_enable_intr_packed(struct virtqueue *vq)
{
- uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
-
- if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
- vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
- *event_flags = vq->event_flags_shadow;
+ if (vq->vq_packed.event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
+ vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
}
}
@@ -360,7 +364,7 @@ virtqueue_enable_intr_packed(struct virtqueue *vq)
static inline void
virtqueue_enable_intr_split(struct virtqueue *vq)
{
- vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
+ vq->vq_split.ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
}
/**
@@ -404,7 +408,8 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
return VTNET_TQ;
}
-#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+#define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_split.ring.used->idx - \
+ (vq)->vq_used_cons_idx))
void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t used_idx);
@@ -415,7 +420,7 @@ static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
virtio_wmb(vq->hw->weak_barriers);
- vq->vq_ring.avail->idx = vq->vq_avail_idx;
+ vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
static inline void
@@ -430,8 +435,8 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
* descriptor.
*/
avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
- if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
- vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+ if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx))
+ vq->vq_split.ring.avail->ring[avail_idx] = desc_idx;
vq->vq_avail_idx++;
}
@@ -443,7 +448,7 @@ virtqueue_kick_prepare(struct virtqueue *vq)
* the used->flags.
*/
virtio_mb(vq->hw->weak_barriers);
- return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+ return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
static inline int
@@ -455,7 +460,7 @@ virtqueue_kick_prepare_packed(struct virtqueue *vq)
* Ensure updated data is visible to vhost before reading the flags.
*/
virtio_mb(vq->hw->weak_barriers);
- flags = vq->ring_packed.device_event->desc_event_flags;
+ flags = vq->vq_packed.ring.device_event->desc_event_flags;
return flags != RING_EVENT_FLAGS_DISABLE;
}
@@ -473,15 +478,15 @@ virtqueue_notify(struct virtqueue *vq)
#ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
#define VIRTQUEUE_DUMP(vq) do { \
uint16_t used_idx, nused; \
- used_idx = (vq)->vq_ring.used->idx; \
+ used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
if (vtpci_packed_queue((vq)->hw)) { \
PMD_INIT_LOG(DEBUG, \
"VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \
" cached_flags=0x%x; used_wrap_counter=%d", \
(vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
- (vq)->vq_avail_idx, (vq)->cached_flags, \
- (vq)->used_wrap_counter); \
+ (vq)->vq_avail_idx, (vq)->vq_packed.cached_flags, \
+ (vq)->vq_packed.used_wrap_counter); \
break; \
} \
PMD_INIT_LOG(DEBUG, \
@@ -489,9 +494,9 @@ virtqueue_notify(struct virtqueue *vq)
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
" avail.flags=0x%x; used.flags=0x%x", \
(vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
- (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
- (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
- (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+ (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \
+ (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
#else
#define VIRTQUEUE_DUMP(vq) do { } while (0)
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
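Taken together, the hunks above keep rewriting the same two pieces of packed-ring bookkeeping: the cached AVAIL/USED flag template that is XOR-toggled when vq_avail_idx wraps, and the used_wrap_counter that desc_is_used() compares against. The following minimal, self-contained sketch shows that logic in isolation using the renamed vq_packed fields. The structures and macros are simplified stand-ins for illustration only, not the driver's real declarations; the bit positions follow the virtio spec (AVAIL = bit 7, USED = bit 15).

	#include <stdbool.h>
	#include <stdint.h>

	/* Stand-ins for VRING_DESC_F_AVAIL()/VRING_DESC_F_USED(). */
	#define DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
	#define DESC_F_USED(b)  ((uint16_t)(b) << 15)

	struct packed_desc_sketch {
		uint64_t addr;
		uint32_t len;
		uint16_t id;
		uint16_t flags;
	};

	struct vq_sketch {
		uint16_t vq_nentries;      /* ring size */
		uint16_t vq_avail_idx;     /* next slot the driver fills */
		uint16_t vq_used_cons_idx; /* next slot the driver consumes */
		struct {
			bool used_wrap_counter; /* flips every lap around the ring */
			uint16_t cached_flags;  /* AVAIL/USED template for new descs */
		} vq_packed;
	};

	/* Driver fills a slot: advance the avail index and, on wrap-around,
	 * flip the cached AVAIL/USED bits used for the next lap (the
	 * "cached_flags ^= AVAIL | USED" pattern repeated in the diff). */
	void advance_avail(struct vq_sketch *vq)
	{
		if (++vq->vq_avail_idx >= vq->vq_nentries) {
			vq->vq_avail_idx -= vq->vq_nentries;
			vq->vq_packed.cached_flags ^=
				DESC_F_AVAIL(1) | DESC_F_USED(1);
		}
	}

	/* A descriptor is "used" when AVAIL == USED and both equal the wrap
	 * counter the driver expects for the current lap (cf. desc_is_used()). */
	bool desc_is_used_sketch(const struct packed_desc_sketch *d,
				 const struct vq_sketch *vq)
	{
		bool used  = !!(d->flags & DESC_F_USED(1));
		bool avail = !!(d->flags & DESC_F_AVAIL(1));

		return avail == used && used == vq->vq_packed.used_wrap_counter;
	}

	/* Consuming a used descriptor mirrors the same wrap handling. */
	void advance_used(struct vq_sketch *vq)
	{
		if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
			vq->vq_used_cons_idx -= vq->vq_nentries;
			vq->vq_packed.used_wrap_counter ^= 1;
		}
	}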
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 9:44 ` Jens Freimann
2019-03-19 9:44 ` Jens Freimann
2019-03-19 10:09 ` Tiwei Bie
2019-03-19 13:28 ` Maxime Coquelin
2 siblings, 2 replies; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 9:44 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
>Put split ring and packed ring specific fields into separate
>sub-structures, and also union them as they won't be available
>at the same time.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> drivers/net/virtio/virtqueue.c | 6 +-
> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
> 7 files changed, 117 insertions(+), 109 deletions(-)
>
[snip]
...
>diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>index 80c0c43c3..48b3912e6 100644
>--- a/drivers/net/virtio/virtqueue.h
>+++ b/drivers/net/virtio/virtqueue.h
>@@ -191,17 +191,22 @@ struct vq_desc_extra {
>
> struct virtqueue {
> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
>- struct vring vq_ring; /**< vring keeping desc, used and avail */
>- struct vring_packed ring_packed; /**< vring keeping descs */
>- bool used_wrap_counter;
>- uint16_t cached_flags; /**< cached flags for descs */
>- uint16_t event_flags_shadow;
>+ union {
>+ struct {
>+ /**< vring keeping desc, used and avail */
>+ struct vring ring;
>+ } vq_split;
>
>- /**
>- * Last consumed descriptor in the used table,
>- * trails vq_ring.used->idx.
>- */
>- uint16_t vq_used_cons_idx;
>+ struct {
>+ /**< vring keeping descs and events */
>+ struct vring_packed ring;
>+ bool used_wrap_counter;
>+ uint16_t cached_flags; /**< cached flags for descs */
>+ uint16_t event_flags_shadow;
>+ } vq_packed;
>+ };
>+
>+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
> uint16_t vq_nentries; /**< vring desc numbers */
> uint16_t vq_free_cnt; /**< num of desc available */
> uint16_t vq_avail_idx; /**< sync until needed */
Honest question: What do we really gain by putting it in a union? We
save a little memory. But we also make code less readable IMHO.
If we do this, can we at least shorten some variable names, like drop
the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
vq->packed* we don't lose any context).
I'm not strictly against this change but I'm wondering if it's worth
it.
regards,
Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
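On the memory half of the question above: with a union, struct virtqueue only pays for the larger of the two ring layouts, whereas keeping both sets of fields side by side pays for their sum. A reduced, hypothetical stand-in (not the real virtqueue) is enough to see the effect; it assumes C11 anonymous unions, as used elsewhere in DPDK.

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Reduced stand-ins: just enough fields to compare the two layouts. */
	struct split_state {
		void *desc;
		void *avail;
		void *used;
	};

	struct packed_state {
		void *desc;
		void *driver_event;
		void *device_event;
		bool used_wrap_counter;
		uint16_t cached_flags;
		uint16_t event_flags_shadow;
	};

	struct vq_side_by_side {  /* pre-refactor style: both layouts present */
		struct split_state split;
		struct packed_state packed;
	};

	struct vq_with_union {    /* patch style: only one layout at a time */
		union {
			struct split_state vq_split;
			struct packed_state vq_packed;
		};
	};

	int main(void)
	{
		printf("side by side: %zu bytes, union: %zu bytes\n",
		       sizeof(struct vq_side_by_side),
		       sizeof(struct vq_with_union));
		return 0;
	}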
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 9:44 ` Jens Freimann
2019-03-19 9:44 ` Jens Freimann
@ 2019-03-19 10:09 ` Tiwei Bie
2019-03-19 10:09 ` Tiwei Bie
2019-03-19 13:28 ` Maxime Coquelin
1 sibling, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 10:09 UTC (permalink / raw)
To: Jens Freimann; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
> On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
> > Put split ring and packed ring specific fields into separate
> > sub-structures, and also union them as they won't be available
> > at the same time.
> >
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> > drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
> > drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
> > drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> > drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> > drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> > drivers/net/virtio/virtqueue.c | 6 +-
> > drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
> > 7 files changed, 117 insertions(+), 109 deletions(-)
> >
> [snip]
> ...
> > diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> > index 80c0c43c3..48b3912e6 100644
> > --- a/drivers/net/virtio/virtqueue.h
> > +++ b/drivers/net/virtio/virtqueue.h
> > @@ -191,17 +191,22 @@ struct vq_desc_extra {
> >
> > struct virtqueue {
> > struct virtio_hw *hw; /**< virtio_hw structure pointer. */
> > - struct vring vq_ring; /**< vring keeping desc, used and avail */
> > - struct vring_packed ring_packed; /**< vring keeping descs */
> > - bool used_wrap_counter;
> > - uint16_t cached_flags; /**< cached flags for descs */
> > - uint16_t event_flags_shadow;
> > + union {
> > + struct {
> > + /**< vring keeping desc, used and avail */
> > + struct vring ring;
> > + } vq_split;
> >
> > - /**
> > - * Last consumed descriptor in the used table,
> > - * trails vq_ring.used->idx.
> > - */
> > - uint16_t vq_used_cons_idx;
> > + struct {
> > + /**< vring keeping descs and events */
> > + struct vring_packed ring;
> > + bool used_wrap_counter;
> > + uint16_t cached_flags; /**< cached flags for descs */
> > + uint16_t event_flags_shadow;
> > + } vq_packed;
> > + };
> > +
> > + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
> > uint16_t vq_nentries; /**< vring desc numbers */
> > uint16_t vq_free_cnt; /**< num of desc available */
> > uint16_t vq_avail_idx; /**< sync until needed */
>
> Honest question: What do we really gain by putting it in a union? We
> save a little memory. But we also make code less readable IMHO.
I think it will make it clear that fields like used_wrap_counter
are only available in packed ring which will make the code more
readable.
>
> If we do this, can we at least shorten some variable names, like drop
> the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
> vq->packed* we don't lose any context).
I prefer to have consistent prefix like most fields in this
structure (although some fields don't really follow this).
Thanks,
Tiwei
>
> I'm not strictly against this change but I'm wondering if it's worth
> it.
>
> regards,
> Jens
>
^ permalink raw reply [flat|nested] 88+ messages in thread
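To illustrate the readability point made above in one place: with the union, a helper that serves both layouts ends up spelling out which layout it touches, so split-only and packed-only state cannot be mixed up silently. The sketch below simply folds the two interrupt-disable paths from this series into a single function; it assumes the declarations from virtqueue.h as modified by the patch and is illustrative rather than a proposed change.

	#include "virtqueue.h"  /* struct virtqueue, RING_EVENT_FLAGS_*, etc. */

	/* Folded view of virtqueue_disable_intr{,_packed}() from the patch:
	 * the branch on the ring type selects which union member is valid. */
	void
	virtqueue_disable_intr_sketch(struct virtqueue *vq)
	{
		if (vtpci_packed_queue(vq->hw)) {
			/* Packed ring: only vq->vq_packed.* may be touched here. */
			if (vq->vq_packed.event_flags_shadow !=
					RING_EVENT_FLAGS_DISABLE) {
				vq->vq_packed.event_flags_shadow =
					RING_EVENT_FLAGS_DISABLE;
				vq->vq_packed.ring.driver_event->desc_event_flags =
					vq->vq_packed.event_flags_shadow;
			}
		} else {
			/* Split ring: only vq->vq_split.* may be touched here. */
			vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
		}
	}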
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 10:09 ` Tiwei Bie
2019-03-19 10:09 ` Tiwei Bie
@ 2019-03-19 13:28 ` Maxime Coquelin
2019-03-19 13:28 ` Maxime Coquelin
` (2 more replies)
1 sibling, 3 replies; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:28 UTC (permalink / raw)
To: Tiwei Bie, Jens Freimann; +Cc: zhihong.wang, dev
On 3/19/19 11:09 AM, Tiwei Bie wrote:
> On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
>> On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
>>> Put split ring and packed ring specific fields into separate
>>> sub-structures, and also union them as they won't be available
>>> at the same time.
>>>
>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>> ---
>>> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
>>> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
>>> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
>>> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
>>> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
>>> drivers/net/virtio/virtqueue.c | 6 +-
>>> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
>>> 7 files changed, 117 insertions(+), 109 deletions(-)
>>>
>> [snip]
>> ...
>>> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>> index 80c0c43c3..48b3912e6 100644
>>> --- a/drivers/net/virtio/virtqueue.h
>>> +++ b/drivers/net/virtio/virtqueue.h
>>> @@ -191,17 +191,22 @@ struct vq_desc_extra {
>>>
>>> struct virtqueue {
>>> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
>>> - struct vring vq_ring; /**< vring keeping desc, used and avail */
>>> - struct vring_packed ring_packed; /**< vring keeping descs */
>>> - bool used_wrap_counter;
>>> - uint16_t cached_flags; /**< cached flags for descs */
>>> - uint16_t event_flags_shadow;
>>> + union {
>>> + struct {
>>> + /**< vring keeping desc, used and avail */
>>> + struct vring ring;
>>> + } vq_split;
>>>
>>> - /**
>>> - * Last consumed descriptor in the used table,
>>> - * trails vq_ring.used->idx.
>>> - */
>>> - uint16_t vq_used_cons_idx;
>>> + struct {
>>> + /**< vring keeping descs and events */
>>> + struct vring_packed ring;
>>> + bool used_wrap_counter;
>>> + uint16_t cached_flags; /**< cached flags for descs */
>>> + uint16_t event_flags_shadow;
>>> + } vq_packed;
>>> + };
>>> +
>>> + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
>>> uint16_t vq_nentries; /**< vring desc numbers */
>>> uint16_t vq_free_cnt; /**< num of desc available */
>>> uint16_t vq_avail_idx; /**< sync until needed */
>>
>> Honest question: What do we really gain by putting it in a union? We
>> save a little memory. But we also make code less readable IMHO.
>
> I think it will make it clear that fields like used_wrap_counter
> are only available in packed ring which will make the code more
> readable.
>
>>
>> If we do this, can we at least shorten some variable names, like drop
>> the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
>> vq->packed* we don't lose any context).
>
> I prefer to have consistent prefix like most fields in this
> structure (although some fields don't really follow this).
As Jens, I tend to agree that the vq_ prefix is quite redundant.
However, I think it is better to keep it in this patch for consistency.
Maybe it can be removed in a separate patch later?
> Thanks,
> Tiwei
>
>>
>> I'm not strictly against this change but I'm wondering if it's worth
>> it.
>>
>> regards,
>> Jens
>>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 13:28 ` Maxime Coquelin
2019-03-19 13:28 ` Maxime Coquelin
@ 2019-03-19 13:47 ` Jens Freimann
2019-03-19 13:47 ` Jens Freimann
2019-03-19 13:50 ` Maxime Coquelin
2019-03-20 4:35 ` Tiwei Bie
2 siblings, 2 replies; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 13:47 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: Tiwei Bie, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:28:30PM +0100, Maxime Coquelin wrote:
>
>
>On 3/19/19 11:09 AM, Tiwei Bie wrote:
>>On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
>>>On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
>>>>Put split ring and packed ring specific fields into separate
>>>>sub-structures, and also union them as they won't be available
>>>>at the same time.
>>>>
>>>>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>>>---
>>>>drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
>>>>drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
>>>>drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
>>>>drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
>>>>drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
>>>>drivers/net/virtio/virtqueue.c | 6 +-
>>>>drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
>>>>7 files changed, 117 insertions(+), 109 deletions(-)
>>>>
>>>[snip]
>>>...
>>>>diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>>>index 80c0c43c3..48b3912e6 100644
>>>>--- a/drivers/net/virtio/virtqueue.h
>>>>+++ b/drivers/net/virtio/virtqueue.h
>>>>@@ -191,17 +191,22 @@ struct vq_desc_extra {
>>>>
>>>>struct virtqueue {
>>>> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
>>>>- struct vring vq_ring; /**< vring keeping desc, used and avail */
>>>>- struct vring_packed ring_packed; /**< vring keeping descs */
>>>>- bool used_wrap_counter;
>>>>- uint16_t cached_flags; /**< cached flags for descs */
>>>>- uint16_t event_flags_shadow;
>>>>+ union {
>>>>+ struct {
>>>>+ /**< vring keeping desc, used and avail */
>>>>+ struct vring ring;
>>>>+ } vq_split;
>>>>
>>>>- /**
>>>>- * Last consumed descriptor in the used table,
>>>>- * trails vq_ring.used->idx.
>>>>- */
>>>>- uint16_t vq_used_cons_idx;
>>>>+ struct {
>>>>+ /**< vring keeping descs and events */
>>>>+ struct vring_packed ring;
>>>>+ bool used_wrap_counter;
>>>>+ uint16_t cached_flags; /**< cached flags for descs */
>>>>+ uint16_t event_flags_shadow;
>>>>+ } vq_packed;
>>>>+ };
>>>>+
>>>>+ uint16_t vq_used_cons_idx; /**< last consumed descriptor */
>>>> uint16_t vq_nentries; /**< vring desc numbers */
>>>> uint16_t vq_free_cnt; /**< num of desc available */
>>>> uint16_t vq_avail_idx; /**< sync until needed */
>>>
>>>Honest question: What do we really gain by putting it in a union? We
>>>save a little memory. But we also make code less readable IMHO.
>>
>>I think it will make it clear that fields like used_wrap_counter
>>are only available in packed ring which will make the code more
>>readable.
>>
>>>
>>>If we do this, can we at least shorten some variable names, like drop
>>>the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
>>>vq->packed* we don't lose any context).
>>
>>I prefer to have consistent prefix like most fields in this
>>structure (although some fields don't really follow this).
>
>As Jens, I tend to agree that the vq_ prefix is quite redundant.
>However, I think it is better to keep it in this patch for consistency.
>
>Maybe it can be removed in a separate patch later?
I thought it might be convenient to change it now as we are touching
all related code anyway. But I also don't want to block the patch because of
this cosmetic thing. So let's defer it to a later patch set.
regards,
Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 13:47 ` Jens Freimann
2019-03-19 13:47 ` Jens Freimann
@ 2019-03-19 13:50 ` Maxime Coquelin
2019-03-19 13:50 ` Maxime Coquelin
2019-03-19 14:59 ` Kevin Traynor
1 sibling, 2 replies; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:50 UTC (permalink / raw)
To: Jens Freimann; +Cc: Tiwei Bie, zhihong.wang, dev
On 3/19/19 2:47 PM, Jens Freimann wrote:
> On Tue, Mar 19, 2019 at 02:28:30PM +0100, Maxime Coquelin wrote:
>>
>>
>> On 3/19/19 11:09 AM, Tiwei Bie wrote:
>>> On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
>>>> On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
>>>>> Put split ring and packed ring specific fields into separate
>>>>> sub-structures, and also union them as they won't be available
>>>>> at the same time.
>>>>>
>>>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>>>> ---
>>>>> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
>>>>> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
>>>>> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
>>>>> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
>>>>> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
>>>>> drivers/net/virtio/virtqueue.c | 6 +-
>>>>> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
>>>>> 7 files changed, 117 insertions(+), 109 deletions(-)
>>>>>
>>>> [snip]
>>>> ...
>>>>> diff --git a/drivers/net/virtio/virtqueue.h
>>>>> b/drivers/net/virtio/virtqueue.h
>>>>> index 80c0c43c3..48b3912e6 100644
>>>>> --- a/drivers/net/virtio/virtqueue.h
>>>>> +++ b/drivers/net/virtio/virtqueue.h
>>>>> @@ -191,17 +191,22 @@ struct vq_desc_extra {
>>>>>
>>>>> struct virtqueue {
>>>>> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
>>>>> - struct vring vq_ring; /**< vring keeping desc, used and avail */
>>>>> - struct vring_packed ring_packed; /**< vring keeping descs */
>>>>> - bool used_wrap_counter;
>>>>> - uint16_t cached_flags; /**< cached flags for descs */
>>>>> - uint16_t event_flags_shadow;
>>>>> + union {
>>>>> + struct {
>>>>> + /**< vring keeping desc, used and avail */
>>>>> + struct vring ring;
>>>>> + } vq_split;
>>>>>
>>>>> - /**
>>>>> - * Last consumed descriptor in the used table,
>>>>> - * trails vq_ring.used->idx.
>>>>> - */
>>>>> - uint16_t vq_used_cons_idx;
>>>>> + struct {
>>>>> + /**< vring keeping descs and events */
>>>>> + struct vring_packed ring;
>>>>> + bool used_wrap_counter;
>>>>> + uint16_t cached_flags; /**< cached flags for descs */
>>>>> + uint16_t event_flags_shadow;
>>>>> + } vq_packed;
>>>>> + };
>>>>> +
>>>>> + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
>>>>> uint16_t vq_nentries; /**< vring desc numbers */
>>>>> uint16_t vq_free_cnt; /**< num of desc available */
>>>>> uint16_t vq_avail_idx; /**< sync until needed */
>>>>
>>>> Honest question: What do we really gain by putting it in a union? We
>>>> save a little memory. But we also make code less readable IMHO.
>>>
>>> I think it will make it clear that fields like used_wrap_counter
>>> are only available in packed ring which will make the code more
>>> readable.
>>>
>>>>
>>>> If we do this, can we at least shorten some variable names, like drop
>>>> the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
>>>> vq->packed* we don't lose any context).
>>>
>>> I prefer to have consistent prefix like most fields in this
>>> structure (although some fields don't really follow this).
>>
>> As Jens, I tend to agree that the vq_ prefix is quite redundant.
>> However, I think it is better to keep it in this patch for consistency.
>>
>> Maybe it can be removed in a separate patch later?
>
> I thought it might be convenient to change it now as we are touching
> all related code anyway. But I also don't want to block the patch
> because of
> this cosmetic thing. So let's defer it to a later patch set.
OK, when I said later, I meant removing the vq_ prefix for all fields, not
only vq_split & vq_packed.
But yes, that's just cosmetic so let's keep it as is for now.
>
> regards,
> Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 13:50 ` Maxime Coquelin
2019-03-19 13:50 ` Maxime Coquelin
@ 2019-03-19 14:59 ` Kevin Traynor
2019-03-19 14:59 ` Kevin Traynor
2019-03-20 4:40 ` Tiwei Bie
1 sibling, 2 replies; 88+ messages in thread
From: Kevin Traynor @ 2019-03-19 14:59 UTC (permalink / raw)
To: Maxime Coquelin, Jens Freimann; +Cc: Tiwei Bie, zhihong.wang, dev
On 19/03/2019 13:50, Maxime Coquelin wrote:
>
>
> On 3/19/19 2:47 PM, Jens Freimann wrote:
>> On Tue, Mar 19, 2019 at 02:28:30PM +0100, Maxime Coquelin wrote:
>>>
>>>
>>> On 3/19/19 11:09 AM, Tiwei Bie wrote:
>>>> On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
>>>>> On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
>>>>>> Put split ring and packed ring specific fields into separate
>>>>>> sub-structures, and also union them as they won't be available
>>>>>> at the same time.
>>>>>>
>>>>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>>>>> ---
>>>>>> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
>>>>>> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
>>>>>> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
>>>>>> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
>>>>>> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
>>>>>> drivers/net/virtio/virtqueue.c | 6 +-
>>>>>> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
>>>>>> 7 files changed, 117 insertions(+), 109 deletions(-)
>>>>>>
>>>>> [snip]
>>>>> ...
>>>>>> diff --git a/drivers/net/virtio/virtqueue.h
>>>>>> b/drivers/net/virtio/virtqueue.h
>>>>>> index 80c0c43c3..48b3912e6 100644
>>>>>> --- a/drivers/net/virtio/virtqueue.h
>>>>>> +++ b/drivers/net/virtio/virtqueue.h
>>>>>> @@ -191,17 +191,22 @@ struct vq_desc_extra {
>>>>>>
>>>>>> struct virtqueue {
>>>>>> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
>>>>>> - struct vring vq_ring; /**< vring keeping desc, used and avail */
>>>>>> - struct vring_packed ring_packed; /**< vring keeping descs */
>>>>>> - bool used_wrap_counter;
>>>>>> - uint16_t cached_flags; /**< cached flags for descs */
>>>>>> - uint16_t event_flags_shadow;
>>>>>> + union {
>>>>>> + struct {
>>>>>> + /**< vring keeping desc, used and avail */
>>>>>> + struct vring ring;
>>>>>> + } vq_split;
>>>>>>
>>>>>> - /**
>>>>>> - * Last consumed descriptor in the used table,
>>>>>> - * trails vq_ring.used->idx.
>>>>>> - */
>>>>>> - uint16_t vq_used_cons_idx;
>>>>>> + struct {
>>>>>> + /**< vring keeping descs and events */
>>>>>> + struct vring_packed ring;
>>>>>> + bool used_wrap_counter;
>>>>>> + uint16_t cached_flags; /**< cached flags for descs */
>>>>>> + uint16_t event_flags_shadow;
>>>>>> + } vq_packed;
>>>>>> + };
>>>>>> +
>>>>>> + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
>>>>>> uint16_t vq_nentries; /**< vring desc numbers */
>>>>>> uint16_t vq_free_cnt; /**< num of desc available */
>>>>>> uint16_t vq_avail_idx; /**< sync until needed */
>>>>>
>>>>> Honest question: What do we really gain by putting it in a union? We
>>>>> save a little memory. But we also make code less readable IMHO.
>>>>
>>>> I think it will make it clear that fields like used_wrap_counter
>>>> are only available in packed ring which will make the code more
>>>> readable.
>>>>
>>>>>
>>>>> If we do this, can we at least shorten some variable names, like drop
>>>>> the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
>>>>> vq->packed* we don't lose any context).
>>>>
>>>> I prefer to have consistent prefix like most fields in this
>>>> structure (although some fields don't really follow this).
>>>
>>> As Jens, I tend to agree that the vq_ prefix is quite redundant.
>>> However, I think it is better to keep it in this patch for consistency.
>>>
>>> Maybe it can be removed in a separate patch later?
>>
>> I thought it might be convenient to change it now as we are touching
>> all related code anyway. But I also don't want to block the patch
>> because of
>> this cosmetic thing. So let's defer it to a later patch set.
>
> OK, when I said later, I meant removing the vq_ prefix for all fields, not
> only vq_split & vq_packed.
>
> But yes, that's just cosmetic so let's keep it as is for now.
>
I agree the vq_ prefix is not needed and I think the code is more
readable in general seeing the packed/split name when using the struct.
Please also consider that cosmetic changes in multiple places likely
mean backports will not apply cleanly to the stable branches anymore, so
it does have a cost. Although in this case, iirc packed rings are not in
18.11, so fixes might need dedicated backports from authors anyway, and
there haven't been too many virtio backports to date.
>>
>> regards,
>> Jens
^ permalink raw reply [flat|nested] 88+ messages in thread
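For readers following the union discussion above, here is a minimal compilable sketch of the layout being debated. The vring types and the surrounding virtqueue fields are reduced to stand-ins for illustration; the real definitions live in virtio_ring.h and virtqueue.h.

#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the real vring types; only the overall shape matters here. */
struct vring        { unsigned int num; };
struct vring_packed { unsigned int num; };

struct virtqueue_sketch {
        union {
                struct {
                        struct vring ring;        /* desc, avail and used rings */
                } vq_split;
                struct {
                        struct vring_packed ring; /* descs and events */
                        bool used_wrap_counter;
                        uint16_t cached_flags;
                        uint16_t event_flags_shadow;
                } vq_packed;
        };
        uint16_t vq_used_cons_idx;                /* shared by both layouts */
};

Because the two sub-structures share storage, a given virtqueue can only meaningfully use one of them at a time, which is the readability argument made above: vq->vq_packed.used_wrap_counter visibly belongs to the packed layout, and an accidental use on a split ring stands out in review.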
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 14:59 ` Kevin Traynor
2019-03-19 14:59 ` Kevin Traynor
@ 2019-03-20 4:40 ` Tiwei Bie
2019-03-20 4:40 ` Tiwei Bie
2019-03-20 17:50 ` Stephen Hemminger
1 sibling, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-20 4:40 UTC (permalink / raw)
To: Kevin Traynor; +Cc: Maxime Coquelin, Jens Freimann, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:59:38PM +0000, Kevin Traynor wrote:
> On 19/03/2019 13:50, Maxime Coquelin wrote:
> >
> >
> > On 3/19/19 2:47 PM, Jens Freimann wrote:
> >> On Tue, Mar 19, 2019 at 02:28:30PM +0100, Maxime Coquelin wrote:
> >>>
> >>>
> >>> On 3/19/19 11:09 AM, Tiwei Bie wrote:
> >>>> On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
> >>>>> On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
> >>>>>> Put split ring and packed ring specific fields into separate
> >>>>>> sub-structures, and also union them as they won't be available
> >>>>>> at the same time.
> >>>>>>
> >>>>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> >>>>>> ---
> >>>>>> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
> >>>>>> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
> >>>>>> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> >>>>>> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> >>>>>> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> >>>>>> drivers/net/virtio/virtqueue.c | 6 +-
> >>>>>> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
> >>>>>> 7 files changed, 117 insertions(+), 109 deletions(-)
> >>>>>>
> >>>>> [snip]
> >>>>> ...
> >>>>>> diff --git a/drivers/net/virtio/virtqueue.h
> >>>>>> b/drivers/net/virtio/virtqueue.h
> >>>>>> index 80c0c43c3..48b3912e6 100644
> >>>>>> --- a/drivers/net/virtio/virtqueue.h
> >>>>>> +++ b/drivers/net/virtio/virtqueue.h
> >>>>>> @@ -191,17 +191,22 @@ struct vq_desc_extra {
> >>>>>>
> >>>>>> struct virtqueue {
> >>>>>> struct virtio_hw *hw; /**< virtio_hw structure pointer. */
> >>>>>> - struct vring vq_ring; /**< vring keeping desc, used and avail */
> >>>>>> - struct vring_packed ring_packed; /**< vring keeping descs */
> >>>>>> - bool used_wrap_counter;
> >>>>>> - uint16_t cached_flags; /**< cached flags for descs */
> >>>>>> - uint16_t event_flags_shadow;
> >>>>>> + union {
> >>>>>> + struct {
> >>>>>> + /**< vring keeping desc, used and avail */
> >>>>>> + struct vring ring;
> >>>>>> + } vq_split;
> >>>>>>
> >>>>>> - /**
> >>>>>> - * Last consumed descriptor in the used table,
> >>>>>> - * trails vq_ring.used->idx.
> >>>>>> - */
> >>>>>> - uint16_t vq_used_cons_idx;
> >>>>>> + struct {
> >>>>>> + /**< vring keeping descs and events */
> >>>>>> + struct vring_packed ring;
> >>>>>> + bool used_wrap_counter;
> >>>>>> + uint16_t cached_flags; /**< cached flags for descs */
> >>>>>> + uint16_t event_flags_shadow;
> >>>>>> + } vq_packed;
> >>>>>> + };
> >>>>>> +
> >>>>>> + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
> >>>>>> uint16_t vq_nentries; /**< vring desc numbers */
> >>>>>> uint16_t vq_free_cnt; /**< num of desc available */
> >>>>>> uint16_t vq_avail_idx; /**< sync until needed */
> >>>>>
> >>>>> Honest question: What do we really gain by putting it in a union? We
> >>>>> save a little memory. But we also make code less readable IMHO.
> >>>>
> >>>> I think it will make it clear that fields like used_wrap_counter
> >>>> are only available in packed ring which will make the code more
> >>>> readable.
> >>>>
> >>>>>
> >>>>> If we do this, can we at least shorten some variable names, like drop
> >>>>> the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
> >>>>> vq->packed* we don't lose any context).
> >>>>
> >>>> I prefer to have consistent prefix like most fields in this
> >>>> structure (although some fields don't really follow this).
> >>>
> >>> As Jens, I tend to agree that the vq_ prefix is quite redundant.
> >>> However, I think it is better to keep it in this patch for consistency.
> >>>
> >>> Maybe it can be removed in a separate patch later?
> >>
> >> I thought it might be convenient to change it now as we are touching
> >> all related code anyway. But I also don't want to block the patch
> >> because of
> >> this cosmetic thing. So let's defer it to a later patch set.
> >
> > OK, when I said later, I meant to remove the vq_ prefix for all fields, not
> > only vq_split & vq_packed.
> >
> > But yes, that's just cosmetic so let's keep it as is for now.
> >
>
> I agree the vq_ prefix is not needed and I think the code is more
> readable in general seeing the packed/split name when using the struct.
>
> Please also consider that cosmetic changes in multiple places likely
> mean backports will not apply cleanly to the stable branches anymore, so
> it does have a cost.
Yeah, agree.
> Although in this case, iirc packed rings are not in
> 18.11, so fixes might need dedicated backports from authors anyway, and
> there haven't been too many virtio backports to date.
>
> >>
> >> regards,
> >> Jens
>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-20 4:40 ` Tiwei Bie
2019-03-20 4:40 ` Tiwei Bie
@ 2019-03-20 17:50 ` Stephen Hemminger
2019-03-20 17:50 ` Stephen Hemminger
2019-03-21 14:18 ` Maxime Coquelin
1 sibling, 2 replies; 88+ messages in thread
From: Stephen Hemminger @ 2019-03-20 17:50 UTC (permalink / raw)
To: Tiwei Bie
Cc: Kevin Traynor, Maxime Coquelin, Jens Freimann, zhihong.wang, dev
On Wed, 20 Mar 2019 12:40:26 +0800
Tiwei Bie <tiwei.bie@intel.com> wrote:
> > I agree the vq_ prefix is not needed and I think the code is more
> > readable in general seeing the packed/split name when using the struct.
> >
> > Please also consider that cosmetic changes in multiple places likely
> > mean backports will not apply cleanly to the stable branches anymore, so
> > it does have a cost.
>
> Yeah, agree.
prefixing structure elements is an old school BSD thing back from when
compilers were not smart about namespaces for identifiers.
Agree that it is no longer needed.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-20 17:50 ` Stephen Hemminger
2019-03-20 17:50 ` Stephen Hemminger
@ 2019-03-21 14:18 ` Maxime Coquelin
2019-03-21 14:18 ` Maxime Coquelin
1 sibling, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-21 14:18 UTC (permalink / raw)
To: Stephen Hemminger, Tiwei Bie
Cc: Kevin Traynor, Jens Freimann, zhihong.wang, dev
On 3/20/19 6:50 PM, Stephen Hemminger wrote:
> On Wed, 20 Mar 2019 12:40:26 +0800
> Tiwei Bie <tiwei.bie@intel.com> wrote:
>
>>> I agree the vq_ prefix is not needed and I think the code is more
>>> readable in general seeing the packed/split name when using the struct.
>>>
>>> Please also consider that cosmetic changes in multiple places likely
>>> mean backports will not apply cleanly to the stable branches anymore, so
>>> it does have a cost.
>>
>> Yeah, agree.
>
>
> prefixing structure elements is an old school BSD thing back from when
> compilers were not smart about namespaces for identifiers.
Ha, good to know! I wasn't aware of that.
>
> Agree that it is no longer needed.
>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 13:28 ` Maxime Coquelin
2019-03-19 13:28 ` Maxime Coquelin
2019-03-19 13:47 ` Jens Freimann
@ 2019-03-20 4:35 ` Tiwei Bie
2019-03-20 4:35 ` Tiwei Bie
2 siblings, 1 reply; 88+ messages in thread
From: Tiwei Bie @ 2019-03-20 4:35 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: Jens Freimann, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:28:30PM +0100, Maxime Coquelin wrote:
> On 3/19/19 11:09 AM, Tiwei Bie wrote:
> > On Tue, Mar 19, 2019 at 10:44:32AM +0100, Jens Freimann wrote:
> > > On Tue, Mar 19, 2019 at 02:43:07PM +0800, Tiwei Bie wrote:
> > > > Put split ring and packed ring specific fields into separate
> > > > sub-structures, and also union them as they won't be available
> > > > at the same time.
> > > >
> > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > > ---
> > > > drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
> > > > drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
> > > > drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> > > > drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> > > > drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> > > > drivers/net/virtio/virtqueue.c | 6 +-
> > > > drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
> > > > 7 files changed, 117 insertions(+), 109 deletions(-)
> > > >
> > > [snip]
> > > ...
> > > > diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> > > > index 80c0c43c3..48b3912e6 100644
> > > > --- a/drivers/net/virtio/virtqueue.h
> > > > +++ b/drivers/net/virtio/virtqueue.h
> > > > @@ -191,17 +191,22 @@ struct vq_desc_extra {
> > > >
> > > > struct virtqueue {
> > > > struct virtio_hw *hw; /**< virtio_hw structure pointer. */
> > > > - struct vring vq_ring; /**< vring keeping desc, used and avail */
> > > > - struct vring_packed ring_packed; /**< vring keeping descs */
> > > > - bool used_wrap_counter;
> > > > - uint16_t cached_flags; /**< cached flags for descs */
> > > > - uint16_t event_flags_shadow;
> > > > + union {
> > > > + struct {
> > > > + /**< vring keeping desc, used and avail */
> > > > + struct vring ring;
> > > > + } vq_split;
> > > >
> > > > - /**
> > > > - * Last consumed descriptor in the used table,
> > > > - * trails vq_ring.used->idx.
> > > > - */
> > > > - uint16_t vq_used_cons_idx;
> > > > + struct {
> > > > + /**< vring keeping descs and events */
> > > > + struct vring_packed ring;
> > > > + bool used_wrap_counter;
> > > > + uint16_t cached_flags; /**< cached flags for descs */
> > > > + uint16_t event_flags_shadow;
> > > > + } vq_packed;
> > > > + };
> > > > +
> > > > + uint16_t vq_used_cons_idx; /**< last consumed descriptor */
> > > > uint16_t vq_nentries; /**< vring desc numbers */
> > > > uint16_t vq_free_cnt; /**< num of desc available */
> > > > uint16_t vq_avail_idx; /**< sync until needed */
> > >
> > > Honest question: What do we really gain by putting it in a union? We
> > > save a little memory. But we also make code less readable IMHO.
> >
> > I think it will make it clear that fields like used_wrap_counter
> > are only available in packed ring which will make the code more
> > readable.
> >
> > >
> > > If we do this, can we at least shorten some variable names, like drop
> > > the vq_ prefix? (It's used everywhere like vq->vq_packed*, so with
> > > vq->packed* we don't lose any context).
> >
> > I prefer to have consistent prefix like most fields in this
> > structure (although some fields don't really follow this).
>
> As Jens, I tend to agree that the vq_ prefix is quite redundant.
> However, I think it is better to keep it in this patch for consistency.
>
> Maybe it can be removed in a separate patch later?
Sounds good to me.
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:44 ` Jens Freimann
@ 2019-03-19 13:28 ` Maxime Coquelin
2019-03-19 13:28 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:28 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Put split ring and packed ring specific fields into separate
> sub-structures, and also union them as they won't be available
> at the same time.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 71 +++++++++---------
> drivers/net/virtio/virtio_rxtx.c | 66 ++++++++---------
> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> drivers/net/virtio/virtqueue.c | 6 +-
> drivers/net/virtio/virtqueue.h | 77 +++++++++++---------
> 7 files changed, 117 insertions(+), 109 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (5 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 05/10] net/virtio: refactor virtqueue structure Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure Tiwei Bie
` (4 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Drop redundant suffix (_packed and _event) from the fields in
packed ring structure.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 2 +-
drivers/net/virtio/virtio_ring.h | 15 ++++++-------
drivers/net/virtio/virtio_rxtx.c | 14 ++++++------
.../net/virtio/virtio_user/virtio_user_dev.c | 22 +++++++++----------
drivers/net/virtio/virtio_user_ethdev.c | 11 ++++------
drivers/net/virtio/virtqueue.c | 2 +-
drivers/net/virtio/virtqueue.h | 10 ++++-----
7 files changed, 36 insertions(+), 40 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index bc91ad493..f452a9a79 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -147,7 +147,7 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
{
struct virtqueue *vq = cvq->vq;
int head;
- struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
struct virtio_pmd_ctrl *result;
uint16_t flags;
int sum = 0;
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 5a37629fe..6abec4d87 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -78,10 +78,9 @@ struct vring_packed_desc_event {
struct vring_packed {
unsigned int num;
- struct vring_packed_desc *desc_packed;
- struct vring_packed_desc_event *driver_event;
- struct vring_packed_desc_event *device_event;
-
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
};
struct vring {
@@ -161,11 +160,11 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
unsigned int num)
{
vr->num = num;
- vr->desc_packed = (struct vring_packed_desc *)p;
- vr->driver_event = (struct vring_packed_desc_event *)(p +
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->driver = (struct vring_packed_desc_event *)(p +
vr->num * sizeof(struct vring_packed_desc));
- vr->device_event = (struct vring_packed_desc_event *)
- RTE_ALIGN_CEIL(((uintptr_t)vr->driver_event +
+ vr->device = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
sizeof(struct vring_packed_desc_event)), align);
}
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 02f8d9451..42d0f533c 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -118,7 +118,7 @@ virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
struct vring_packed_desc *desc;
uint16_t i;
- desc = vq->vq_packed.ring.desc_packed;
+ desc = vq->vq_packed.ring.desc;
for (i = 0; i < num; i++) {
used_idx = vq->vq_used_cons_idx;
@@ -229,7 +229,7 @@ virtio_xmit_cleanup_inorder_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id, curr_id, free_cnt = 0;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -261,7 +261,7 @@ virtio_xmit_cleanup_normal_packed(struct virtqueue *vq, int num)
{
uint16_t used_idx, id;
uint16_t size = vq->vq_nentries;
- struct vring_packed_desc *desc = vq->vq_packed.ring.desc_packed;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
struct vq_desc_extra *dxp;
used_idx = vq->vq_used_cons_idx;
@@ -430,7 +430,7 @@ static inline int
virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq,
struct rte_mbuf **cookie, uint16_t num)
{
- struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc_packed;
+ struct vring_packed_desc *start_dp = vq->vq_packed.ring.desc;
uint16_t flags = vq->vq_packed.cached_flags;
struct virtio_hw *hw = vq->hw;
struct vq_desc_extra *dxp;
@@ -635,7 +635,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
idx = vq->vq_avail_idx;
- dp = &vq->vq_packed.ring.desc_packed[idx];
+ dp = &vq->vq_packed.ring.desc[idx];
dxp = &vq->vq_descx[id];
dxp->ndescs = 1;
@@ -698,9 +698,9 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
head_idx = vq->vq_avail_idx;
idx = head_idx;
prev = head_idx;
- start_dp = vq->vq_packed.ring.desc_packed;
+ start_dp = vq->vq_packed.ring.desc;
- head_dp = &vq->vq_packed.ring.desc_packed[idx];
+ head_dp = &vq->vq_packed.ring.desc[idx];
head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
head_flags |= vq->vq_packed.cached_flags;
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index d1157378d..2dc8f2051 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -52,11 +52,11 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
addr.desc_user_addr =
- (uint64_t)(uintptr_t)pq_vring->desc_packed;
+ (uint64_t)(uintptr_t)pq_vring->desc;
addr.avail_user_addr =
- (uint64_t)(uintptr_t)pq_vring->driver_event;
+ (uint64_t)(uintptr_t)pq_vring->driver;
addr.used_user_addr =
- (uint64_t)(uintptr_t)pq_vring->device_event;
+ (uint64_t)(uintptr_t)pq_vring->device;
} else {
addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc;
addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail;
@@ -650,30 +650,30 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
n_descs++;
idx_status = idx_data;
- while (vring->desc_packed[idx_status].flags & VRING_DESC_F_NEXT) {
+ while (vring->desc[idx_status].flags & VRING_DESC_F_NEXT) {
idx_status++;
if (idx_status >= dev->queue_size)
idx_status -= dev->queue_size;
n_descs++;
}
- hdr = (void *)(uintptr_t)vring->desc_packed[idx_hdr].addr;
+ hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
if (hdr->class == VIRTIO_NET_CTRL_MQ &&
hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
uint16_t queues;
queues = *(uint16_t *)(uintptr_t)
- vring->desc_packed[idx_data].addr;
+ vring->desc[idx_data].addr;
status = virtio_user_handle_mq(dev, queues);
}
/* Update status */
*(virtio_net_ctrl_ack *)(uintptr_t)
- vring->desc_packed[idx_status].addr = status;
+ vring->desc[idx_status].addr = status;
/* Update used descriptor */
- vring->desc_packed[idx_hdr].id = vring->desc_packed[idx_status].id;
- vring->desc_packed[idx_hdr].len = sizeof(status);
+ vring->desc[idx_hdr].id = vring->desc[idx_status].id;
+ vring->desc[idx_hdr].len = sizeof(status);
return n_descs;
}
@@ -685,14 +685,14 @@ virtio_user_handle_cq_packed(struct virtio_user_dev *dev, uint16_t queue_idx)
struct vring_packed *vring = &dev->packed_vrings[queue_idx];
uint16_t n_descs;
- while (desc_is_avail(&vring->desc_packed[vq->used_idx],
+ while (desc_is_avail(&vring->desc[vq->used_idx],
vq->used_wrap_counter)) {
n_descs = virtio_user_handle_ctrl_msg_packed(dev, vring,
vq->used_idx);
rte_smp_wmb();
- vring->desc_packed[vq->used_idx].flags =
+ vring->desc[vq->used_idx].flags =
VRING_DESC_F_WRITE |
VRING_DESC_F_AVAIL(vq->used_wrap_counter) |
VRING_DESC_F_USED(vq->used_wrap_counter);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 6423e1f61..c5a76bd91 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -290,17 +290,14 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
sizeof(struct vring_packed_desc_event),
VIRTIO_PCI_VRING_ALIGN);
vring->num = vq->vq_nentries;
- vring->desc_packed =
- (void *)(uintptr_t)desc_addr;
- vring->driver_event =
- (void *)(uintptr_t)avail_addr;
- vring->device_event =
- (void *)(uintptr_t)used_addr;
+ vring->desc = (void *)(uintptr_t)desc_addr;
+ vring->driver = (void *)(uintptr_t)avail_addr;
+ vring->device = (void *)(uintptr_t)used_addr;
dev->packed_queues[queue_idx].avail_wrap_counter = true;
dev->packed_queues[queue_idx].used_wrap_counter = true;
for (i = 0; i < vring->num; i++)
- vring->desc_packed[i].flags = 0;
+ vring->desc[i].flags = 0;
}
static void
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 79491db32..5ff1e3587 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -61,7 +61,7 @@ virtqueue_rxvq_flush_packed(struct virtqueue *vq)
struct vq_desc_extra *dxp;
uint16_t i;
- struct vring_packed_desc *descs = vq->vq_packed.ring.desc_packed;
+ struct vring_packed_desc *descs = vq->vq_packed.ring.desc;
int cnt = 0;
i = vq->vq_used_cons_idx;
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 48b3912e6..78df6d390 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -302,10 +302,10 @@ vring_desc_init_packed(struct virtqueue *vq, int n)
{
int i;
for (i = 0; i < n - 1; i++) {
- vq->vq_packed.ring.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc[i].id = i;
vq->vq_descx[i].next = i + 1;
}
- vq->vq_packed.ring.desc_packed[i].id = i;
+ vq->vq_packed.ring.desc[i].id = i;
vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
}
@@ -328,7 +328,7 @@ virtqueue_disable_intr_packed(struct virtqueue *vq)
{
if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
- vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.ring.driver->desc_event_flags =
vq->vq_packed.event_flags_shadow;
}
}
@@ -353,7 +353,7 @@ virtqueue_enable_intr_packed(struct virtqueue *vq)
{
if (vq->vq_packed.event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
- vq->vq_packed.ring.driver_event->desc_event_flags =
+ vq->vq_packed.ring.driver->desc_event_flags =
vq->vq_packed.event_flags_shadow;
}
}
@@ -460,7 +460,7 @@ virtqueue_kick_prepare_packed(struct virtqueue *vq)
* Ensure updated data is visible to vhost before reading the flags.
*/
virtio_mb(vq->hw->weak_barriers);
- flags = vq->vq_packed.ring.device_event->desc_event_flags;
+ flags = vq->vq_packed.ring.device->desc_event_flags;
return flags != RING_EVENT_FLAGS_DISABLE;
}
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
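As a follow-up to the renaming above, below is a self-contained sketch of how the packed ring layout is computed with the new field names (desc, driver, device). The structure definitions and the alignment macro are local simplifications for illustration; the driver itself uses the real definitions and RTE_ALIGN_CEIL, and passes VIRTIO_PCI_VRING_ALIGN as the alignment.

#include <stdint.h>
#include <stdio.h>

/* Local stand-ins mirroring the shapes used in virtio_ring.h. */
struct vring_packed_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t id;
        uint16_t flags;
};

struct vring_packed_desc_event {
        uint16_t desc_event_off_wrap;
        uint16_t desc_event_flags;
};

struct vring_packed_sketch {
        unsigned int num;
        struct vring_packed_desc *desc;
        struct vring_packed_desc_event *driver;
        struct vring_packed_desc_event *device;
};

/* Align-up helper; assumes align is a power of two, like RTE_ALIGN_CEIL. */
#define ALIGN_CEIL(val, align) \
        (((uintptr_t)(val) + (align) - 1) & ~((uintptr_t)(align) - 1))

static void
vring_init_packed_sketch(struct vring_packed_sketch *vr, uint8_t *p,
                         unsigned long align, unsigned int num)
{
        vr->num = num;
        vr->desc = (struct vring_packed_desc *)p;
        vr->driver = (struct vring_packed_desc_event *)(p +
                num * sizeof(struct vring_packed_desc));
        vr->device = (struct vring_packed_desc_event *)
                ALIGN_CEIL((uintptr_t)vr->driver +
                           sizeof(struct vring_packed_desc_event), align);
}

int main(void)
{
        /* A small alignment keeps the demo buffer tiny; the real code
         * aligns the device area to VIRTIO_PCI_VRING_ALIGN. */
        static uint8_t ring_mem[8192];
        struct vring_packed_sketch vr;

        vring_init_packed_sketch(&vr, ring_mem, 64, 256);
        printf("desc at %p, driver event at %p, device event at %p\n",
               (void *)vr.desc, (void *)vr.driver, (void *)vr.device);
        return 0;
}

With the suffixes gone the code simply reads desc/driver/device, which lines up with the descriptor ring plus driver and device event suppression areas of the packed ring layout.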
* Re: [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 9:47 ` Jens Freimann
2019-03-19 9:47 ` Jens Freimann
2019-03-19 13:29 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 9:47 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:08PM +0800, Tiwei Bie wrote:
>Drop redundant suffix (_packed and _event) from the fields in
>packed ring structure.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c | 2 +-
> drivers/net/virtio/virtio_ring.h | 15 ++++++-------
> drivers/net/virtio/virtio_rxtx.c | 14 ++++++------
> .../net/virtio/virtio_user/virtio_user_dev.c | 22 +++++++++----------
> drivers/net/virtio/virtio_user_ethdev.c | 11 ++++------
> drivers/net/virtio/virtqueue.c | 2 +-
> drivers/net/virtio/virtqueue.h | 10 ++++-----
> 7 files changed, 36 insertions(+), 40 deletions(-)
>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:47 ` Jens Freimann
@ 2019-03-19 13:29 ` Maxime Coquelin
2019-03-19 13:29 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:29 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Drop redundant suffix (_packed and _event) from the fields in
> packed ring structure.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 2 +-
> drivers/net/virtio/virtio_ring.h | 15 ++++++-------
> drivers/net/virtio/virtio_rxtx.c | 14 ++++++------
> .../net/virtio/virtio_user/virtio_user_dev.c | 22 +++++++++----------
> drivers/net/virtio/virtio_user_ethdev.c | 11 ++++------
> drivers/net/virtio/virtqueue.c | 2 +-
> drivers/net/virtio/virtqueue.h | 10 ++++-----
> 7 files changed, 36 insertions(+), 40 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (6 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 06/10] net/virtio: drop redundant suffix in packed ring structure Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring Tiwei Bie
` (3 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Drop the unused field tx_indir_pq from virtio_tx_region
structure.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 10 +---------
drivers/net/virtio/virtqueue.h | 8 ++------
2 files changed, 3 insertions(+), 15 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index f452a9a79..8aa250997 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -603,17 +603,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
memset(txr, 0, vq_size * sizeof(*txr));
for (i = 0; i < vq_size; i++) {
struct vring_desc *start_dp = txr[i].tx_indir;
- struct vring_packed_desc *start_dp_packed =
- txr[i].tx_indir_pq;
/* first indirect descriptor is always the tx header */
- if (vtpci_packed_queue(hw)) {
- start_dp_packed->addr = txvq->virtio_net_hdr_mem
- + i * sizeof(*txr)
- + offsetof(struct virtio_tx_region,
- tx_hdr);
- start_dp_packed->len = hw->vtnet_hdr_size;
- } else {
+ if (!vtpci_packed_queue(hw)) {
vring_desc_init_split(start_dp,
RTE_DIM(txr[i].tx_indir));
start_dp->addr = txvq->virtio_net_hdr_mem
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 78df6d390..6dab7db8e 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -277,12 +277,8 @@ struct virtio_net_hdr_mrg_rxbuf {
#define VIRTIO_MAX_TX_INDIRECT 8
struct virtio_tx_region {
struct virtio_net_hdr_mrg_rxbuf tx_hdr;
- union {
- struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
- __attribute__((__aligned__(16)));
- struct vring_packed_desc tx_indir_pq[VIRTIO_MAX_TX_INDIRECT]
- __attribute__((__aligned__(16)));
- };
+ struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
+ __attribute__((__aligned__(16)));
};
static inline int
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
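A minimal, self-contained sketch of the address arithmetic that the remaining split-ring branch relies on. The struct below is a simplified mock of the post-patch virtio_tx_region layout, and hdr_mem stands in for the per-queue header memory base; none of these names are the PMD's own:

#include <stddef.h>
#include <stdint.h>

#define MOCK_MAX_TX_INDIRECT 8

struct mock_desc {                  /* stand-in for struct vring_desc */
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

struct mock_tx_region {             /* simplified post-patch Tx region */
        uint8_t tx_hdr[12];         /* placeholder for the net header */
        struct mock_desc tx_indir[MOCK_MAX_TX_INDIRECT]
                __attribute__((__aligned__(16)));
};

/* Address of the Tx header for slot i, given the base address of the
 * per-queue Tx region array. */
static inline uint64_t
tx_hdr_addr(uint64_t hdr_mem, unsigned int i)
{
        return hdr_mem + i * sizeof(struct mock_tx_region)
                + offsetof(struct mock_tx_region, tx_hdr);
}

The dropped tx_indir_pq member was only ever touched by the packed-ring pre-initialization removed above, so the union can collapse to the single tx_indir array.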
* Re: [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 9:51 ` Jens Freimann
2019-03-19 9:51 ` Jens Freimann
2019-03-19 13:33 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 9:51 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:09PM +0800, Tiwei Bie wrote:
>Drop the unused field tx_indir_pq from virtio_tx_region
>structure.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c | 10 +---------
> drivers/net/virtio/virtqueue.h | 8 ++------
> 2 files changed, 3 insertions(+), 15 deletions(-)
>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure
2019-03-19 6:43 ` [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:51 ` Jens Freimann
@ 2019-03-19 13:33 ` Maxime Coquelin
2019-03-19 13:33 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:33 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Drop the unused field tx_indir_pq from virtio_tx_region
> structure.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 10 +---------
> drivers/net/virtio/virtqueue.h | 8 ++------
> 2 files changed, 3 insertions(+), 15 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (7 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 07/10] net/virtio: drop unused field in Tx region structure Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq " Tiwei Bie
` (2 subsequent siblings)
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Add a helper for disabling interrupts in split ring to make the
code consistent with the corresponding code in packed ring.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtqueue.h | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 6dab7db8e..5cea7cb4a 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -317,7 +317,7 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
}
/**
- * Tell the backend not to interrupt us.
+ * Tell the backend not to interrupt us. Implementation for packed virtqueues.
*/
static inline void
virtqueue_disable_intr_packed(struct virtqueue *vq)
@@ -329,6 +329,15 @@ virtqueue_disable_intr_packed(struct virtqueue *vq)
}
}
+/**
+ * Tell the backend not to interrupt us. Implementation for split virtqueues.
+ */
+static inline void
+virtqueue_disable_intr_split(struct virtqueue *vq)
+{
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+}
+
/**
* Tell the backend not to interrupt us.
*/
@@ -338,7 +347,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
if (vtpci_packed_queue(vq->hw))
virtqueue_disable_intr_packed(vq);
else
- vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ virtqueue_disable_intr_split(vq);
}
/**
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
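For readers less familiar with the split ring, the helper amounts to setting the VRING_AVAIL_F_NO_INTERRUPT bit in the avail ring's flags word, which the virtio spec defines as a hint that the device may suppress used-buffer notifications. A self-contained sketch with mock types (not the PMD's structures):

#include <stdint.h>
#include <stdio.h>

#define VRING_AVAIL_F_NO_INTERRUPT 1    /* value per the virtio spec */

struct mock_vring_avail {               /* simplified avail ring header */
        uint16_t flags;
        uint16_t idx;
};

/* What a split-ring "disable interrupts" helper boils down to:
 * set the no-interrupt hint so the backend may skip notifications. */
static inline void
disable_intr_split(struct mock_vring_avail *avail)
{
        avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}

int
main(void)
{
        struct mock_vring_avail avail = { 0, 0 };

        disable_intr_split(&avail);
        printf("avail.flags = 0x%x\n", avail.flags);
        return 0;
}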
* Re: [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 9:53 ` Jens Freimann
2019-03-19 9:53 ` Jens Freimann
2019-03-19 13:34 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 9:53 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:10PM +0800, Tiwei Bie wrote:
>Add a helper for disabling interrupts in split ring to make the
>code consistent with the corresponding code in packed ring.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtqueue.h | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:53 ` Jens Freimann
@ 2019-03-19 13:34 ` Maxime Coquelin
2019-03-19 13:34 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:34 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Add a helper for disabling interrupts in split ring to make the
> code consistent with the corresponding code in packed ring.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtqueue.h | 13 +++++++++++--
> 1 file changed, 11 insertions(+), 2 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq helper for split ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (8 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 08/10] net/virtio: add interrupt helper for split ring Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path Tiwei Bie
2019-03-20 7:35 ` [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Maxime Coquelin
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
Add a helper for sending commands in split ring to make the
code consistent with the corresponding code in packed ring.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_ethdev.c | 76 +++++++++++++++++-------------
1 file changed, 43 insertions(+), 33 deletions(-)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 8aa250997..85b223451 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -237,44 +237,18 @@ virtio_send_command_packed(struct virtnet_ctl *cvq,
return result;
}
-static int
-virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
- int *dlen, int pkt_num)
+static struct virtio_pmd_ctrl *
+virtio_send_command_split(struct virtnet_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
{
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq = cvq->vq;
uint32_t head, i;
int k, sum = 0;
- virtio_net_ctrl_ack status = ~0;
- struct virtio_pmd_ctrl *result;
- struct virtqueue *vq;
- ctrl->status = status;
-
- if (!cvq || !cvq->vq) {
- PMD_INIT_LOG(ERR, "Control queue is not supported.");
- return -1;
- }
-
- rte_spinlock_lock(&cvq->lock);
- vq = cvq->vq;
head = vq->vq_desc_head_idx;
- PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
- "vq->hw->cvq = %p vq = %p",
- vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
-
- if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) {
- rte_spinlock_unlock(&cvq->lock);
- return -1;
- }
-
- memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
- sizeof(struct virtio_pmd_ctrl));
-
- if (vtpci_packed_queue(vq->hw)) {
- result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
- goto out_unlock;
- }
-
/*
* Format is enforced in qemu code:
* One TX packet for header;
@@ -346,8 +320,44 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
vq->vq_free_cnt, vq->vq_desc_head_idx);
result = cvq->virtio_net_hdr_mz->addr;
+ return result;
+}
+
+static int
+virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num)
+{
+ virtio_net_ctrl_ack status = ~0;
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq;
+
+ ctrl->status = status;
+
+ if (!cvq || !cvq->vq) {
+ PMD_INIT_LOG(ERR, "Control queue is not supported.");
+ return -1;
+ }
+
+ rte_spinlock_lock(&cvq->lock);
+ vq = cvq->vq;
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
+ "vq->hw->cvq = %p vq = %p",
+ vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
+
+ if (vq->vq_free_cnt < pkt_num + 2 || pkt_num < 1) {
+ rte_spinlock_unlock(&cvq->lock);
+ return -1;
+ }
+
+ memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
+ sizeof(struct virtio_pmd_ctrl));
+
+ if (vtpci_packed_queue(vq->hw))
+ result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
+ else
+ result = virtio_send_command_split(cvq, ctrl, dlen, pkt_num);
-out_unlock:
rte_spinlock_unlock(&cvq->lock);
return result->status;
}
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
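The refactoring follows a plain lock-and-dispatch shape: the common entry point validates and locks the control queue, then calls the ring-specific helper. A hedged sketch of that shape with mock names (none of them are the PMD's API):

#include <pthread.h>
#include <stdio.h>

struct mock_cvq {                /* stand-in for the control queue */
        pthread_mutex_t lock;
        int packed;              /* ring type chosen at init time */
};

static int send_split(struct mock_cvq *cvq)  { (void)cvq; return 0; }
static int send_packed(struct mock_cvq *cvq) { (void)cvq; return 0; }

/* Common entry point: validate, lock, dispatch on ring type, unlock. */
static int
send_command(struct mock_cvq *cvq)
{
        int status;

        if (cvq == NULL)
                return -1;

        pthread_mutex_lock(&cvq->lock);
        status = cvq->packed ? send_packed(cvq) : send_split(cvq);
        pthread_mutex_unlock(&cvq->lock);

        return status;
}

int
main(void)
{
        struct mock_cvq cvq = { PTHREAD_MUTEX_INITIALIZER, 0 };

        printf("status = %d\n", send_command(&cvq));
        return 0;
}

Keeping both helpers behind one locked entry point lets the split and packed implementations evolve independently without duplicating the argument checks and locking.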
* Re: [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq helper for split ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq " Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 9:54 ` Jens Freimann
2019-03-19 9:54 ` Jens Freimann
2019-03-19 13:54 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 9:54 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:11PM +0800, Tiwei Bie wrote:
>Add a helper for sending commands in split ring to make the
>code consistent with the corresponding code in packed ring.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c | 76 +++++++++++++++++-------------
> 1 file changed, 43 insertions(+), 33 deletions(-)
>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq helper for split ring
2019-03-19 6:43 ` [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq " Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 9:54 ` Jens Freimann
@ 2019-03-19 13:54 ` Maxime Coquelin
2019-03-19 13:54 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 13:54 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Add a helper for sending commands in split ring to make the
> code consistent with the corresponding code in packed ring.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_ethdev.c | 76 +++++++++++++++++-------------
> 1 file changed, 43 insertions(+), 33 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (9 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 09/10] net/virtio: add ctrl vq " Tiwei Bie
@ 2019-03-19 6:43 ` Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
` (2 more replies)
2019-03-20 7:35 ` [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Maxime Coquelin
11 siblings, 3 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 6:43 UTC (permalink / raw)
To: maxime.coquelin, zhihong.wang, dev
This patch improves descriptor refill by using the same
batching strategy as in the in-order and mergeable paths.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++--------------
1 file changed, 34 insertions(+), 26 deletions(-)
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 42d0f533c..5f6796bdb 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -1211,7 +1211,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
struct virtnet_rx *rxvq = rx_queue;
struct virtqueue *vq = rxvq->vq;
struct virtio_hw *hw = vq->hw;
- struct rte_mbuf *rxm, *new_mbuf;
+ struct rte_mbuf *rxm;
uint16_t nb_used, num, nb_rx;
uint32_t len[VIRTIO_MBUF_BURST_SZ];
struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
@@ -1281,20 +1281,24 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxvq->stats.packets += nb_rx;
/* Allocate new mbuf for the used descriptor */
- while (likely(!virtqueue_full(vq))) {
- new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
- if (unlikely(new_mbuf == NULL)) {
- struct rte_eth_dev *dev
- = &rte_eth_devices[rxvq->port_id];
- dev->data->rx_mbuf_alloc_failed++;
- break;
+ if (likely(!virtqueue_full(vq))) {
+ uint16_t free_cnt = vq->vq_free_cnt;
+ struct rte_mbuf *new_pkts[free_cnt];
+
+ if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts,
+ free_cnt) == 0)) {
+ error = virtqueue_enqueue_recv_refill(vq, new_pkts,
+ free_cnt);
+ if (unlikely(error)) {
+ for (i = 0; i < free_cnt; i++)
+ rte_pktmbuf_free(new_pkts[i]);
+ }
+ nb_enqueued += free_cnt;
+ } else {
+ struct rte_eth_dev *dev =
+ &rte_eth_devices[rxvq->port_id];
+ dev->data->rx_mbuf_alloc_failed += free_cnt;
}
- error = virtqueue_enqueue_recv_refill(vq, &new_mbuf, 1);
- if (unlikely(error)) {
- rte_pktmbuf_free(new_mbuf);
- break;
- }
- nb_enqueued++;
}
if (likely(nb_enqueued)) {
@@ -1316,7 +1320,7 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
struct virtnet_rx *rxvq = rx_queue;
struct virtqueue *vq = rxvq->vq;
struct virtio_hw *hw = vq->hw;
- struct rte_mbuf *rxm, *new_mbuf;
+ struct rte_mbuf *rxm;
uint16_t num, nb_rx;
uint32_t len[VIRTIO_MBUF_BURST_SZ];
struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
@@ -1380,20 +1384,24 @@ virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
rxvq->stats.packets += nb_rx;
/* Allocate new mbuf for the used descriptor */
- while (likely(!virtqueue_full(vq))) {
- new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
- if (unlikely(new_mbuf == NULL)) {
+ if (likely(!virtqueue_full(vq))) {
+ uint16_t free_cnt = vq->vq_free_cnt;
+ struct rte_mbuf *new_pkts[free_cnt];
+
+ if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts,
+ free_cnt) == 0)) {
+ error = virtqueue_enqueue_recv_refill_packed(vq,
+ new_pkts, free_cnt);
+ if (unlikely(error)) {
+ for (i = 0; i < free_cnt; i++)
+ rte_pktmbuf_free(new_pkts[i]);
+ }
+ nb_enqueued += free_cnt;
+ } else {
struct rte_eth_dev *dev =
&rte_eth_devices[rxvq->port_id];
- dev->data->rx_mbuf_alloc_failed++;
- break;
+ dev->data->rx_mbuf_alloc_failed += free_cnt;
}
- error = virtqueue_enqueue_recv_refill_packed(vq, &new_mbuf, 1);
- if (unlikely(error)) {
- rte_pktmbuf_free(new_mbuf);
- break;
- }
- nb_enqueued++;
}
if (likely(nb_enqueued)) {
--
2.17.1
^ permalink raw reply [flat|nested] 88+ messages in thread
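The core of the change is the refill strategy: rather than allocating and enqueueing one mbuf per loop iteration, the ring's whole free count is allocated in one bulk call and enqueued as a batch, and the failure counter is bumped by the whole batch size when the bulk allocation cannot be satisfied. A self-contained sketch of that strategy with a mock allocator (not DPDK's API; the all-or-nothing contract is meant to mirror rte_pktmbuf_alloc_bulk()):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Mock bulk allocator: fills bufs[0..n-1] and returns 0, or fails as a
 * whole and returns -1. */
static int
mock_alloc_bulk(void **bufs, unsigned int n)
{
        for (unsigned int i = 0; i < n; i++) {
                bufs[i] = malloc(64);
                if (bufs[i] == NULL) {
                        while (i-- > 0)
                                free(bufs[i]);
                        return -1;
                }
        }
        return 0;
}

/* Batched refill: take everything the ring can hold in one shot. */
static void
refill(unsigned int free_cnt, unsigned int *enqueued, uint64_t *alloc_failed)
{
        void *bufs[free_cnt];

        if (mock_alloc_bulk(bufs, free_cnt) == 0) {
                /* The driver would chain these into the ring here;
                 * the sketch just counts and releases them. */
                *enqueued += free_cnt;
                for (unsigned int i = 0; i < free_cnt; i++)
                        free(bufs[i]);
        } else {
                *alloc_failed += free_cnt;
        }
}

int
main(void)
{
        unsigned int enqueued = 0;
        uint64_t alloc_failed = 0;

        refill(32, &enqueued, &alloc_failed);
        printf("enqueued=%u alloc_failed=%llu\n",
               enqueued, (unsigned long long)alloc_failed);
        return 0;
}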
* Re: [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path
2019-03-19 6:43 ` [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
@ 2019-03-19 10:04 ` Jens Freimann
2019-03-19 10:04 ` Jens Freimann
2019-03-19 10:28 ` Tiwei Bie
2019-03-19 14:15 ` Maxime Coquelin
2 siblings, 2 replies; 88+ messages in thread
From: Jens Freimann @ 2019-03-19 10:04 UTC (permalink / raw)
To: Tiwei Bie; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 02:43:12PM +0800, Tiwei Bie wrote:
>This patch improves descriptors refill by using the same
>batching strategy as done in in-order and mergeable path.
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++--------------
> 1 file changed, 34 insertions(+), 26 deletions(-)
>
Looks good. How much do we gain by this?
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path
2019-03-19 10:04 ` Jens Freimann
2019-03-19 10:04 ` Jens Freimann
@ 2019-03-19 10:28 ` Tiwei Bie
2019-03-19 10:28 ` Tiwei Bie
2019-03-19 11:08 ` Maxime Coquelin
1 sibling, 2 replies; 88+ messages in thread
From: Tiwei Bie @ 2019-03-19 10:28 UTC (permalink / raw)
To: Jens Freimann; +Cc: maxime.coquelin, zhihong.wang, dev
On Tue, Mar 19, 2019 at 11:04:24AM +0100, Jens Freimann wrote:
> On Tue, Mar 19, 2019 at 02:43:12PM +0800, Tiwei Bie wrote:
> > This patch improves descriptors refill by using the same
> > batching strategy as done in in-order and mergeable path.
> >
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> > drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++--------------
> > 1 file changed, 34 insertions(+), 26 deletions(-)
> >
> Looks good. How much do we gain by this?
The gain is very visible on my side. E.g. for packed ring,
the PPS changed from ~10786973 to ~11636990 in a macfwd test
between two testpmds.
Thanks,
Tiwei
>
> Reviewed-by: Jens Freimann <jfreimann@redhat.com>
>
>
^ permalink raw reply [flat|nested] 88+ messages in thread
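For context, the quoted figures work out to roughly a 7.9% packet-rate gain:

    (11636990 - 10786973) / 10786973 = 850017 / 10786973 ≈ 0.079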
* Re: [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path
2019-03-19 10:28 ` Tiwei Bie
2019-03-19 10:28 ` Tiwei Bie
@ 2019-03-19 11:08 ` Maxime Coquelin
2019-03-19 11:08 ` Maxime Coquelin
1 sibling, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 11:08 UTC (permalink / raw)
To: Tiwei Bie, Jens Freimann; +Cc: zhihong.wang, dev
On 3/19/19 11:28 AM, Tiwei Bie wrote:
> On Tue, Mar 19, 2019 at 11:04:24AM +0100, Jens Freimann wrote:
>> On Tue, Mar 19, 2019 at 02:43:12PM +0800, Tiwei Bie wrote:
>>> This patch improves descriptors refill by using the same
>>> batching strategy as done in in-order and mergeable path.
>>>
>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>> ---
>>> drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++--------------
>>> 1 file changed, 34 insertions(+), 26 deletions(-)
>>>
>> Looks good. How much do we gain by this?
>
> The gain is very visible on my side. E.g. for packed ring,
> the PPS changed from ~10786973 to ~11636990 in a macfwd test
> between two testpmds.
Nice!
> Thanks,
> Tiwei
>
>>
>> Reviewed-by: Jens Freimann <jfreimann@redhat.com>
>>
>>
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path
2019-03-19 6:43 ` [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path Tiwei Bie
2019-03-19 6:43 ` Tiwei Bie
2019-03-19 10:04 ` Jens Freimann
@ 2019-03-19 14:15 ` Maxime Coquelin
2019-03-19 14:15 ` Maxime Coquelin
2 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-19 14:15 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> This patch improves descriptors refill by using the same
> batching strategy as done in in-order and mergeable path.
>
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
> drivers/net/virtio/virtio_rxtx.c | 60 ++++++++++++++++++--------------
> 1 file changed, 34 insertions(+), 26 deletions(-)
>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread
* Re: [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring
2019-03-19 6:43 [dpdk-dev] [PATCH 00/10] net/virtio: cleanups and fixes for packed/split ring Tiwei Bie
` (10 preceding siblings ...)
2019-03-19 6:43 ` [dpdk-dev] [PATCH 10/10] net/virtio: improve batching in standard Rx path Tiwei Bie
@ 2019-03-20 7:35 ` Maxime Coquelin
2019-03-20 7:35 ` Maxime Coquelin
11 siblings, 1 reply; 88+ messages in thread
From: Maxime Coquelin @ 2019-03-20 7:35 UTC (permalink / raw)
To: Tiwei Bie, zhihong.wang, dev
On 3/19/19 7:43 AM, Tiwei Bie wrote:
> Tiwei Bie (10):
> net/virtio: fix typo in packed ring init
> net/virtio: fix interrupt helper for packed ring
> net/virtio: add missing barrier in interrupt enable
> net/virtio: optimize flags update for packed ring
> net/virtio: refactor virtqueue structure
> net/virtio: drop redundant suffix in packed ring structure
> net/virtio: drop unused field in Tx region structure
> net/virtio: add interrupt helper for split ring
> net/virtio: add ctrl vq helper for split ring
> net/virtio: improve batching in standard Rx path
>
> drivers/net/virtio/virtio_ethdev.c | 172 +++++++++---------
> drivers/net/virtio/virtio_ring.h | 15 +-
> drivers/net/virtio/virtio_rxtx.c | 139 +++++++-------
> drivers/net/virtio/virtio_rxtx_simple.h | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_neon.c | 2 +-
> drivers/net/virtio/virtio_rxtx_simple_sse.c | 2 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 22 +--
> drivers/net/virtio/virtio_user_ethdev.c | 11 +-
> drivers/net/virtio/virtqueue.c | 6 +-
> drivers/net/virtio/virtqueue.h | 100 +++++-----
> 10 files changed, 241 insertions(+), 230 deletions(-)
>
Applied to dpdk-next-virtio/master branch.
Thanks,
Maxime
^ permalink raw reply [flat|nested] 88+ messages in thread