* [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
@ 2021-09-29 20:17 Maxime Coquelin
2021-09-29 21:15 ` Olivier Matz
2021-09-30 7:26 ` David Marchand
0 siblings, 2 replies; 5+ messages in thread
From: Maxime Coquelin @ 2021-09-29 20:17 UTC (permalink / raw)
To: olivier.matz, david.marchand, chenbo.xia, dev; +Cc: Maxime Coquelin, stable
This patch removes the simplification in Virtio descriptors
handling, where their buffer addresses are IOVAs for Virtio
PCI devices, and VA-only for Virtio-user devices, which
added a requirement on Virtio-user that it only supported
IOVA as VA.
This change introduced a regression for applications using
Virtio-user and other physical PMDs that require IOVA as PA
because they don't use an IOMMU.
This patch reverts to the old behaviour, but needed to be
reworked because of the refactoring that happened in v21.02.
Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
Cc: stable@dpdk.org
Reported-by: Olivier Matz <olivier.matz@6wind.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio.h | 1 +
drivers/net/virtio/virtio_ethdev.c | 25 +++++++++++++----
drivers/net/virtio/virtio_rxtx.c | 28 ++++++++++++--------
drivers/net/virtio/virtio_rxtx_packed.h | 2 +-
drivers/net/virtio/virtio_rxtx_packed_avx.h | 8 +++---
drivers/net/virtio/virtio_rxtx_packed_neon.h | 8 +++---
drivers/net/virtio/virtio_rxtx_simple.h | 3 ++-
drivers/net/virtio/virtio_user_ethdev.c | 7 ++++-
drivers/net/virtio/virtqueue.h | 22 ++++++++++++++-
9 files changed, 76 insertions(+), 28 deletions(-)
diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
index b4f21dc0c7..7118e5d24c 100644
--- a/drivers/net/virtio/virtio.h
+++ b/drivers/net/virtio/virtio.h
@@ -221,6 +221,7 @@ struct virtio_hw {
uint8_t *rss_key;
uint64_t req_guest_features;
struct virtnet_ctl *cvq;
+ bool use_va;
};
struct virtio_ops {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b4bd1f07c1..8055be88a2 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -567,12 +567,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
memset(mz->addr, 0, mz->len);
- vq->vq_ring_mem = mz->iova;
+ if (hw->use_va)
+ vq->vq_ring_mem = (uintptr_t)mz->addr;
+ else
+ vq->vq_ring_mem = mz->iova;
+
vq->vq_ring_virt_mem = mz->addr;
PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64,
- (uint64_t)mz->iova);
+ (uint64_t)vq->vq_ring_mem);
PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
- (uint64_t)(uintptr_t)mz->addr);
+ (uint64_t)(uintptr_t)vq->vq_ring_virt_mem);
virtio_init_vring(vq);
@@ -622,17 +626,28 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
txvq->port_id = dev->data->port_id;
txvq->mz = mz;
txvq->virtio_net_hdr_mz = hdr_mz;
- txvq->virtio_net_hdr_mem = hdr_mz->iova;
+ if (hw->use_va)
+ txvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
+ else
+ txvq->virtio_net_hdr_mem = hdr_mz->iova;
} else if (queue_type == VTNET_CQ) {
cvq = &vq->cq;
cvq->mz = mz;
cvq->virtio_net_hdr_mz = hdr_mz;
- cvq->virtio_net_hdr_mem = hdr_mz->iova;
+ if (hw->use_va)
+ cvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
+ else
+ cvq->virtio_net_hdr_mem = hdr_mz->iova;
memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
hw->cvq = cvq;
}
+ if (hw->use_va)
+ vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
+ else
+ vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
+
if (queue_type == VTNET_TQ) {
struct virtio_tx_region *txr;
unsigned int i;
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b9d7c8d18f..0f3c286438 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -271,10 +271,13 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
dxp->cookie = (void *)cookies[i];
dxp->ndescs = 1;
- start_dp[idx].addr = cookies[i]->buf_iova +
- RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
- start_dp[idx].len = cookies[i]->buf_len -
- RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+ start_dp[idx].addr =
+ VIRTIO_MBUF_ADDR(cookies[i], vq) +
+ RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+ start_dp[idx].len =
+ cookies[i]->buf_len -
+ RTE_PKTMBUF_HEADROOM +
+ hw->vtnet_hdr_size;
start_dp[idx].flags = VRING_DESC_F_WRITE;
vq_update_avail_ring(vq, idx);
@@ -310,10 +313,12 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
dxp->cookie = (void *)cookie[i];
dxp->ndescs = 1;
- start_dp[idx].addr = cookie[i]->buf_iova +
+ start_dp[idx].addr =
+ VIRTIO_MBUF_ADDR(cookie[i], vq) +
RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
- start_dp[idx].len = cookie[i]->buf_len -
- RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
+ start_dp[idx].len =
+ cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
+ hw->vtnet_hdr_size;
start_dp[idx].flags = VRING_DESC_F_WRITE;
vq->vq_desc_head_idx = start_dp[idx].next;
vq_update_avail_ring(vq, idx);
@@ -336,7 +341,7 @@ virtqueue_refill_single_packed(struct virtqueue *vq,
uint16_t flags = vq->vq_packed.cached_flags;
struct virtio_hw *hw = vq->hw;
- dp->addr = cookie->buf_iova +
+ dp->addr = VIRTIO_MBUF_ADDR(cookie, vq) +
RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
dp->len = cookie->buf_len -
RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
@@ -482,7 +487,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
else
virtqueue_xmit_offload(hdr, cookies[i]);
- start_dp[idx].addr = rte_mbuf_data_iova(cookies[i]) - head_size;
+ start_dp[idx].addr =
+ VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq) - head_size;
start_dp[idx].len = cookies[i]->data_len + head_size;
start_dp[idx].flags = 0;
@@ -529,7 +535,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
else
virtqueue_xmit_offload(hdr, cookie);
- dp->addr = rte_mbuf_data_iova(cookie) - head_size;
+ dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq) - head_size;
dp->len = cookie->data_len + head_size;
dp->id = id;
@@ -617,7 +623,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
virtqueue_xmit_offload(hdr, cookie);
do {
- start_dp[idx].addr = rte_mbuf_data_iova(cookie);
+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
start_dp[idx].len = cookie->data_len;
if (prepend_header) {
start_dp[idx].addr -= head_size;
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 1d1db60da8..77e5cb37e7 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -288,7 +288,7 @@ virtio_recv_refill_packed_vec(struct virtnet_rx *rxvq,
dxp = &vq->vq_descx[idx + i];
dxp->cookie = (void *)cookie[total_num + i];
- addr = cookie[total_num + i]->buf_iova +
+ addr = VIRTIO_MBUF_ADDR(cookie[total_num + i], vq) +
RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
start_dp[idx + i].addr = addr;
start_dp[idx + i].len = cookie[total_num + i]->buf_len
diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h
index c819d2e4f2..8cb71f3fe6 100644
--- a/drivers/net/virtio/virtio_rxtx_packed_avx.h
+++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h
@@ -71,13 +71,13 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
}
__m512i descs_base = _mm512_set_epi64(tx_pkts[3]->data_len,
- tx_pkts[3]->buf_iova,
+ VIRTIO_MBUF_ADDR(tx_pkts[3], vq),
tx_pkts[2]->data_len,
- tx_pkts[2]->buf_iova,
+ VIRTIO_MBUF_ADDR(tx_pkts[2], vq),
tx_pkts[1]->data_len,
- tx_pkts[1]->buf_iova,
+ VIRTIO_MBUF_ADDR(tx_pkts[1], vq),
tx_pkts[0]->data_len,
- tx_pkts[0]->buf_iova);
+ VIRTIO_MBUF_ADDR(tx_pkts[0], vq));
/* id offset and data offset */
__m512i data_offsets = _mm512_set_epi64((uint64_t)3 << ID_BITS_OFFSET,
diff --git a/drivers/net/virtio/virtio_rxtx_packed_neon.h b/drivers/net/virtio/virtio_rxtx_packed_neon.h
index f19e618635..c222ebf00c 100644
--- a/drivers/net/virtio/virtio_rxtx_packed_neon.h
+++ b/drivers/net/virtio/virtio_rxtx_packed_neon.h
@@ -97,12 +97,12 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
uint64x2x2_t desc[PACKED_BATCH_SIZE / 2];
uint64x2_t base_addr0 = {
- tx_pkts[0]->buf_iova + tx_pkts[0]->data_off,
- tx_pkts[1]->buf_iova + tx_pkts[1]->data_off
+ VIRTIO_MBUF_ADDR(tx_pkts[0], vq) + tx_pkts[0]->data_off,
+ VIRTIO_MBUF_ADDR(tx_pkts[1], vq) + tx_pkts[1]->data_off
};
uint64x2_t base_addr1 = {
- tx_pkts[2]->buf_iova + tx_pkts[2]->data_off,
- tx_pkts[3]->buf_iova + tx_pkts[3]->data_off
+ VIRTIO_MBUF_ADDR(tx_pkts[2], vq) + tx_pkts[2]->data_off,
+ VIRTIO_MBUF_ADDR(tx_pkts[3], vq) + tx_pkts[3]->data_off
};
desc[0].val[0] = base_addr0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index f258771fcf..497c9a0e32 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -43,7 +43,8 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
p = (uintptr_t)&sw_ring[i]->rearm_data;
*(uint64_t *)p = rxvq->mbuf_initializer;
- start_dp[i].addr = sw_ring[i]->buf_iova +
+ start_dp[i].addr =
+ VIRTIO_MBUF_ADDR(sw_ring[i], vq) +
RTE_PKTMBUF_HEADROOM - vq->hw->vtnet_hdr_size;
start_dp[i].len = sw_ring[i]->buf_len -
RTE_PKTMBUF_HEADROOM + vq->hw->vtnet_hdr_size;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 688c1104d5..0271098f0d 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -657,6 +657,12 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
goto end;
}
+ /*
+ * Virtio-user requires using virtual addresses for the descriptors
+ * buffers, whatever other devices require
+ */
+ hw->use_va = true;
+
/* previously called by pci probing for physical dev */
if (eth_virtio_dev_init(eth_dev) < 0) {
PMD_INIT_LOG(ERR, "eth_virtio_dev_init fails");
@@ -769,7 +775,6 @@ static struct rte_vdev_driver virtio_user_driver = {
.remove = virtio_user_pmd_remove,
.dma_map = virtio_user_pmd_dma_map,
.dma_unmap = virtio_user_pmd_dma_unmap,
- .drv_flags = RTE_VDEV_DRV_NEED_IOVA_AS_VA,
};
RTE_PMD_REGISTER_VDEV(net_virtio_user, virtio_user_driver);
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index bee03182d6..8780b7f27a 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -113,6 +113,25 @@ virtqueue_store_flags_packed(struct vring_packed_desc *dp,
#define VIRTQUEUE_MAX_NAME_SZ 32
+/**
+ * Return the IOVA (or virtual address in case of virtio-user) of mbuf
+ * data buffer.
+ *
+ * The address is firstly casted to the word size (sizeof(uintptr_t))
+ * before casting it to uint64_t. This is to make it work with different
+ * combination of word size (64 bit and 32 bit) and virtio device
+ * (virtio-pci and virtio-user).
+ */
+#define VIRTIO_MBUF_ADDR(mb, vq) \
+ ((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + (vq)->mbuf_addr_offset)))
+
+/**
+ * Return the physical address (or virtual address in case of
+ * virtio-user) of mbuf data buffer, taking care of mbuf data offset
+ */
+#define VIRTIO_MBUF_DATA_DMA_ADDR(mb, vq) \
+ (VIRTIO_MBUF_ADDR(mb, vq) + (mb)->data_off)
+
#define VTNET_SQ_RQ_QUEUE_IDX 0
#define VTNET_SQ_TQ_QUEUE_IDX 1
#define VTNET_SQ_CQ_QUEUE_IDX 2
@@ -273,6 +292,7 @@ struct virtqueue {
void *vq_ring_virt_mem; /**< linear address of vring*/
unsigned int vq_ring_size;
+ uint16_t mbuf_addr_offset;
union {
struct virtnet_rx rxq;
@@ -760,7 +780,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
do {
uint16_t flags;
- start_dp[idx].addr = rte_mbuf_data_iova(cookie);
+ start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
start_dp[idx].len = cookie->data_len;
if (prepend_header) {
start_dp[idx].addr -= head_size;
--
2.31.1
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
2021-09-29 20:17 [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user Maxime Coquelin
@ 2021-09-29 21:15 ` Olivier Matz
2021-09-30 8:25 ` Maxime Coquelin
2021-09-30 7:26 ` David Marchand
1 sibling, 1 reply; 5+ messages in thread
From: Olivier Matz @ 2021-09-29 21:15 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: david.marchand, chenbo.xia, dev, stable
Hi Maxime,
On Wed, Sep 29, 2021 at 10:17:39PM +0200, Maxime Coquelin wrote:
> This patch removes the simplification in Virtio descriptors
> handling, where their buffer addresses are IOVAs for Virtio
> PCI devices, and VA-only for Virtio-user devices, which
> added a requirement on Virtio-user that it only supported
> IOVA as VA.
>
> This change introduced a regression for applications using
> Virtio-user and other physical PMDs that require IOVA as PA
> because they don't use an IOMMU.
>
> This patch reverts to the old behaviour, but needed to be
> reworked because of the refactoring that happened in v21.02.
>
> Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
> Cc: stable@dpdk.org
>
> Reported-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Olivier Matz <olivier.matz@6wind.com>
Many thanks for your quick solution on this!
* Re: [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
2021-09-29 20:17 [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user Maxime Coquelin
2021-09-29 21:15 ` Olivier Matz
@ 2021-09-30 7:26 ` David Marchand
2021-09-30 7:43 ` Maxime Coquelin
1 sibling, 1 reply; 5+ messages in thread
From: David Marchand @ 2021-09-30 7:26 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: Olivier Matz, Xia, Chenbo, dev, dpdk stable
Hello Maxime,
On Wed, Sep 29, 2021 at 10:18 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch removes the simplification in Virtio descriptors
> handling, where their buffer addresses are IOVAs for Virtio
> PCI devices, and VA-only for Virtio-user devices, which
> added a requirement on Virtio-user that it only supported
> IOVA as VA.
>
> This change introduced a regression for applications using
> Virtio-user and other physical PMDs that require IOVA as PA
> because they don't use an IOMMU.
>
> This patch reverts to the old behaviour, but needed to be
> reworked because of the refactoring that happened in v21.02.
>
> Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
> Cc: stable@dpdk.org
>
> Reported-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
This patch does not apply on next-virtio, but you are best placed to
figure this out :-).
Quick look, only nits otherwise patch lgtm.
> ---
> drivers/net/virtio/virtio.h | 1 +
> drivers/net/virtio/virtio_ethdev.c | 25 +++++++++++++----
> drivers/net/virtio/virtio_rxtx.c | 28 ++++++++++++--------
> drivers/net/virtio/virtio_rxtx_packed.h | 2 +-
> drivers/net/virtio/virtio_rxtx_packed_avx.h | 8 +++---
> drivers/net/virtio/virtio_rxtx_packed_neon.h | 8 +++---
> drivers/net/virtio/virtio_rxtx_simple.h | 3 ++-
> drivers/net/virtio/virtio_user_ethdev.c | 7 ++++-
> drivers/net/virtio/virtqueue.h | 22 ++++++++++++++-
> 9 files changed, 76 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
> index b4f21dc0c7..7118e5d24c 100644
> --- a/drivers/net/virtio/virtio.h
> +++ b/drivers/net/virtio/virtio.h
> @@ -221,6 +221,7 @@ struct virtio_hw {
> uint8_t *rss_key;
> uint64_t req_guest_features;
> struct virtnet_ctl *cvq;
> + bool use_va;
> };
>
> struct virtio_ops {
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index b4bd1f07c1..8055be88a2 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -567,12 +567,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
>
> memset(mz->addr, 0, mz->len);
>
> - vq->vq_ring_mem = mz->iova;
> + if (hw->use_va)
> + vq->vq_ring_mem = (uintptr_t)mz->addr;
> + else
> + vq->vq_ring_mem = mz->iova;
> +
> vq->vq_ring_virt_mem = mz->addr;
> PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64,
> - (uint64_t)mz->iova);
> + (uint64_t)vq->vq_ring_mem);
vq_ring_mem is a rte_iova_t which is a uint64_t.
Cast is unneeded.
> PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
> - (uint64_t)(uintptr_t)mz->addr);
> + (uint64_t)(uintptr_t)vq->vq_ring_virt_mem);
Why not display with %p and drop casts?
>
> virtio_init_vring(vq);
>
> @@ -622,17 +626,28 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
> txvq->port_id = dev->data->port_id;
> txvq->mz = mz;
> txvq->virtio_net_hdr_mz = hdr_mz;
> - txvq->virtio_net_hdr_mem = hdr_mz->iova;
> + if (hw->use_va)
> + txvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
> + else
> + txvq->virtio_net_hdr_mem = hdr_mz->iova;
> } else if (queue_type == VTNET_CQ) {
> cvq = &vq->cq;
> cvq->mz = mz;
> cvq->virtio_net_hdr_mz = hdr_mz;
> - cvq->virtio_net_hdr_mem = hdr_mz->iova;
> + if (hw->use_va)
> + cvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
> + else
> + cvq->virtio_net_hdr_mem = hdr_mz->iova;
> memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
>
> hw->cvq = cvq;
> }
>
> + if (hw->use_va)
> + vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
> + else
> + vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
> +
> if (queue_type == VTNET_TQ) {
> struct virtio_tx_region *txr;
> unsigned int i;
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index b9d7c8d18f..0f3c286438 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -271,10 +271,13 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
> dxp->cookie = (void *)cookies[i];
> dxp->ndescs = 1;
>
> - start_dp[idx].addr = cookies[i]->buf_iova +
> - RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> - start_dp[idx].len = cookies[i]->buf_len -
> - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> + start_dp[idx].addr =
> + VIRTIO_MBUF_ADDR(cookies[i], vq) +
> + RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
A single <tab> is enough indent.
> + start_dp[idx].len =
> + cookies[i]->buf_len -
> + RTE_PKTMBUF_HEADROOM +
> + hw->vtnet_hdr_size;
This part needs no update.
In the end for this hunk, we only need:
- start_dp[idx].addr = cookies[i]->buf_iova +
+ start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookies[i], vq) +
> start_dp[idx].flags = VRING_DESC_F_WRITE;
>
> vq_update_avail_ring(vq, idx);
> @@ -310,10 +313,12 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
> dxp->cookie = (void *)cookie[i];
> dxp->ndescs = 1;
>
> - start_dp[idx].addr = cookie[i]->buf_iova +
> + start_dp[idx].addr =
> + VIRTIO_MBUF_ADDR(cookie[i], vq) +
> RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> - start_dp[idx].len = cookie[i]->buf_len -
> - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> + start_dp[idx].len =
> + cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
> + hw->vtnet_hdr_size;
> start_dp[idx].flags = VRING_DESC_F_WRITE;
> vq->vq_desc_head_idx = start_dp[idx].next;
> vq_update_avail_ring(vq, idx);
Same comment as above, we only need:
- start_dp[idx].addr = cookie[i]->buf_iova +
+ start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookie[i], vq) +
> @@ -336,7 +341,7 @@ virtqueue_refill_single_packed(struct virtqueue *vq,
> uint16_t flags = vq->vq_packed.cached_flags;
> struct virtio_hw *hw = vq->hw;
>
> - dp->addr = cookie->buf_iova +
> + dp->addr = VIRTIO_MBUF_ADDR(cookie, vq) +
> RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> dp->len = cookie->buf_len -
> RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> @@ -482,7 +487,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> else
> virtqueue_xmit_offload(hdr, cookies[i]);
>
> - start_dp[idx].addr = rte_mbuf_data_iova(cookies[i]) - head_size;
> + start_dp[idx].addr =
> + VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq) - head_size;
We could go a little over 80 columns.
> start_dp[idx].len = cookies[i]->data_len + head_size;
> start_dp[idx].flags = 0;
>
> @@ -529,7 +535,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
> else
> virtqueue_xmit_offload(hdr, cookie);
>
> - dp->addr = rte_mbuf_data_iova(cookie) - head_size;
> + dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq) - head_size;
> dp->len = cookie->data_len + head_size;
> dp->id = id;
>
> @@ -617,7 +623,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> virtqueue_xmit_offload(hdr, cookie);
>
> do {
> - start_dp[idx].addr = rte_mbuf_data_iova(cookie);
> + start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> start_dp[idx].len = cookie->data_len;
> if (prepend_header) {
> start_dp[idx].addr -= head_size;
[snip]
> diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
> index f258771fcf..497c9a0e32 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.h
> +++ b/drivers/net/virtio/virtio_rxtx_simple.h
> @@ -43,7 +43,8 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
> p = (uintptr_t)&sw_ring[i]->rearm_data;
> *(uint64_t *)p = rxvq->mbuf_initializer;
>
> - start_dp[i].addr = sw_ring[i]->buf_iova +
> + start_dp[i].addr =
> + VIRTIO_MBUF_ADDR(sw_ring[i], vq) +
This fits in 80 columns.
> RTE_PKTMBUF_HEADROOM - vq->hw->vtnet_hdr_size;
> start_dp[i].len = sw_ring[i]->buf_len -
> RTE_PKTMBUF_HEADROOM + vq->hw->vtnet_hdr_size;
[snip]
--
David Marchand
* Re: [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
2021-09-30 7:26 ` David Marchand
@ 2021-09-30 7:43 ` Maxime Coquelin
0 siblings, 0 replies; 5+ messages in thread
From: Maxime Coquelin @ 2021-09-30 7:43 UTC (permalink / raw)
To: David Marchand; +Cc: Olivier Matz, Xia, Chenbo, dev, dpdk stable
Hi David,
On 9/30/21 09:26, David Marchand wrote:
> Hello Maxime,
>
>
> On Wed, Sep 29, 2021 at 10:18 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>>
>> This patch removes the simplification in Virtio descriptors
>> handling, where their buffer addresses are IOVAs for Virtio
>> PCI devices, and VA-only for Virtio-user devices, which
>> added a requirement on Virtio-user that it only supported
>> IOVA as VA.
>>
>> This change introduced a regression for applications using
>> Virtio-user and other physical PMDs that require IOVA as PA
>> because they don't use an IOMMU.
>>
>> This patch reverts to the old behaviour, but needed to be
>> reworked because of the refactoring that happened in v21.02.
>>
>> Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
>> Cc: stable@dpdk.org
>>
>> Reported-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> This patch does not apply on next-virtio, but you are best placed to
> figure this out :-).
:) I can confirm, I missed my RSS series in between.
> Quick look, only nits otherwise patch lgtm.
>
>
>> ---
>> drivers/net/virtio/virtio.h | 1 +
>> drivers/net/virtio/virtio_ethdev.c | 25 +++++++++++++----
>> drivers/net/virtio/virtio_rxtx.c | 28 ++++++++++++--------
>> drivers/net/virtio/virtio_rxtx_packed.h | 2 +-
>> drivers/net/virtio/virtio_rxtx_packed_avx.h | 8 +++---
>> drivers/net/virtio/virtio_rxtx_packed_neon.h | 8 +++---
>> drivers/net/virtio/virtio_rxtx_simple.h | 3 ++-
>> drivers/net/virtio/virtio_user_ethdev.c | 7 ++++-
>> drivers/net/virtio/virtqueue.h | 22 ++++++++++++++-
>> 9 files changed, 76 insertions(+), 28 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
>> index b4f21dc0c7..7118e5d24c 100644
>> --- a/drivers/net/virtio/virtio.h
>> +++ b/drivers/net/virtio/virtio.h
>> @@ -221,6 +221,7 @@ struct virtio_hw {
>> uint8_t *rss_key;
>> uint64_t req_guest_features;
>> struct virtnet_ctl *cvq;
>> + bool use_va;
>> };
>>
>> struct virtio_ops {
>> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>> index b4bd1f07c1..8055be88a2 100644
>> --- a/drivers/net/virtio/virtio_ethdev.c
>> +++ b/drivers/net/virtio/virtio_ethdev.c
>> @@ -567,12 +567,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
>>
>> memset(mz->addr, 0, mz->len);
>>
>> - vq->vq_ring_mem = mz->iova;
>> + if (hw->use_va)
>> + vq->vq_ring_mem = (uintptr_t)mz->addr;
>> + else
>> + vq->vq_ring_mem = mz->iova;
>> +
>> vq->vq_ring_virt_mem = mz->addr;
>> PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64,
>> - (uint64_t)mz->iova);
>> + (uint64_t)vq->vq_ring_mem);
>
> vq_ring_mem is a rte_iova_t which is a uint64_t.
> Cast is unneeded.
>
>> PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
>> - (uint64_t)(uintptr_t)mz->addr);
>> + (uint64_t)(uintptr_t)vq->vq_ring_virt_mem);
>
> Why not display with %p and drop casts?
Agree, I'll rework these undeed casts.
>
>>
>> virtio_init_vring(vq);
>>
>> @@ -622,17 +626,28 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
>> txvq->port_id = dev->data->port_id;
>> txvq->mz = mz;
>> txvq->virtio_net_hdr_mz = hdr_mz;
>> - txvq->virtio_net_hdr_mem = hdr_mz->iova;
>> + if (hw->use_va)
>> + txvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
>> + else
>> + txvq->virtio_net_hdr_mem = hdr_mz->iova;
>> } else if (queue_type == VTNET_CQ) {
>> cvq = &vq->cq;
>> cvq->mz = mz;
>> cvq->virtio_net_hdr_mz = hdr_mz;
>> - cvq->virtio_net_hdr_mem = hdr_mz->iova;
>> + if (hw->use_va)
>> + cvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
>> + else
>> + cvq->virtio_net_hdr_mem = hdr_mz->iova;
>> memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
>>
>> hw->cvq = cvq;
>> }
>>
>> + if (hw->use_va)
>> + vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
>> + else
>> + vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
>> +
>> if (queue_type == VTNET_TQ) {
>> struct virtio_tx_region *txr;
>> unsigned int i;
>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> index b9d7c8d18f..0f3c286438 100644
>> --- a/drivers/net/virtio/virtio_rxtx.c
>> +++ b/drivers/net/virtio/virtio_rxtx.c
>> @@ -271,10 +271,13 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
>> dxp->cookie = (void *)cookies[i];
>> dxp->ndescs = 1;
>>
>> - start_dp[idx].addr = cookies[i]->buf_iova +
>> - RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
>> - start_dp[idx].len = cookies[i]->buf_len -
>> - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
>> + start_dp[idx].addr =
>> + VIRTIO_MBUF_ADDR(cookies[i], vq) +
>> + RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
>
> A single <tab> is enough indent.
Yes, clean-ups I did got re-introduced with the revert.
I will rework them in the next revision (and will add a few more cleanups I
missed initially).
Thanks,
Maxime
* Re: [dpdk-stable] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
2021-09-29 21:15 ` Olivier Matz
@ 2021-09-30 8:25 ` Maxime Coquelin
0 siblings, 0 replies; 5+ messages in thread
From: Maxime Coquelin @ 2021-09-30 8:25 UTC (permalink / raw)
To: Olivier Matz; +Cc: david.marchand, chenbo.xia, dev, stable
On 9/29/21 23:15, Olivier Matz wrote:
> Hi Maxime,
>
> On Wed, Sep 29, 2021 at 10:17:39PM +0200, Maxime Coquelin wrote:
>> This patch removes the simplification in Virtio descriptors
>> handling, where their buffer addresses are IOVAs for Virtio
>> PCI devices, and VA-only for Virtio-user devices, which
>> added a requirement on Virtio-user that it only supported
>> IOVA as VA.
>>
>> This change introduced a regression for applications using
>> Virtio-user and other physical PMDs that require IOVA as PA
>> because they don't use an IOMMU.
>>
>> This patch reverts to the old behaviour, but needed to be
>> reworked because of the refactoring that happened in v21.02.
>>
>> Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
>> Cc: stable@dpdk.org
>>
>> Reported-by: Olivier Matz <olivier.matz@6wind.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>
> Tested-by: Olivier Matz <olivier.matz@6wind.com>
>
> Many thanks for your quick solution on this!
>
You're welcome, thanks for reporting.
I just noticed your reply to v1, so I missed reporting your Tested-by on
v2 (which only has cosmetic changes). Feel free to add it.
Maxime