From: David Marchand
Date: Thu, 30 Sep 2021 09:26:20 +0200
To: Maxime Coquelin
Cc: Olivier Matz, "Xia, Chenbo", dev, dpdk stable
Subject: Re: [dpdk-dev] [PATCH] net/virtio: revert forcing IOVA as VA mode for virtio-user
In-Reply-To: <20210929201739.176306-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

Hello Maxime,

On Wed, Sep 29, 2021 at 10:18 PM Maxime Coquelin wrote:
>
> This patch removes the simplification in Virtio descriptors
> handling, where their buffer addresses are IOVAs for Virtio
> PCI devices, and VA-only for Virtio-user devices, which
> added a requirement on Virtio-user that it only supported
> IOVA as VA.
>
> This change introduced a regression for applications using
> Virtio-user and other physical PMDs that require IOVA as PA
> because they don't use an IOMMU.
>
> This patch reverts to the old behaviour, but needed to be
> reworked because of the refactoring that happened in v21.02.
>
> Fixes: 17043a2909bb ("net/virtio: force IOVA as VA mode for virtio-user")
> Cc: stable@dpdk.org
>
> Reported-by: Olivier Matz
> Signed-off-by: Maxime Coquelin

This patch does not apply on next-virtio, but you are best placed to
figure this out :-).

From a quick look, only nits; otherwise the patch lgtm.

> ---
>  drivers/net/virtio/virtio.h                  |  1 +
>  drivers/net/virtio/virtio_ethdev.c           | 25 +++++++++++++----
>  drivers/net/virtio/virtio_rxtx.c             | 28 ++++++++++++--------
>  drivers/net/virtio/virtio_rxtx_packed.h      |  2 +-
>  drivers/net/virtio/virtio_rxtx_packed_avx.h  |  8 +++---
>  drivers/net/virtio/virtio_rxtx_packed_neon.h |  8 +++---
>  drivers/net/virtio/virtio_rxtx_simple.h      |  3 ++-
>  drivers/net/virtio/virtio_user_ethdev.c      |  7 ++++-
>  drivers/net/virtio/virtqueue.h               | 22 ++++++++++++++-
>  9 files changed, 76 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio.h b/drivers/net/virtio/virtio.h
> index b4f21dc0c7..7118e5d24c 100644
> --- a/drivers/net/virtio/virtio.h
> +++ b/drivers/net/virtio/virtio.h
> @@ -221,6 +221,7 @@ struct virtio_hw {
>  	uint8_t *rss_key;
>  	uint64_t req_guest_features;
>  	struct virtnet_ctl *cvq;
> +	bool use_va;
>  };
>
>  struct virtio_ops {
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index b4bd1f07c1..8055be88a2 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -567,12 +567,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
>
>  	memset(mz->addr, 0, mz->len);
>
> -	vq->vq_ring_mem = mz->iova;
> +	if (hw->use_va)
> +		vq->vq_ring_mem = (uintptr_t)mz->addr;
> +	else
> +		vq->vq_ring_mem = mz->iova;
> +
>  	vq->vq_ring_virt_mem = mz->addr;
>  	PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64,
> -			(uint64_t)mz->iova);
> +			(uint64_t)vq->vq_ring_mem);

vq_ring_mem is a rte_iova_t, which is a uint64_t: the cast is unneeded.

>  	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64,
> -			(uint64_t)(uintptr_t)mz->addr);
> +			(uint64_t)(uintptr_t)vq->vq_ring_virt_mem);

Why not display with %p and drop the casts?
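Something like this (untested sketch; vq_ring_mem is already a
uint64_t, and %p takes the pointer directly):

	/* No cast needed for the rte_iova_t; %p prints the pointer as-is. */
	PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);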
>
>  	virtio_init_vring(vq);
>
> @@ -622,17 +626,28 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t queue_idx)
>  		txvq->port_id = dev->data->port_id;
>  		txvq->mz = mz;
>  		txvq->virtio_net_hdr_mz = hdr_mz;
> -		txvq->virtio_net_hdr_mem = hdr_mz->iova;
> +		if (hw->use_va)
> +			txvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
> +		else
> +			txvq->virtio_net_hdr_mem = hdr_mz->iova;
>  	} else if (queue_type == VTNET_CQ) {
>  		cvq = &vq->cq;
>  		cvq->mz = mz;
>  		cvq->virtio_net_hdr_mz = hdr_mz;
> -		cvq->virtio_net_hdr_mem = hdr_mz->iova;
> +		if (hw->use_va)
> +			cvq->virtio_net_hdr_mem = (uintptr_t)hdr_mz->addr;
> +		else
> +			cvq->virtio_net_hdr_mem = hdr_mz->iova;
>  		memset(cvq->virtio_net_hdr_mz->addr, 0, rte_mem_page_size());
>
>  		hw->cvq = cvq;
>  	}
>
> +	if (hw->use_va)
> +		vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_addr);
> +	else
> +		vq->mbuf_addr_offset = offsetof(struct rte_mbuf, buf_iova);
> +
>  	if (queue_type == VTNET_TQ) {
>  		struct virtio_tx_region *txr;
>  		unsigned int i;
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index b9d7c8d18f..0f3c286438 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -271,10 +271,13 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
>  		dxp->cookie = (void *)cookies[i];
>  		dxp->ndescs = 1;
>
> -		start_dp[idx].addr = cookies[i]->buf_iova +
> -			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> -		start_dp[idx].len = cookies[i]->buf_len -
> -			RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> +		start_dp[idx].addr =
> +				VIRTIO_MBUF_ADDR(cookies[i], vq) +
> +				RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;

A single indent is enough.

> +		start_dp[idx].len =
> +				cookies[i]->buf_len -
> +				RTE_PKTMBUF_HEADROOM +
> +				hw->vtnet_hdr_size;

This part needs no update. In the end, for this hunk we only need:

-		start_dp[idx].addr = cookies[i]->buf_iova +
+		start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookies[i], vq) +

>  		start_dp[idx].flags = VRING_DESC_F_WRITE;
>
>  		vq_update_avail_ring(vq, idx);
> @@ -310,10 +313,12 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
>  		dxp->cookie = (void *)cookie[i];
>  		dxp->ndescs = 1;
>
> -		start_dp[idx].addr = cookie[i]->buf_iova +
> +		start_dp[idx].addr =
> +			VIRTIO_MBUF_ADDR(cookie[i], vq) +
>  			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> -		start_dp[idx].len = cookie[i]->buf_len -
> -			RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> +		start_dp[idx].len =
> +			cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
> +			hw->vtnet_hdr_size;
>  		start_dp[idx].flags = VRING_DESC_F_WRITE;
>  		vq->vq_desc_head_idx = start_dp[idx].next;
>  		vq_update_avail_ring(vq, idx);

Same comment as above: we only need

-		start_dp[idx].addr = cookie[i]->buf_iova +
+		start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookie[i], vq) +

> @@ -336,7 +341,7 @@ virtqueue_refill_single_packed(struct virtqueue *vq,
>  	uint16_t flags = vq->vq_packed.cached_flags;
>  	struct virtio_hw *hw = vq->hw;
>
> -	dp->addr = cookie->buf_iova +
> +	dp->addr = VIRTIO_MBUF_ADDR(cookie, vq) +
>  		RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
>  	dp->len = cookie->buf_len -
>  		RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> @@ -482,7 +487,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
>  		else
>  			virtqueue_xmit_offload(hdr, cookies[i]);
>
> -		start_dp[idx].addr = rte_mbuf_data_iova(cookies[i]) - head_size;
> +		start_dp[idx].addr =
> +			VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq) - head_size;

We could go a little over 80 columns.
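Side note, since the virtqueue.h hunk is snipped below: if I read the
patch right, VIRTIO_MBUF_ADDR simply dereferences the mbuf field
selected at queue setup via the offsetof() assignments above. A sketch
only (the actual definitions live in the snipped hunk, and the DPDK
struct rte_mbuf / struct virtqueue types are assumed):

	/* Sketch, not the patch itself: mbuf_addr_offset points at
	 * buf_addr (use_va) or buf_iova, so the hot path does a single
	 * load with no branch on hw->use_va.
	 */
	#define VIRTIO_MBUF_ADDR(mb, vq) \
		((uint64_t)(*(uintptr_t *)((uintptr_t)(mb) + \
				(vq)->mbuf_addr_offset)))
	#define VIRTIO_MBUF_DATA_DMA_ADDR(mb, vq) \
		(VIRTIO_MBUF_ADDR(mb, vq) + (mb)->data_off)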
>  		start_dp[idx].len = cookies[i]->data_len + head_size;
>  		start_dp[idx].flags = 0;
>
> @@ -529,7 +535,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
>  	else
>  		virtqueue_xmit_offload(hdr, cookie);
>
> -	dp->addr = rte_mbuf_data_iova(cookie) - head_size;
> +	dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq) - head_size;
>  	dp->len = cookie->data_len + head_size;
>  	dp->id = id;
>
> @@ -617,7 +623,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>  		virtqueue_xmit_offload(hdr, cookie);
>
>  	do {
> -		start_dp[idx].addr = rte_mbuf_data_iova(cookie);
> +		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
>  		start_dp[idx].len = cookie->data_len;
>  		if (prepend_header) {
>  			start_dp[idx].addr -= head_size;

[snip]

> diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
> index f258771fcf..497c9a0e32 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple.h
> +++ b/drivers/net/virtio/virtio_rxtx_simple.h
> @@ -43,7 +43,8 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
>  		p = (uintptr_t)&sw_ring[i]->rearm_data;
>  		*(uint64_t *)p = rxvq->mbuf_initializer;
>
> -		start_dp[i].addr = sw_ring[i]->buf_iova +
> +		start_dp[i].addr =
> +			VIRTIO_MBUF_ADDR(sw_ring[i], vq) +

This fits in 80 columns.

>  			RTE_PKTMBUF_HEADROOM - vq->hw->vtnet_hdr_size;
>  		start_dp[i].len = sw_ring[i]->buf_len -
>  			RTE_PKTMBUF_HEADROOM + vq->hw->vtnet_hdr_size;

[snip]

-- 
David Marchand