From: Srujana Challa
Subject: [PATCH dpdk-next-virtio] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
Date: Thu, 11 Jul 2024 17:39:35 +0530
Message-ID: <20240711120936.2382969-1-schalla@marvell.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

This patch modifies the code to convert descriptor buffer IOVA
addresses to virtual addresses only when the use_va flag is false.
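In other words, after this change a control-queue descriptor address is
resolved as in the sketch below. This is an illustrative, standalone
restatement only (the helper name cq_iova2virt is a placeholder; the
authoritative change is the first hunk of the diff that follows):

    #include <stdbool.h>
    #include <stdint.h>

    #include <rte_eal.h>     /* rte_eal_iova_mode(), RTE_IOVA_PA */
    #include <rte_memory.h>  /* rte_iova_t, rte_mem_iova2virt() */

    /*
     * Resolve a control-queue descriptor buffer address to a process VA.
     * A translation is needed only when the EAL runs in IOVA-as-PA mode
     * and the vring was not set up with virtual addresses (use_va is
     * false). In every other case the descriptor already holds a VA and
     * is used as-is.
     */
    static inline void *
    cq_iova2virt(rte_iova_t addr, bool use_va)
    {
            if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
                    return rte_mem_iova2virt(addr);
            return (void *)(uintptr_t)addr;
    }

In the driver itself, use_va is taken from dev->hw.use_va in both the
split and packed control-message handlers, as shown in the diff.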
Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA address to Virtual address")
Signed-off-by: Srujana Challa
---
 .../net/virtio/virtio_user/virtio_user_dev.c | 28 +++++++++++--------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fed66d2ae9..94e0ddcb94 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -905,12 +905,12 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
 #define CVQ_MAX_DATA_DESCS 32
 
 static inline void *
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(rte_iova_t iova, bool use_va)
 {
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		return (void *)(uintptr_t)iova;
-	else
+	if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
 		return rte_mem_iova2virt(iova);
+	else
+		return (void *)(uintptr_t)iova;
 }
 
 static uint32_t
@@ -922,6 +922,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	uint16_t i, idx_data, idx_status;
 	uint32_t n_descs = 0;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = vring->desc[idx_hdr].next;
@@ -938,18 +939,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	idx_status = i;
 	n_descs++;
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -962,7 +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 			(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	return n_descs;
 }
@@ -987,6 +989,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 	/* initialize to one, header is first */
 	uint32_t n_descs = 1;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = idx_hdr + 1;
@@ -1004,18 +1007,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 		n_descs++;
 	}
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1028,7 +1031,8 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 			(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	/* Update used descriptor */
 	vring->desc[idx_hdr].id = vring->desc[idx_status].id;
-- 
2.25.1