From mboxrd@z Thu Jan 1 00:00:00 1970
From: Srujana Challa
To: dev@dpdk.org
Subject: [PATCH v2] net/virtio_user: fix issue with converting cq descriptor IOVA address to VA
Date: Thu, 11 Jul 2024 18:14:36 +0530
Message-ID: <20240711124436.2383232-1-schalla@marvell.com>
List-Id: DPDK patches and discussions

This patch modifies the code to convert descriptor buffer IOVA addresses
to virtual addresses only when the use_va flag is false. It resolves a
segmentation fault with the vhost-user backend that occurs while
processing the shadow control queue.

Fixes: 67e9e504dae2 ("net/virtio_user: convert cq descriptor IOVA address to Virtual address")

Reported-by: David Marchand
Signed-off-by: Srujana Challa
---
v2:
- Added Reported-by tag.
 .../net/virtio/virtio_user/virtio_user_dev.c | 28 +++++++++++--------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index fed66d2ae9..94e0ddcb94 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -905,12 +905,12 @@ virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs)
 #define CVQ_MAX_DATA_DESCS 32
 
 static inline void *
-virtio_user_iova2virt(rte_iova_t iova)
+virtio_user_iova2virt(rte_iova_t iova, bool use_va)
 {
-	if (rte_eal_iova_mode() == RTE_IOVA_VA)
-		return (void *)(uintptr_t)iova;
-	else
+	if (rte_eal_iova_mode() == RTE_IOVA_PA && !use_va)
 		return rte_mem_iova2virt(iova);
+	else
+		return (void *)(uintptr_t)iova;
 }
 
 static uint32_t
@@ -922,6 +922,7 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	uint16_t i, idx_data, idx_status;
 	uint32_t n_descs = 0;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = vring->desc[idx_hdr].next;
@@ -938,18 +939,18 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 	idx_status = i;
 	n_descs++;
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -962,7 +963,8 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 			(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	return n_descs;
 }
@@ -987,6 +989,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 	/* initialize to one, header is first */
 	uint32_t n_descs = 1;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	bool use_va = dev->hw.use_va;
 
 	/* locate desc for header, data, and status */
 	idx_data = idx_hdr + 1;
@@ -1004,18 +1007,18 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 		n_descs++;
 	}
 
-	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr);
+	hdr = virtio_user_iova2virt(vring->desc[idx_hdr].addr, use_va);
 	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
 	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues, *addr;
 
-		addr = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		addr = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		queues = *addr;
 		status = virtio_user_handle_mq(dev, queues);
 	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = virtio_user_iova2virt(vring->desc[idx_data].addr);
+		rss = virtio_user_iova2virt(vring->desc[idx_data].addr, use_va);
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
 	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
 		   hdr->class == VIRTIO_NET_CTRL_MAC ||
@@ -1028,7 +1031,8 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 			(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)virtio_user_iova2virt(vring->desc[idx_status].addr) = status;
+	*(virtio_net_ctrl_ack *)
+		virtio_user_iova2virt(vring->desc[idx_status].addr, use_va) = status;
 
 	/* Update used descriptor */
 	vring->desc[idx_hdr].id = vring->desc[idx_status].id;
-- 
2.25.1