From mboxrd@z Thu Jan  1 00:00:00 1970
From: Srujana Challa <schalla@marvell.com>
Subject: [PATCH] net/virtio-user: support IOVA as PA mode for vDPA backend
Date: Mon, 26 Feb 2024 15:34:39 +0530
Message-ID: <20240226100439.2127008-1-schalla@marvell.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Disable the use_va flag for the vDPA backend type and fix the issues with
shadow control command processing when it is disabled. This makes the
virtio-user driver work in IOVA as PA mode with the vDPA backend.
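The crux of the change is which address the driver hands to the vhost
backend. A minimal sketch of that distinction follows, using a made-up
struct and helper purely for illustration (the real logic lives in
virtio_user_kick_queue() and virtio_user_pmd_probe() below):

/*
 * Illustrative sketch only, not part of the patch: with use_va the backend
 * is given host virtual addresses; the vhost-vDPA backend is now given
 * IOVAs so that IOVA as PA mode also works.
 */
#include <stdbool.h>
#include <stdint.h>

struct ring_mem {          /* hypothetical holder for one vring's memory */
	void *va;          /* host virtual address of the ring memory */
	uint64_t iova;     /* IO virtual address (equals PA in IOVA as PA mode) */
};

static inline uint64_t
ring_backend_addr(const struct ring_mem *mem, bool use_va)
{
	/* vhost-user style backends consume process VAs; vhost-vDPA
	 * programs DMA, so it must be given IOVAs instead. */
	return use_va ? (uint64_t)(uintptr_t)mem->va : mem->iova;
}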
Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/virtio/virtio_ring.h              | 12 ++-
 .../net/virtio/virtio_user/virtio_user_dev.c  | 86 ++++++++++---------
 drivers/net/virtio/virtio_user_ethdev.c       | 10 ++-
 drivers/net/virtio/virtqueue.c                |  4 +-
 4 files changed, 65 insertions(+), 47 deletions(-)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index e848c0b73b..998605dbb5 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -83,6 +83,7 @@ struct vring_packed_desc_event {
 
 struct vring_packed {
 	unsigned int num;
+	rte_iova_t desc_iova;
 	struct vring_packed_desc *desc;
 	struct vring_packed_desc_event *driver;
 	struct vring_packed_desc_event *device;
@@ -90,6 +91,7 @@ struct vring_packed {
 
 struct vring {
 	unsigned int num;
+	rte_iova_t desc_iova;
 	struct vring_desc *desc;
 	struct vring_avail *avail;
 	struct vring_used *used;
@@ -149,11 +151,12 @@ vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
 	return size;
 }
 
 static inline void
-vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
-		 unsigned int num)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+		 unsigned long align, unsigned int num)
 {
 	vr->num = num;
 	vr->desc = (struct vring_desc *) p;
+	vr->desc_iova = iova;
 	vr->avail = (struct vring_avail *) (p +
 		num * sizeof(struct vring_desc));
 	vr->used = (void *)
@@ -161,11 +164,12 @@ vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
 }
 
 static inline void
-vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
-		  unsigned int num)
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+		  unsigned long align, unsigned int num)
 {
 	vr->num = num;
 	vr->desc = (struct vring_packed_desc *)p;
+	vr->desc_iova = iova;
 	vr->driver = (struct vring_packed_desc_event *)(p +
 			vr->num * sizeof(struct vring_packed_desc));
 	vr->device = (struct vring_packed_desc_event *)
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index d395fc1676..55e71e4842 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -62,6 +62,7 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	struct vhost_vring_state state;
 	struct vring *vring = &dev->vrings.split[queue_sel];
 	struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+	uint64_t desc_addr, avail_addr, used_addr;
 	struct vhost_vring_addr addr = {
 		.index = queue_sel,
 		.log_guest_addr = 0,
@@ -81,16 +82,23 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 	}
 
 	if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
-		addr.desc_user_addr =
-				(uint64_t)(uintptr_t)pq_vring->desc;
-		addr.avail_user_addr =
-				(uint64_t)(uintptr_t)pq_vring->driver;
-		addr.used_user_addr =
-				(uint64_t)(uintptr_t)pq_vring->device;
+		desc_addr = pq_vring->desc_iova;
+		avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+		used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+					   VIRTIO_VRING_ALIGN);
+
+		addr.desc_user_addr = desc_addr;
+		addr.avail_user_addr = avail_addr;
+		addr.used_user_addr = used_addr;
 	} else {
-		addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc;
-		addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail;
-		addr.used_user_addr = (uint64_t)(uintptr_t)vring->used;
+		desc_addr = vring->desc_iova;
+		avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+		used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+					   VIRTIO_VRING_ALIGN);
+
+		addr.desc_user_addr = desc_addr;
+		addr.avail_user_addr = avail_addr;
+		addr.used_user_addr = used_addr;
 	}
 
 	state.index = queue_sel;
@@ -885,11 +893,11 @@ static uint32_t
 virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring,
 		uint16_t idx_hdr)
 {
-	struct virtio_net_ctrl_hdr *hdr;
 	virtio_net_ctrl_ack status = ~0;
-	uint16_t i, idx_data, idx_status;
+	uint16_t i, idx_data;
 	uint32_t n_descs = 0;
 	int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0;
+	struct virtio_pmd_ctrl *ctrl;
 
 	/* locate desc for header, data, and status */
 	idx_data = vring->desc[idx_hdr].next;
@@ -902,34 +910,33 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri
 		n_descs++;
 	}
 
-	/* locate desc for status */
-	idx_status = i;
 	n_descs++;
 
-	hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
-	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
-	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
+	/* Access control command via VA from CVQ */
+	ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
+	if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+	    ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues;
 
-		queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr;
+		queues = *(uint16_t *)ctrl->data;
 		status = virtio_user_handle_mq(dev, queues);
-	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
+	} else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+		   ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+		rss = (struct virtio_net_ctrl_rss *)ctrl->data;
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
-	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
-		   hdr->class == VIRTIO_NET_CTRL_MAC ||
-		   hdr->class == VIRTIO_NET_CTRL_VLAN) {
+	} else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
+		   ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
+		   ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
 		status = 0;
 	}
 
 	if (!status && dev->scvq)
-		status = virtio_send_command(&dev->scvq->cq,
-				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+		status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status;
+	ctrl->status = status;
 
 	return n_descs;
 }
@@ -948,7 +955,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 				   struct vring_packed *vring,
 				   uint16_t idx_hdr)
 {
-	struct virtio_net_ctrl_hdr *hdr;
+	struct virtio_pmd_ctrl *ctrl;
 	virtio_net_ctrl_ack status = ~0;
 	uint16_t idx_data, idx_status;
 	/* initialize to one, header is first */
@@ -971,32 +978,31 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev,
 		n_descs++;
 	}
 
-	hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr;
-	if (hdr->class == VIRTIO_NET_CTRL_MQ &&
-	    hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
+	/* Access control command via VA from CVQ */
+	ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr;
+	if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+	    ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) {
 		uint16_t queues;
 
-		queues = *(uint16_t *)(uintptr_t)
-				vring->desc[idx_data].addr;
+		queues = *(uint16_t *)ctrl->data;
 		status = virtio_user_handle_mq(dev, queues);
-	} else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
+	} else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ &&
+		   ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) {
 		struct virtio_net_ctrl_rss *rss;
 
-		rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr;
+		rss = (struct virtio_net_ctrl_rss *)ctrl->data;
 		status = virtio_user_handle_mq(dev, rss->max_tx_vq);
-	} else if (hdr->class == VIRTIO_NET_CTRL_RX ||
-		   hdr->class == VIRTIO_NET_CTRL_MAC ||
-		   hdr->class == VIRTIO_NET_CTRL_VLAN) {
+	} else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX ||
+		   ctrl->hdr.class == VIRTIO_NET_CTRL_MAC ||
+		   ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) {
 		status = 0;
 	}
 
 	if (!status && dev->scvq)
-		status = virtio_send_command(&dev->scvq->cq,
-				(struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen);
+		status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen);
 
 	/* Update status */
-	*(virtio_net_ctrl_ack *)(uintptr_t)
-			vring->desc[idx_status].addr = status;
+	ctrl->status = status;
 
 	/* Update used descriptor */
 	vring->desc[idx_hdr].id = vring->desc[idx_status].id;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index bf9de36d8f..ae6593ba0b 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -198,6 +198,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
 		       sizeof(struct vring_packed_desc_event),
 		       VIRTIO_VRING_ALIGN);
 	vring->num = vq->vq_nentries;
+	vring->desc_iova = vq->vq_ring_mem;
 	vring->desc = (void *)(uintptr_t)desc_addr;
 	vring->driver = (void *)(uintptr_t)avail_addr;
 	vring->device = (void *)(uintptr_t)used_addr;
@@ -221,6 +222,7 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
 							 VIRTIO_VRING_ALIGN);
 
 	dev->vrings.split[queue_idx].num = vq->vq_nentries;
+	dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
 	dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
 	dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
 	dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
@@ -689,7 +691,13 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev)
 	 * Virtio-user requires using virtual addresses for the descriptors
 	 * buffers, whatever other devices require
 	 */
-	hw->use_va = true;
+	if (backend_type == VIRTIO_USER_BACKEND_VHOST_VDPA)
+		/* VDPA backend requires using iova for the buffers to make it
+		 * work in IOVA as PA mode also.
+		 */
+		hw->use_va = false;
+	else
+		hw->use_va = true;
 
 	/* previously called by pci probing for physical dev */
 	if (eth_virtio_dev_init(eth_dev) < 0) {
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 6f419665f1..cf46abfd06 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -282,13 +282,13 @@ virtio_init_vring(struct virtqueue *vq)
 	vq->vq_free_cnt = vq->vq_nentries;
 	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
 	if (virtio_with_packed_queue(vq->hw)) {
-		vring_init_packed(&vq->vq_packed.ring, ring_mem,
+		vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
 				  VIRTIO_VRING_ALIGN, size);
 		vring_desc_init_packed(vq, size);
 	} else {
 		struct vring *vr = &vq->vq_split.ring;
 
-		vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size);
+		vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_VRING_ALIGN, size);
 		vring_desc_init_split(vr->desc, size);
 	}
 	/*
-- 
2.25.1