DPDK patches and discussions
From: Tiwei Bie <tiwei.bie@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: zhihong.wang@intel.com, amorenoz@redhat.com,
	xiao.w.wang@intel.com, dev@dpdk.org, jfreimann@redhat.com,
	stable@dpdk.org, jasowang@redhat.com
Subject: Re: [dpdk-dev] [PATCH 10/15] net/virtio: add vDPA op to configure and start the device
Date: Tue, 3 Sep 2019 16:49:09 +0800	[thread overview]
Message-ID: <20190903084909.GA9622@___> (raw)
In-Reply-To: <b7b58dd1-7ac5-9ba3-32cd-d749d695617b@redhat.com>

On Tue, Sep 03, 2019 at 09:40:25AM +0200, Maxime Coquelin wrote:
> On 9/3/19 7:30 AM, Tiwei Bie wrote:
> > On Thu, Aug 29, 2019 at 09:59:55AM +0200, Maxime Coquelin wrote:
> >> In order to support multi-queue, we need to implement the control
> >> path. The problem is that both the Vhost-user master and slave use
> >> VAs in their process address spaces as IOVAs, which creates
> >> collisions between the data ring IOVAs managed by the master and
> >> the Control ring IOVAs. The trick here is to remap the Control
> >> ring memory to another range, once the slave is aware of the
> >> master's ranges.
> >>
> >> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >> ---
> >>  drivers/net/virtio/virtio_vdpa.c | 255 +++++++++++++++++++++++++++++++
> >>  1 file changed, 255 insertions(+)
> >>
> >> diff --git a/drivers/net/virtio/virtio_vdpa.c b/drivers/net/virtio/virtio_vdpa.c
> >> index fc52a8e92..13b4dd07d 100644
> >> --- a/drivers/net/virtio/virtio_vdpa.c
> >> +++ b/drivers/net/virtio/virtio_vdpa.c
> >> @@ -106,6 +106,127 @@ find_internal_resource_by_dev(struct rte_pci_device *pdev)
> >>  	return list;
> >>  }
> >>  
> >> +static int
> >> +virtio_vdpa_dma_map_ctrl_queue(struct virtio_vdpa_device *dev, int do_map,
> >> +		uint64_t iova)
> >> +{
> >> +	const struct rte_memzone *mz;
> >> +	int ret;
> >> +
> >> +	/*
> >> +	 * IOVAs are process VAs. We cannot use them as-is, because the Data
> >> +	 * and Control paths run in different processes, which may (and does)
> >> +	 * lead to collisions. The trick here is to fix up the Ctrl path IOVAs
> >> +	 * so that they start after the Data path ranges.
> >> +	 */
> >> +	if (do_map) {
> >> +		mz = dev->cvq->cq.mz;
> >> +		ret = rte_vfio_container_dma_map(dev->vfio_container_fd,
> >> +				(uint64_t)(uintptr_t)mz->addr,
> >> +				iova, mz->len);
> >> +		if (ret < 0) {
> >> +			DRV_LOG(ERR, "Failed to map ctrl ring (%d)", ret);
> >> +			return ret;
> >> +		}
> >> +
> >> +		dev->cvq->vq_ring_mem = iova;
> >> +		iova += mz->len;
> >> +
> >> +		mz = dev->cvq->cq.virtio_net_hdr_mz;
> >> +		ret = rte_vfio_container_dma_map(dev->vfio_container_fd,
> >> +				(uint64_t)(uintptr_t)mz->addr,
> >> +				iova, mz->len);
> >> +		if (ret < 0) {
> >> +			DRV_LOG(ERR, "Failed to map ctrl headers (%d)", ret);
> >> +			return ret;
> >> +		}
> > 
> > This will allow the guest to access the cq.mz and cq.virtio_net_hdr_mz
> > memory via the device, which may pose potential risks.
> 
> I get what you mean, but I'm not sure how we could avoid that.
> AFAIU, we need to map the control queue in the device's IOMMU, otherwise
> how could the host (in the case of a virtual device) or the NIC (in the
> case of Virtio offload) access the ring?
> Any thoughts?

I also don't see a way to avoid that. That's why I said in the
thread below that the control-queue-based interface doesn't seem
like a very good interface for a backend device:

https://lkml.org/lkml/2019/9/2/934

In the IFCVF NIC, we added an MMIO-based interface to replace the
control queue for the multiqueue setup in vDPA mode.

Jason is proposing some changes to make the virtio device suitable
as a backend device. I'm not sure whether it's possible to cover
this case as well.

Regards,
Tiwei

> 
> Thanks,
> Maxime
> > Regards,
> > Tiwei
> > 
> >> +
> >> +		dev->cvq->cq.virtio_net_hdr_mem = iova;
> >> +	} else {
> >> +		mz = dev->cvq->cq.mz;
> >> +		ret = rte_vfio_container_dma_unmap(dev->vfio_container_fd,
> >> +				(uint64_t)(uintptr_t)mz->addr,
> >> +				iova, mz->len);
> >> +		if (ret < 0) {
> >> +			DRV_LOG(ERR, "Failed to unmap ctrl ring (%d)", ret);
> >> +			return ret;
> >> +		}
> >> +
> >> +		dev->cvq->vq_ring_mem = 0;
> >> +		iova += mz->len;
> >> +
> >> +		mz = dev->cvq->cq.virtio_net_hdr_mz;
> >> +		ret = rte_vfio_container_dma_unmap(dev->vfio_container_fd,
> >> +				(uint64_t)(uintptr_t)mz->addr,
> >> +				iova, mz->len);
> >> +		if (ret < 0) {
> >> +			DRV_LOG(ERR, "Failed to unmap ctrl headers (%d)", ret);
> >> +			return ret;
> >> +		}
> >> +
> >> +		dev->cvq->cq.virtio_net_hdr_mem = 0;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static int
> >> +virtio_vdpa_dma_map(struct virtio_vdpa_device *dev, int do_map)
> >> +{
> >> +	uint32_t i;
> >> +	int ret;
> >> +	struct rte_vhost_memory *mem = NULL;
> >> +	int vfio_container_fd;
> >> +	uint64_t avail_iova = 0;
> >> +
> >> +	ret = rte_vhost_get_mem_table(dev->vid, &mem);
> >> +	if (ret < 0 || !mem) {
> >> +		DRV_LOG(ERR, "failed to get VM memory layout.");
> >> +		return ret;
> >> +	}
> >> +
> >> +	vfio_container_fd = dev->vfio_container_fd;
> >> +
> >> +	for (i = 0; i < mem->nregions; i++) {
> >> +		struct rte_vhost_mem_region *reg;
> >> +
> >> +		reg = &mem->regions[i];
> >> +		DRV_LOG(INFO, "%s, region %u: HVA 0x%" PRIx64 ", "
> >> +			"GPA 0x%" PRIx64 ", size 0x%" PRIx64 ".",
> >> +			do_map ? "DMA map" : "DMA unmap", i,
> >> +			reg->host_user_addr, reg->guest_phys_addr, reg->size);
> >> +
> >> +		if (reg->guest_phys_addr + reg->size > avail_iova)
> >> +			avail_iova = reg->guest_phys_addr + reg->size;
> >> +
> >> +		if (do_map) {
> >> +			ret = rte_vfio_container_dma_map(vfio_container_fd,
> >> +				reg->host_user_addr, reg->guest_phys_addr,
> >> +				reg->size);
> >> +			if (ret < 0) {
> >> +				DRV_LOG(ERR, "DMA map failed.");
> >> +				goto exit;
> >> +			}
> >> +		} else {
> >> +			ret = rte_vfio_container_dma_unmap(vfio_container_fd,
> >> +				reg->host_user_addr, reg->guest_phys_addr,
> >> +				reg->size);
> >> +			if (ret < 0) {
> >> +				DRV_LOG(ERR, "DMA unmap failed.");
> >> +				goto exit;
> >> +			}
> >> +		}
> >> +	}
> >> +
> >> +	if (dev->cvq)
> >> +		ret = virtio_vdpa_dma_map_ctrl_queue(dev, do_map, avail_iova);
> >> +
> >> +exit:
> >> +	free(mem);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >>  static int
> >>  virtio_vdpa_vfio_setup(struct virtio_vdpa_device *dev)
> >>  {
> >> @@ -216,10 +337,144 @@ virtio_vdpa_get_protocol_features(int did __rte_unused, uint64_t *features)
> >>  	return 0;
> >>  }
> >>  
> >> +static uint64_t
> >> +hva_to_gpa(int vid, uint64_t hva)
> >> +{
> >> +	struct rte_vhost_memory *mem = NULL;
> >> +	struct rte_vhost_mem_region *reg;
> >> +	uint32_t i;
> >> +	uint64_t gpa = 0;
> >> +
> >> +	if (rte_vhost_get_mem_table(vid, &mem) < 0)
> >> +		goto exit;
> >> +
> >> +	for (i = 0; i < mem->nregions; i++) {
> >> +		reg = &mem->regions[i];
> >> +
> >> +		if (hva >= reg->host_user_addr &&
> >> +				hva < reg->host_user_addr + reg->size) {
> >> +			gpa = hva - reg->host_user_addr + reg->guest_phys_addr;
> >> +			break;
> >> +		}
> >> +	}
> >> +
> >> +exit:
> >> +	if (mem)
> >> +		free(mem);
> >> +	return gpa;
> >> +}
> >> +
> >> +static int
> >> +virtio_vdpa_start(struct virtio_vdpa_device *dev)
> >> +{
> >> +	struct virtio_hw *hw = &dev->hw;
> >> +	int i, vid, nr_vring, ret;
> >> +	struct rte_vhost_vring vr;
> >> +	struct virtio_pmd_ctrl ctrl;
> >> +	int dlen[1];
> >> +
> >> +	vid = dev->vid;
> >> +	nr_vring = rte_vhost_get_vring_num(vid);
> >> +
> >> +	if (dev->vqs)
> >> +		rte_free(dev->vqs);
> >> +
> >> +	dev->vqs = rte_zmalloc("virtio_vdpa", sizeof(*dev->vqs) * nr_vring, 0);
> >> +
> >> +	for (i = 0; i < nr_vring; i++) {
> >> +		struct virtqueue *vq = &dev->vqs[i];
> >> +
> >> +		rte_vhost_get_vhost_vring(vid, i, &vr);
> >> +
> >> +		vq->vq_queue_index = i;
> >> +		vq->vq_nentries = vr.size;
> >> +		vq->vq_ring_mem = hva_to_gpa(vid, (uint64_t)(uintptr_t)vr.desc);
> >> +		if (vq->vq_ring_mem  == 0) {
> >> +			DRV_LOG(ERR, "Fail to get GPA for descriptor ring.");
> >> +			ret = -1;
> >> +			goto out_free_vqs;
> >> +		}
> >> +
> >> +		ret = VTPCI_OPS(hw)->setup_queue(hw, vq);
> >> +		if (ret) {
> >> +			DRV_LOG(ERR, "Fail to setup queue.");
> >> +			goto out_free_vqs;
> >> +		}
> >> +	}
> >> +
> >> +	if (dev->cvq) {
> >> +		ret = VTPCI_OPS(hw)->setup_queue(hw, dev->cvq);
> >> +		if (ret) {
> >> +			DRV_LOG(ERR, "Fail to setup ctrl queue.");
> >> +			goto out_free_vqs;
> >> +		}
> >> +	}
> >> +
> >> +	vtpci_set_status(hw, VIRTIO_CONFIG_STATUS_DRIVER_OK);
> >> +
> >> +	if (!dev->cvq)
> >> +		return 0;
> >> +
> >> +	ctrl.hdr.class = VIRTIO_NET_CTRL_MQ;
> >> +	ctrl.hdr.cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET;
> >> +	memcpy(ctrl.data, &dev->max_queue_pairs, sizeof(uint16_t));
> >> +
> >> +	dlen[0] = sizeof(uint16_t);
> >> +
> >> +	ret = virtio_send_command(hw->cvq, &ctrl, dlen, 1);
> >> +	if (ret) {
> >> +		DRV_LOG(ERR, "Multiqueue configured but send command "
> >> +			  "failed, this is too late now...");
> >> +		ret = -EINVAL;
> >> +		goto out_free_vqs;
> >> +	}
> >> +
> >> +	return 0;
> >> +out_free_vqs:
> >> +	rte_free(dev->vqs);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static int
> >> +virtio_vdpa_dev_config(int vid)
> >> +{
> >> +	int did, ret;
> >> +	struct internal_list *list;
> >> +	struct virtio_vdpa_device *dev;
> >> +
> >> +	did = rte_vhost_get_vdpa_device_id(vid);
> >> +	list = find_internal_resource_by_did(did);
> >> +	if (list == NULL) {
> >> +		DRV_LOG(ERR, "Invalid device id: %d", did);
> >> +		return -1;
> >> +	}
> >> +
> >> +	dev = list->dev;
> >> +	dev->vid = vid;
> >> +
> >> +	rte_spinlock_lock(&dev->lock);
> >> +
> >> +	ret = virtio_vdpa_dma_map(dev, 1);
> >> +	if (ret)
> >> +		goto out_unlock;
> >> +
> >> +	ret = virtio_vdpa_start(dev);
> >> +
> >> +	if (rte_vhost_host_notifier_ctrl(vid, true) != 0)
> >> +		DRV_LOG(NOTICE, "vDPA (%d): software relay is used.", did);
> >> +
> >> +out_unlock:
> >> +	rte_spinlock_unlock(&dev->lock);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >>  static struct rte_vdpa_dev_ops virtio_vdpa_ops = {
> >>  	.get_queue_num = virtio_vdpa_get_queue_num,
> >>  	.get_features = virtio_vdpa_get_features,
> >>  	.get_protocol_features = virtio_vdpa_get_protocol_features,
> >> +	.dev_conf = virtio_vdpa_dev_config,
> >>  };
> >>  
> >>  static inline int
> >> -- 
> >> 2.21.0
> >>
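
A minimal sketch of the IOVA-fixup scheme the patch above implements and
the review discusses: guest memory regions are DMA-mapped 1:1
(IOVA == GPA), and the slave-local control ring plus its header memzone
are then mapped at IOVAs placed just past the highest guest region, so
the two ranges cannot collide. It only uses DPDK's public
rte_vfio/rte_vhost APIs; the function name and the ctrl_* parameters are
hypothetical placeholders, not code taken from the patch.

#include <stdint.h>
#include <stdlib.h>
#include <rte_vfio.h>
#include <rte_vhost.h>

/*
 * Hedged sketch, not the patch's code: map guest regions 1:1, then map
 * the control ring and its headers right after the highest guest IOVA.
 */
static int
map_guest_and_ctrl_ring(int vid, int vfio_container_fd,
		uint64_t ctrl_ring_va, uint64_t ctrl_ring_len,
		uint64_t ctrl_hdr_va, uint64_t ctrl_hdr_len)
{
	struct rte_vhost_memory *mem = NULL;
	uint64_t next_iova = 0;
	uint32_t i;
	int ret;

	if (rte_vhost_get_mem_table(vid, &mem) < 0 || mem == NULL)
		return -1;

	/* Map guest regions with IOVA == GPA and track the highest end. */
	for (i = 0; i < mem->nregions; i++) {
		struct rte_vhost_mem_region *reg = &mem->regions[i];

		ret = rte_vfio_container_dma_map(vfio_container_fd,
				reg->host_user_addr, reg->guest_phys_addr,
				reg->size);
		if (ret < 0)
			goto out;

		if (reg->guest_phys_addr + reg->size > next_iova)
			next_iova = reg->guest_phys_addr + reg->size;
	}

	/* Control ring and headers land right after the guest ranges. */
	ret = rte_vfio_container_dma_map(vfio_container_fd,
			ctrl_ring_va, next_iova, ctrl_ring_len);
	if (ret < 0)
		goto out;
	next_iova += ctrl_ring_len;

	ret = rte_vfio_container_dma_map(vfio_container_fd,
			ctrl_hdr_va, next_iova, ctrl_hdr_len);
out:
	free(mem);
	return ret;
}

In the patch itself, the chosen IOVAs are additionally recorded in
dev->cvq (vq_ring_mem and cq.virtio_net_hdr_mem) so that setup_queue()
can later program them into the device.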


Thread overview: 51+ messages
2019-08-29  7:59 [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 01/15] vhost: remove vhost kernel header inclusion Maxime Coquelin
2019-09-02  6:03   ` Tiwei Bie
2019-09-03  7:24     ` Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 02/15] vhost: configure vDPA as soon as the device is ready Maxime Coquelin
2019-09-02  8:34   ` Ye Xiaolong
2019-09-02  9:02     ` Wang, Xiao W
2019-09-03  7:34       ` Maxime Coquelin
2019-09-03 10:58         ` Wang, Xiao W
2019-08-29  7:59 ` [dpdk-dev] [PATCH 03/15] net/virtio: move control path fonctions in virtqueue file Maxime Coquelin
2019-09-02  6:05   ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 04/15] net/virtio: add virtio PCI subsystem device ID declaration Maxime Coquelin
2019-09-02  6:14   ` Tiwei Bie
2019-09-03  7:25     ` Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 05/15] net/virtio: save notify bar ID in virtio HW struct Maxime Coquelin
2019-09-02  6:17   ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 06/15] net/virtio: add skeleton for virtio vDPA driver Maxime Coquelin
2019-09-02  6:27   ` Tiwei Bie
2019-09-03  7:25     ` Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 07/15] net/virtio: add vDPA ops to get number of queue Maxime Coquelin
2019-09-02  6:32   ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 08/15] net/virtio: add virtio vDPA op to get features Maxime Coquelin
2019-09-02  6:43   ` Tiwei Bie
2019-09-03  7:27     ` Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 09/15] net/virtio: add virtio vDPA op to get protocol features Maxime Coquelin
2019-09-02  6:46   ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 10/15] net/virtio: add vDPA op to configure and start the device Maxime Coquelin
2019-09-03  5:30   ` Tiwei Bie
2019-09-03  7:40     ` Maxime Coquelin
2019-09-03  8:49       ` Tiwei Bie [this message]
2019-09-04  4:06         ` Jason Wang
2019-09-04  6:56           ` Maxime Coquelin
2019-09-05  2:57             ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 11/15] net/virtio: add vDPA op to stop and close " Maxime Coquelin
2019-09-02  7:07   ` Tiwei Bie
2019-09-03  7:30     ` Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 12/15] net/virtio: add vDPA op to set features Maxime Coquelin
2019-08-29  7:59 ` [dpdk-dev] [PATCH 13/15] net/virtio: add vDPA ops to get VFIO FDs Maxime Coquelin
2019-09-03  4:47   ` Tiwei Bie
2019-08-29  7:59 ` [dpdk-dev] [PATCH 14/15] net/virtio: add vDPA op to get notification area Maxime Coquelin
2019-09-03  5:02   ` Tiwei Bie
2019-09-03  7:36     ` Maxime Coquelin
2019-09-03  8:40       ` Tiwei Bie
2019-08-29  8:00 ` [dpdk-dev] [PATCH 15/15] doc: add documentation for Virtio vDPA driver Maxime Coquelin
2019-09-09 11:55 ` [dpdk-dev] [PATCH 00/15] Introduce " Shahaf Shuler
2019-09-10  7:46   ` Maxime Coquelin
2019-09-10 13:44     ` Shahaf Shuler
2019-09-10 13:56       ` Maxime Coquelin
2019-09-11  5:15         ` Shahaf Shuler
2019-09-11  7:15           ` Maxime Coquelin
2019-10-24  6:32 ` Maxime Coquelin
