From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Vijay Srivastava, dev@dpdk.org
Cc: chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Subject: Re: [dpdk-dev] [PATCH 06/10] vdpa/sfc: add support for dev conf and dev close ops
Date: Mon, 30 Aug 2021 13:35:40 +0200
In-Reply-To: <20210706164418.32615-7-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20210706164418.32615-7-vsrivast@xilinx.com>
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

On 7/6/21 6:44 PM, Vijay Srivastava wrote:
> From: Vijay Kumar Srivastava
> 
> Implement vDPA ops dev_conf and dev_close for DMA mapping,
> interrupt and virtqueue configurations.
> 
> Signed-off-by: Vijay Kumar Srivastava
> ---
>  drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
>  drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
>  drivers/vdpa/sfc/sfc_vdpa_hw.c  |  70 ++++++
>  drivers/vdpa/sfc/sfc_vdpa_ops.c | 527 ++++++++++++++++++++++++++++++++++++++--
>  drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
>  5 files changed, 654 insertions(+), 20 deletions(-)
> ...
> diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
> index 84e680f..047bcc4 100644
> --- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
> +++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
> @@ -8,6 +8,7 @@
>  #include
>  #include
>  #include
> +#include
>  
>  #include "efx.h"
>  #include "sfc_vdpa.h"
> @@ -104,6 +105,75 @@
>  	memset(esmp, 0, sizeof(*esmp));
>  }
>  
> +int
> +sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
> +{
> +	uint32_t i, j;
> +	int rc;
> +	struct rte_vhost_memory *vhost_mem = NULL;
> +	struct rte_vhost_mem_region *mem_reg = NULL;
> +	int vfio_container_fd;
> +	void *dev;
> +
> +	dev = ops_data->dev_handle;
> +	vfio_container_fd =
> +		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
> +
> +	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
> +	if (rc < 0) {
> +		sfc_vdpa_err(dev,
> +			     "failed to get VM memory layout");
> +		goto error;
> +	}
> +
> +	for (i = 0; i < vhost_mem->nregions; i++) {
> +		mem_reg = &vhost_mem->regions[i];
> +
> +		if (do_map) {
> +			rc = rte_vfio_container_dma_map(vfio_container_fd,
> +						mem_reg->host_user_addr,
> +						mem_reg->guest_phys_addr,
> +						mem_reg->size);
> +			if (rc < 0) {
> +				sfc_vdpa_err(dev,
> +					     "DMA map failed : %s",
> +					     rte_strerror(rte_errno));
> +				goto failed_vfio_dma_map;
> +			}
> +		} else {
> +			rc = rte_vfio_container_dma_unmap(vfio_container_fd,
> +						mem_reg->host_user_addr,
> +						mem_reg->guest_phys_addr,
> +						mem_reg->size);
> +			if (rc < 0) {
> +				sfc_vdpa_err(dev,
> +					     "DMA unmap failed : %s",
> +					     rte_strerror(rte_errno));
> +				goto error;
> +			}
> +		}
> +	}
> +
> +	free(vhost_mem);
> +
> +	return 0;
> +
> +failed_vfio_dma_map:
> +	for (j = 0; j < i; j++) {
> +		mem_reg = &vhost_mem->regions[j];
> +		rc = rte_vfio_container_dma_unmap(vfio_container_fd,
> +					mem_reg->host_user_addr,
> +					mem_reg->guest_phys_addr,
> +					mem_reg->size);
> +	}
> +
> +error:
> +	if (vhost_mem)

The NULL check is not necessary, free() takes care of doing it.
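To illustrate (a minimal standalone sketch, not driver code — `cleanup()` just models the error path above): the C standard guarantees that `free(NULL)` is a no-op, so the `if (vhost_mem)` guard adds nothing:

```c
#include <stdlib.h>

/* The C standard (7.22.3.3) guarantees that free(NULL) is a no-op,
 * so the error path can shrink to "free(vhost_mem); return rc;"
 * with no preceding NULL check. cleanup() models that error path. */
static int cleanup(void *vhost_mem, int rc)
{
	free(vhost_mem);	/* safe whether NULL or allocated */
	return rc;
}
```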
> +		free(vhost_mem);
> +
> +	return rc;
> +}
> +
>  static int
>  sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
>  		      const efx_bar_region_t *mem_ebrp)
> diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
...
> 
> +static uint64_t
> +hva_to_gpa(int vid, uint64_t hva)
> +{
> +	struct rte_vhost_memory *vhost_mem = NULL;
> +	struct rte_vhost_mem_region *mem_reg = NULL;
> +	uint32_t i;
> +	uint64_t gpa = 0;
> +
> +	if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0)
> +		goto error;
> +
> +	for (i = 0; i < vhost_mem->nregions; i++) {
> +		mem_reg = &vhost_mem->regions[i];
> +
> +		if (hva >= mem_reg->host_user_addr &&
> +		    hva < mem_reg->host_user_addr + mem_reg->size) {
> +			gpa = (hva - mem_reg->host_user_addr) +
> +			      mem_reg->guest_phys_addr;
> +			break;
> +		}
> +	}
> +
> +error:
> +	if (vhost_mem)

Ditto.

> +		free(vhost_mem);
> +	return gpa;
> +}
> +
> +static int
> +sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
> +{
> +	int rc;
> +	int *irq_fd_ptr;
> +	int vfio_dev_fd;
> +	uint32_t i, num_vring;
> +	struct rte_vhost_vring vring;
> +	struct vfio_irq_set *irq_set;
> +	struct rte_pci_device *pci_dev;
> +	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
> +	void *dev;
> +
> +	num_vring = rte_vhost_get_vring_num(ops_data->vid);
> +	dev = ops_data->dev_handle;
> +	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
> +	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev)->pdev;
> +
> +	irq_set = (struct vfio_irq_set *)irq_set_buf;
> +	irq_set->argsz = sizeof(irq_set_buf);
> +	irq_set->count = num_vring + 1;
> +	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
> +			 VFIO_IRQ_SET_ACTION_TRIGGER;
> +	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
> +	irq_set->start = 0;
> +	irq_fd_ptr = (int *)&irq_set->data;
> +	irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = pci_dev->intr_handle.fd;
> +
> +	for (i = 0; i < num_vring; i++) {
> +		rte_vhost_get_vhost_vring(ops_data->vid, i, &vring);

This function may fail (even if it is unlikely to happen), so it would
be better to check its return value here rather than risk using an
uninitialized callfd below.

> +		irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> +	}
> +
> +	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
> +	if (rc) {
> +		sfc_vdpa_err(ops_data->dev_handle,
> +			     "error enabling MSI-X interrupts: %s",
> +			     strerror(errno));
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +
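Something along these lines would do (a standalone sketch of the suggested shape, not a drop-in patch — `get_vring()` is a hypothetical stand-in for rte_vhost_get_vhost_vring(), which returns a negative value on failure, so the flow can be exercised without a vhost backend):

```c
/* Sketch: check the return value before consuming vring.callfd, so an
 * uninitialized callfd can never reach VFIO_DEVICE_SET_IRQS. */

struct vring_stub {
	int callfd;
};

/* stand-in for rte_vhost_get_vhost_vring(): "fails" for
 * ring indices >= num_valid, like the real call would on error */
static int get_vring(int idx, int num_valid, struct vring_stub *v)
{
	if (idx >= num_valid)
		return -1;
	v->callfd = 100 + idx;	/* pretend eventfd */
	return 0;
}

/* fill irq_fd_ptr[0..num_vring-1]; bail out on the first failure
 * instead of copying an uninitialized callfd */
static int fill_irq_fds(int num_vring, int num_valid, int *irq_fd_ptr)
{
	struct vring_stub vring;
	int i;

	for (i = 0; i < num_vring; i++) {
		if (get_vring(i, num_valid, &vring) < 0)
			return -1;
		irq_fd_ptr[i] = vring.callfd;
	}
	return 0;
}
```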