From: John Sucaet
To: "Burakov, Anatoly", "Maxime Coquelin"
Date: Wed, 3 Apr 2019 09:49:20 +0200
Subject: Re: [dpdk-dev] Query: Is binding with vfio-pci supported inside a qemu-kvm guest/VM instance?
List-Id: DPDK patches and discussions <dev@dpdk.org>

Thanks, Anatoly.

Maxime, could you enlighten me a bit?
I basically would like to know whether I should be able to make my 32-bit DPDK application work with virtio-pci-net and vfio-pci (on a 64-bit kernel), or whether I should make the effort to port it to 64-bit (which I would like to avoid for now).

Thank you

John

On 04/02/2019 03:38 PM, Burakov, Anatoly wrote:
> On 02-Apr-19 11:38 AM, John Sucaet wrote:
>> Hi Anatoly,
>>
>> As you said: there's no reason to use igb_uio, ever!
>
> That was partly tongue in cheek, but point taken :)
>
>> I would like to ask whether vfio-pci, with or without vIOMMU,
>> should/could work for virtio-pci net devices in the case of a 32-bit
>> DPDK application, on a 64-bit kernel (4.9) inside a guest VM
>> (qemu-2.10.2-1.fc27)?
>>
>> I tried both a 64-bit and a 32-bit build of the same application,
>> but only in the case of the 64-bit application was the port found by
>> EAL. The 32-bit application gave errors like:
>>
>> EAL: pci_map_resource(): cannot mmap(16, 0xf4a01000, 0x4000, 0x0):
>> Invalid argument (0xffffffff)
>> EAL: Failed to map pci BAR4
>> EAL:   0000:00:02.0 mapping BAR4 failed: Invalid argument
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (0)
>> EAL: Can't write to PCI bar (0) : offset (4)
>> EAL: Can't write to PCI bar (0) : offset (14)
>> EAL: Can't read from PCI bar (0) : offset (14)
>> EAL: Can't read from PCI bar (0) : offset (1a)
>> EAL: Can't read from PCI bar (0) : offset (1c)
>> EAL: Can't write to PCI bar (0) : offset (e)
>> EAL: Can't read from PCI bar (0) : offset (c)
>> virtio_init_queue(): virtqueue size is not powerof 2
>> EAL: Requested device 0000:00:02.0 cannot be used
>>
>> Maybe you have an idea what went wrong here?
>>
>> By preference, I would like to continue using the 32-bit application,
>> which worked fine with the igb_uio driver.
>
> Unfortunately, I am not very familiar with virtio and wouldn't know
> whether it's supposed to work under these conditions. Perhaps Maxime
> would be of more help here (CC'd).
>
>> Thanks
>>
>> John
>>
>> On 03/12/2019 11:57 AM, Burakov, Anatoly wrote:
>>> On 12-Mar-19 10:20 AM, Bruce Richardson wrote:
>>>> On Tue, Mar 12, 2019 at 05:54:39PM +0800, Jason Wang wrote:
>>>>> On 2019/3/12 5:42 PM, Thanneeru Srinivasulu wrote:
>>>>>> Thanks Bruce.
>>>>>>
>>>>>> On Tue, Mar 12, 2019 at 3:08 PM Bruce Richardson wrote:
>>>>>>> On Tue, Mar 12, 2019 at 10:57:55AM +0530, Thanneeru Srinivasulu
>>>>>>> wrote:
>>>>>>>> Hi everyone,
>>>>>>>>
>>>>>>>> I attached a PCIe device to a guest VM using vfio-pci with a
>>>>>>>> QEMU command, and then tried binding the PCIe BDF to vfio-pci,
>>>>>>>> but I am observing a binding failure with vfio-pci.
>>>>>>>>
>>>>>>>> Whereas when I tried with igb_uio, everything works fine.
>>>>>>>>
>>>>>>>> Is binding with vfio-pci supported inside a VM/guest?
>>>>>>>
>>>>>>> vfio support requires the presence of an IOMMU, and you
>>>>>>> generally don't have an IOMMU available in a VM.
>>>>>>>
>>>>>>> /Bruce
>>>>>
>>>>> Actually, QEMU supports vIOMMU + VFIO in the guest [1]; all you
>>>>> need is to add an Intel IOMMU and enable caching mode.
>>>>>
>>>>> Thanks
>>>>>
>>>>> [1]
>>>>> https://www.lfasiallc.com/wp-content/uploads/2017/11/Device-Assignment-with-Nested-Guests-and-DPDK_Peter-Xu.pdf
>>>>
>>>> Thanks for the info.
>>>>
>>>> /Bruce
>>>
>>> One more thing: even without vIOMMU, VFIO has a no-IOMMU mode which
>>> can be enabled (on a recent-enough kernel). This will make VFIO
>>> work even in cases where the guest doesn't have IOMMU emulation.
>>> See? There's no reason to use igb_uio, ever! :D
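For completeness, the two options discussed in the thread could be sketched roughly as follows. All device addresses, paths, and flags here are examples for illustration, not tested commands for any particular setup:

```shell
# Option 1: give the guest an emulated IOMMU (QEMU, per Peter Xu's
# slides linked above); requires the q35 machine type:
#
#   qemu-system-x86_64 -M q35,kernel-irqchip=split \
#       -device intel-iommu,intremap=on,caching-mode=on ...

# Option 2: no vIOMMU -- enable VFIO's no-IOMMU mode inside the guest
# (needs a kernel built with CONFIG_VFIO_NOIOMMU, roughly 4.5+):
modprobe vfio enable_unsafe_noiommu_mode=1
modprobe vfio-pci

# Then bind the device (PCI address taken from the logs above,
# as an example):
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:02.0
```

Note that no-IOMMU mode, as the module parameter name suggests, forgoes DMA isolation; it is in that sense no safer than igb_uio, just better integrated with the VFIO API.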