From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by dpdk.space (Postfix) with ESMTP id AED8FA0679
	for <public@inbox.dpdk.org>; Thu,  4 Apr 2019 10:23:58 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 6D6DB5920;
	Thu,  4 Apr 2019 10:23:57 +0200 (CEST)
Received: from mail.oneaccess-net.com (mail2.belgium.oneaccess-net.com
 [91.183.184.101]) by dpdk.org (Postfix) with ESMTP id 549375911
 for <dev@dpdk.org>; Thu,  4 Apr 2019 10:23:56 +0200 (CEST)
Received: from [10.0.21.137] (10.0.21.137) by mail.oneaccess-net.com
 (10.0.24.95) with Microsoft SMTP Server (TLS) id 14.3.435.0; Thu, 4 Apr 2019
 10:23:56 +0200
To: Maxime Coquelin <maxime.coquelin@redhat.com>, "Burakov, Anatoly"
 <anatoly.burakov@intel.com>, <dev@dpdk.org>
References: <CAMtOeK+i0XhNLci79FH-fTxHSGejj8+upMZ4Wvhpt_uxEu5WNw@mail.gmail.com>
 <20190312093826.GA914268@bricha3-MOBL.ger.corp.intel.com>
 <CAMtOeK+x5mTM8xf_JnXY9ZtZjqkovQ+DPNoWf_K0hKyZnO0wWQ@mail.gmail.com>
 <fd1e74ae-4bae-c8ff-36a1-730cc61ccb48@redhat.com>
 <20190312102009.GA932176@bricha3-MOBL.ger.corp.intel.com>
 <38327cd0-7fa2-9172-5343-0cded8e51594@intel.com>
 <38a5d7ab-7449-fabb-46f7-6acf9d99680f@oneaccess-net.com>
 <b0f9eb7f-f90d-9794-c733-3d479db069e3@intel.com>
 <0a4f8c32-f580-ca9d-20fb-ee2d3e509c37@oneaccess-net.com>
 <5453b9f7-b4b8-5831-50b3-abd52d796a24@redhat.com>
From: John Sucaet <john.sucaet@oneaccess-net.com>
Message-ID: <0f34022d-10cf-afa4-12a7-9da0c4e162d2@oneaccess-net.com>
Date: Thu, 4 Apr 2019 10:23:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <5453b9f7-b4b8-5831-50b3-abd52d796a24@redhat.com>
Content-Type: text/plain; charset="UTF-8"; format="flowed"
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Subject: Re: [dpdk-dev] Query : Does Binding with vfio-pci is supported
 inside qemu-kvm guest/vm instance.?
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Hi Maxime,

Thanks for your answer. I want to add that I see this problem with both 
18.08 and 18.11. I am also using qemu with machine type pc-q35-2.10 (and 
pcie-root). When I change it to machine type pc-i440fx-2.4 (and 
pci-root), I don't see the problem. In that case, the ports are detected 
in the 32-bit dpdk application, as expected.
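
To be concrete, the relevant part of my qemu invocation looks roughly 
like this (only the machine type and the virtio net device are real; 
memory, disks and the netdev details are placeholders, so please read 
it as a sketch rather than my exact command line):

# fails: the 32-bit application cannot mmap BAR4
qemu-system-x86_64 -machine pc-q35-2.10,accel=kvm \
    -device virtio-net-pci,netdev=net0 -netdev tap,id=net0 ...

# works: ports are detected in the 32-bit application
qemu-system-x86_64 -machine pc-i440fx-2.4,accel=kvm \
    -device virtio-net-pci,netdev=net0 -netdev tap,id=net0 ...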

I hope this helps in finding the reason why the mmap failed.

Thanks

John

On 04/03/2019 09:54 AM, Maxime Coquelin wrote:
> Hi John,
>
> On 4/3/19 9:49 AM, John Sucaet wrote:
>> Thanks, Anatoly.
>>
>> Maxime, could you enlighten me a bit? I basically would like to know 
>> whether I should be able to make my 32-bit dpdk application work with 
>> virtio-pci-net and vfio-pci (with a 64-bit kernel), or if I should 
>> make the effort to port to 64-bit (which I would like to avoid for now).
>
> I think that it should work, but that's not something I have tried.
> I will try to reproduce it this week to get a precise idea of what is
> going wrong.
>
> Thanks for reporting the issue,
> Maxime
>
>> Thank you
>> John
>>
>> On 04/02/2019 03:38 PM, Burakov, Anatoly wrote:
>>> On 02-Apr-19 11:38 AM, John Sucaet wrote:
>>>> Hi Anatoly,
>>>>
>>>> As you said: There's no reason to use igb_uio, ever!
>>>
>>> That was partly tongue in cheek, but point taken :)
>>>
>>>> I would like to ask whether vfio-pci with or without vIOMMU 
>>>> should/could work for virtio-pci net devices in the case of a 
>>>> 32-bit dpdk application, on a 64-bit kernel (4.9) inside a guest VM 
>>>> (qemu-2.10.2-1.fc27)?
>>>>
>>>> I tried both a 64-bit and a 32-bit version of the same application, 
>>>> but the port was only found by eal in the case of the 64-bit 
>>>> application. The 32-bit application gave errors like:
>>>>
>>>> EAL: pci_map_resource(): cannot mmap(16, 0xf4a01000, 0x4000, 0x0): 
>>>> Invalid argument (0xffffffff)
>>>> EAL: Failed to map pci BAR4
>>>> EAL:   0000:00:02.0 mapping BAR4 failed: Invalid argument
>>>> EAL: Can't write to PCI bar (0) : offset (12)
>>>> EAL: Can't read from PCI bar (0) : offset (12)
>>>> EAL: Can't read from PCI bar (0) : offset (12)
>>>> EAL: Can't write to PCI bar (0) : offset (12)
>>>> EAL: Can't read from PCI bar (0) : offset (12)
>>>> EAL: Can't write to PCI bar (0) : offset (12)
>>>> EAL: Can't read from PCI bar (0) : offset (0)
>>>> EAL: Can't write to PCI bar (0) : offset (4)
>>>> EAL: Can't write to PCI bar (0) : offset (14)
>>>> EAL: Can't read from PCI bar (0) : offset (14)
>>>> EAL: Can't read from PCI bar (0) : offset (1a)
>>>> EAL: Can't read from PCI bar (0) : offset (1c)
>>>> EAL: Can't write to PCI bar (0) : offset (e)
>>>> EAL: Can't read from PCI bar (0) : offset (c)
>>>> virtio_init_queue(): virtqueue size is not powerof 2
>>>> EAL: Requested device 0000:00:02.0 cannot be used
>>>>
>>>> Maybe you have an idea what went wrong here?
>>>>
>>>> Preferably, I would like to continue using the 32-bit 
>>>> application, which worked fine with the igb_uio driver.
>>>
>>> Unfortunately, I am not very familiar with virtio and wouldn't know 
>>> whether it's supposed to work under these conditions. Perhaps Maxime 
>>> would be of more help here (CC'd).
>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> John
>>>>
>>>>
>>>> On 03/12/2019 11:57 AM, Burakov, Anatoly wrote:
>>>>> On 12-Mar-19 10:20 AM, Bruce Richardson wrote:
>>>>>> On Tue, Mar 12, 2019 at 05:54:39PM +0800, Jason Wang wrote:
>>>>>>>
>>>>>>> On 2019/3/12 5:42 PM, Thanneeru Srinivasulu wrote:
>>>>>>>> Thanks Bruce..
>>>>>>>>
>>>>>>>> On Tue, Mar 12, 2019 at 3:08 PM Bruce Richardson
>>>>>>>> <bruce.richardson@intel.com> wrote:
>>>>>>>>> On Tue, Mar 12, 2019 at 10:57:55AM +0530, Thanneeru 
>>>>>>>>> Srinivasulu wrote:
>>>>>>>>>> Hi Everyone.
>>>>>>>>>>
>>>>>>>>>> I attached a PCIe device to the guest VM using vfio-pci on 
>>>>>>>>>> the qemu command line, and then tried binding the PCIe BDF to 
>>>>>>>>>> vfio-pci, but observed a binding failure with vfio-pci.
>>>>>>>>>>
>>>>>>>>>> Whereas when I tried igb_uio, everything works fine.
>>>>>>>>>>
>>>>>>>>>> Is binding with vfio-pci supported inside a VM/guest?
>>>>>>>>>>
>>>>>>>>> vfio support requires the presence of an IOMMU, and you 
>>>>>>>>> generally don't
>>>>>>>>> have an IOMMU available in a VM.
>>>>>>>>>
>>>>>>>>> /Bruce
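
(A quick way to check whether the guest actually sees an IOMMU; these 
are the standard sysfs/dmesg locations, nothing DPDK-specific:)

# empty output means no (v)IOMMU is visible to the guest
ls /sys/kernel/iommu_groups/
# with an emulated intel-iommu the DMAR tables also show up in dmesg
dmesg | grep -i -e DMAR -e IOMMU
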
>>>>>>>
>>>>>>>
>>>>>>> Actually, Qemu supports vIOMMU + VFIO in the guest [1]; all you 
>>>>>>> need is to add an intel IOMMU and enable caching mode.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>> [1]
>>>>>>>
>>>>>>> https://www.lfasiallc.com/wp-content/uploads/2017/11/Device-Assignment-with-Nested-Guests-and-DPDK_Peter-Xu.pdf 
>>>>>>>
>>>>>>>
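(For completeness, a minimal sketch of what that looks like on the qemu 
command line; the q35 machine and split irqchip are needed when 
interrupt remapping is enabled, and the guest kernel still needs 
intel_iommu=on on its own command line:)

qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,intremap=on,caching-mode=on \
    ...   # remaining devices/disks as usual
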
>>>>>> Thanks for the info.
>>>>>>
>>>>>> /Bruce
>>>>>>
>>>>>
>>>>> One more thing: even without vIOMMU, VFIO has a no-IOMMU mode 
>>>>> which can be enabled (for a recent-enough kernel). This will make 
>>>>> VFIO work even in cases where the guest doesn't have IOMMU 
>>>>> emulation. See? There's no reason to use igb_uio, ever! :D
>>>>>
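(In case it is useful here: enabling no-IOMMU mode in the guest 
typically looks like the sketch below. It requires a kernel built with 
CONFIG_VFIO_NOIOMMU, and the PCI address is just the one from my logs:)

# allow vfio-pci without an IOMMU (taints the kernel)
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
# then bind the virtio device to vfio-pci
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:02.0
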
>>>>
>>>>
>>>
>>>
>>
> .
>