From: John Sucaet <john.sucaet@oneaccess-net.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>, <dev@dpdk.org>,
"Maxime Coquelin" <maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] Query: Is binding with vfio-pci supported inside a qemu-kvm guest/VM instance?
Date: Wed, 3 Apr 2019 09:49:20 +0200
Message-ID: <0a4f8c32-f580-ca9d-20fb-ee2d3e509c37@oneaccess-net.com>
In-Reply-To: <b0f9eb7f-f90d-9794-c733-3d479db069e3@intel.com>
Thanks, Anatoly.
Maxime, could you enlighten me a bit? I basically would like to know
whether I should be able to make my 32-bit DPDK application work with
virtio-pci net devices and vfio-pci (on a 64-bit kernel), or if I should
make the effort to port to 64-bit (which I would like to avoid for now).
Thank you
John
On 04/02/2019 03:38 PM, Burakov, Anatoly wrote:
> On 02-Apr-19 11:38 AM, John Sucaet wrote:
>> Hi Anatoly,
>>
>> As you said: There's no reason to use igb_uio, ever!
>
> That was partly tongue in cheek, but point taken :)
>
>> I would like to ask whether vfio-pci, with or without vIOMMU,
>> should/could work for virtio-pci net devices in the case of a 32-bit
>> DPDK application, on a 64-bit kernel (4.9) inside a guest VM
>> (qemu-2.10.2-1.fc27)?
>>
>> I tried both a 64-bit and a 32-bit version of the same application,
>> but only with the 64-bit application was the port found by EAL. The
>> 32-bit application gave errors like:
>>
>> EAL: pci_map_resource(): cannot mmap(16, 0xf4a01000, 0x4000, 0x0):
>> Invalid argument (0xffffffff)
>> EAL: Failed to map pci BAR4
>> EAL: 0000:00:02.0 mapping BAR4 failed: Invalid argument
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (12)
>> EAL: Can't write to PCI bar (0) : offset (12)
>> EAL: Can't read from PCI bar (0) : offset (0)
>> EAL: Can't write to PCI bar (0) : offset (4)
>> EAL: Can't write to PCI bar (0) : offset (14)
>> EAL: Can't read from PCI bar (0) : offset (14)
>> EAL: Can't read from PCI bar (0) : offset (1a)
>> EAL: Can't read from PCI bar (0) : offset (1c)
>> EAL: Can't write to PCI bar (0) : offset (e)
>> EAL: Can't read from PCI bar (0) : offset (c)
>> virtio_init_queue(): virtqueue size is not powerof 2
>> EAL: Requested device 0000:00:02.0 cannot be used
>>
>> Maybe you have an idea what went wrong here?
>>
>> If possible, I would like to continue using the 32-bit application,
>> which worked fine with the igb_uio driver.
>
> Unfortunately, I am not very familiar with virtio and wouldn't know
> whether it's supposed to work under these conditions. Perhaps Maxime
> would be of more help here (CC'd).
>
>>
>>
>> Thanks
>>
>> John
>>
>>
>> On 03/12/2019 11:57 AM, Burakov, Anatoly wrote:
>>> On 12-Mar-19 10:20 AM, Bruce Richardson wrote:
>>>> On Tue, Mar 12, 2019 at 05:54:39PM +0800, Jason Wang wrote:
>>>>>
>>>>> On 2019/3/12 5:42 PM, Thanneeru Srinivasulu wrote:
>>>>>> Thanks, Bruce.
>>>>>>
>>>>>> On Tue, Mar 12, 2019 at 3:08 PM Bruce Richardson
>>>>>> <bruce.richardson@intel.com> wrote:
>>>>>>> On Tue, Mar 12, 2019 at 10:57:55AM +0530, Thanneeru Srinivasulu
>>>>>>> wrote:
>>>>>>>> Hi everyone,
>>>>>>>>
>>>>>>>> I attached a PCIe device to the guest VM using vfio-pci via the
>>>>>>>> qemu command line, and then tried binding the PCIe BDF to
>>>>>>>> vfio-pci inside the guest, but the binding fails.
>>>>>>>>
>>>>>>>> Whereas when I tried igb_uio, everything works fine.
>>>>>>>>
>>>>>>>> Is binding with vfio-pci supported inside a VM/guest?
>>>>>>>>
>>>>>>> vfio support requires the presence of an IOMMU, and you
>>>>>>> generally don't
>>>>>>> have an IOMMU available in a VM.
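>>>>>>>
>>>>>>> A quick way to verify this from inside the guest (a hedged hint,
>>>>>>> not DPDK-specific): if the kernel sees an IOMMU, it populates
>>>>>>> /sys/kernel/iommu_groups/.
>>>>>>>
>>>>>>>   ls /sys/kernel/iommu_groups/
>>>>>>>   # no output here means the guest has no usable IOMMU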
>>>>>>>
>>>>>>> /Bruce
>>>>>
>>>>>
>>>>> Actually, QEMU supports vIOMMU + VFIO in the guest [1]; all you
>>>>> need is to add an Intel IOMMU device and enable caching mode.
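>>>>>
>>>>> A rough sketch of the relevant flags (option names may vary across
>>>>> QEMU versions; the rest of the VM configuration is elided):
>>>>>
>>>>>   qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
>>>>>     -device intel-iommu,intremap=on,caching-mode=on \
>>>>>     ...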
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>> [1] https://www.lfasiallc.com/wp-content/uploads/2017/11/Device-Assignment-with-Nested-Guests-and-DPDK_Peter-Xu.pdf
>>>>>
>>>>>
>>>> Thanks for the info.
>>>>
>>>> /Bruce
>>>>
>>>
>>> One more thing: even without vIOMMU, VFIO has a no-IOMMU mode which
>>> can be enabled (on a recent-enough kernel). This makes VFIO work even
>>> in cases where the guest doesn't have IOMMU emulation. See? There's
>>> no reason to use igb_uio, ever! :D
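>>>
>>> As an untested sketch (the parameter only exists on kernels built
>>> with CONFIG_VFIO_NOIOMMU):
>>>
>>>   # enable VFIO's no-IOMMU mode before binding
>>>   modprobe vfio enable_unsafe_noiommu_mode=1
>>>   # or, if the vfio module is already loaded:
>>>   echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
>>>   # then bind the device, e.g. with DPDK's devbind script:
>>>   ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:02.0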
>>>