From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: "Hu, Xuekun" <xuekun.hu@intel.com>, Ravi Kerur <rkerur@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
"Lu, Wenzhuo" <wenzhuo.lu@intel.com>
Subject: Re: [dpdk-dev] IXGBE, IOMMU DMAR DRHD handling fault issue
Date: Tue, 15 Jan 2019 11:22:18 +0000 [thread overview]
Message-ID: <d93cb032-afb7-fc28-9ad7-b372fcf566cf@intel.com> (raw)
In-Reply-To: <88A92D351643BA4CB23E303155170626635418D0@SHSMSX103.ccr.corp.intel.com>
On 15-Jan-19 7:07 AM, Hu, Xuekun wrote:
> Hi, Ravi
>
> Did you resolve this issue with the VF used in a guest with vIOMMU enabled? I googled, but still couldn't find an answer as to whether it is a driver issue or a QEMU VT-d emulation issue.
>
> Currently I met the same issue again that host reported DMAR error:
> [59939.130110] DMAR: DRHD: handling fault status reg 2
> [59939.130116] DMAR: [DMA Read] Request device [83:10.0] fault addr 15f03d000 [fault reason 06] PTE Read access is not set
> [59940.180859] ixgbe 0000:83:00.0: Issuing VFLR with pending transactions
> [59940.180863] ixgbe 0000:83:00.0: Issuing VFLR for VF 0000:83:10.0
> [59989.344683] ixgbe 0000:83:00.0 ens817f0: VF Reset msg received from vf 0
>
> I'm using DPDK 18.11 in guest, and ixgbe.ko in host.
>
> Thx, Xuekun
>
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Burakov, Anatoly
> Sent: Friday, February 16, 2018 5:42 PM
> To: Ravi Kerur <rkerur@gmail.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] IXGBE, IOMMU DMAR DRHD handling fault issue
>
> On 15-Feb-18 8:53 PM, Ravi Kerur wrote:
>>
>>
>> On Thu, Feb 15, 2018 at 10:27 AM, Ravi Kerur <rkerur@gmail.com
>> <mailto:rkerur@gmail.com>> wrote:
>>
>>
>>
>> On Thu, Feb 15, 2018 at 2:28 AM, Burakov, Anatoly
>> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
>>
>> On 14-Feb-18 8:00 PM, Ravi Kerur wrote:
>>
>>
>>         Earlier I was focusing only on DMAR errors, and I might have
>>         said 'it worked' when I didn't notice them on the host after
>>         DPDK was started in the guest. When trying to send packets out
>>         of that interface from the guest, I did see DMAR errors. I am
>>         attaching the information you requested. I have enabled
>>         log-level=8, and the files contain the DPDK EAL/PMD logs as well.
>>
>>
>> Great, now we're on the same page.
>>
>>
>> Snippets below
>>
>> on host, DMAR fault address from dmesg
>>
>> [351576.998109] DMAR: DRHD: handling fault status reg 702
>> [351576.998113] DMAR: [DMA Read] Request device [04:10.0]
>> fault addr 257617000 [fault reason 06] PTE Read access is
>> not set
>>
>> on guest (dump phys_mem_layout)
>>
>> Segment 235: phys:0x257600000, len:2097152,
>> virt:0x7fce87e00000, socket_id:0, hugepage_sz:2097152,
>> nchannel:0, nrank:0
>> ...
>> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fce87e0f4c0
>> sw_sc_ring=0x7fce87e07380 hw_ring=0x7fce87e17600
>> dma_addr=0x257617600
>> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fce89c67d40
>> sw_sc_ring=0x7fce89c5fc00 hw_ring=0x7fce89c6fe80
>> dma_addr=0x25406fe80
>> ...
>>
>>
>>     To me this looks like the host (i.e. either QEMU or the PF driver)
>>     is trying to do DMA using guest-physical (and not host-physical)
>>     addresses. I'm not too well-versed in how QEMU works, but
>>     I'm pretty sure that's not supposed to happen.
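[Editorial note: the numbers in the snippets above can be checked directly. The host-side fault address falls inside the guest's Segment 235, which is what supports the "guest-physical address leaked to the host IOMMU" reading. A minimal sketch with the values copied from the logs above:]

```shell
#!/usr/bin/env bash
# Values taken from the dmesg fault and the guest's phys_mem_layout above.
fault=0x257617000        # DMAR fault addr reported on the host
seg_base=0x257600000     # Segment 235 phys base inside the guest
seg_len=2097152          # Segment 235 length (one 2 MB hugepage)

# If the fault address lies inside the guest segment, the device was
# given a guest-physical address that the host IOMMU has no mapping for.
if (( fault >= seg_base && fault < seg_base + seg_len )); then
    echo "fault addr lies inside guest segment 235"
fi
```

Note that the fault address 0x257617000 is the page-aligned form of the guest's hw_ring dma_addr 0x257617600, which ties the fault to that RX queue's descriptor ring.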
>>
>> Is PF also bound to DPDK, or are you using native Linux ixgbe
>> driver?
>>
>>
>>     Thanks for your help. I cannot use the PF with DPDK (vfio-pci); the
>>     VF interfaces disappear after the PF is bound to DPDK. If there is a
>>     way to use the PF and VF together with DPDK, let me know and I can
>>     try it out. I am not sure how to move forward on this. Is the CPU or
>>     the ixgbe PF driver playing a role? The versions I have are as
>>     follows:
>>
>> lscpu
>> Architecture: x86_64
>> CPU op-mode(s): 32-bit, 64-bit
>> Byte Order: Little Endian
>> CPU(s): 56
>> On-line CPU(s) list: 0-27
>> Off-line CPU(s) list: 28-55
>> Thread(s) per core: 1
>> Core(s) per socket: 14
>> Socket(s): 2
>> NUMA node(s): 2
>> Vendor ID: GenuineIntel
>> CPU family: 6
>> Model: 63
>> Model name: Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
>> Stepping: 2
>> CPU MHz: 2500.610
>> CPU max MHz: 3000.0000
>> CPU min MHz: 1200.0000
>> BogoMIPS: 4000.74
>> Virtualization: VT-x
>> L1d cache: 32K
>> L1i cache: 32K
>> L2 cache: 256K
>> L3 cache: 35840K
>> NUMA node0 CPU(s): 0-13
>> NUMA node1 CPU(s): 14-27
>>
>> # ethtool -i enp4s0f0
>> driver: ixgbe
>> version: 5.3.3
>> firmware-version: 0x800007b8, 1.1018.0
>> bus-info: 0000:04:00.0
>> supports-statistics: yes
>> supports-test: yes
>> supports-eeprom-access: yes
>> supports-register-dump: yes
>> supports-priv-flags: yes
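[Editorial note: the usual setup for this thread's scenario keeps the PF on the kernel ixgbe driver and hands only the VF to DPDK. A rough sketch follows, using the interface name and PCI addresses from this thread; the sysfs path and dpdk-devbind.py location are the standard ones, so treat this as a sketch rather than a verified recipe for this exact system:]

```shell
# Sketch: PF stays bound to the kernel ixgbe driver; only the VF goes
# to DPDK. PCI addresses are the ones appearing in this thread.

# Create one VF on the PF (the PF itself remains on ixgbe):
echo 1 > /sys/class/net/enp4s0f0/device/sriov_numvfs

# Load vfio-pci and bind only the VF to it:
modprobe vfio-pci
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:04:10.0

# Verify: the PF should still show under ixgbe, the VF under vfio-pci.
./usertools/dpdk-devbind.py --status
```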
>>
>> Thanks,
>> Ravi
>>
>>
>>
>> Debugging this, I could correlate bringing up the link on the DPDK
>> interface inside the guest with the DMAR errors on the host and an
>> additional VFLR message:
>>
>> [ 8135.861622] DMAR: DRHD: handling fault status reg 402
>> [ 8135.861627] DMAR: [DMA Read] Request device [04:10.0] fault addr 1b648a000 [fault reason 06] PTE Read access is not set
>> [ 8136.588074] ixgbe 0000:04:00.0: Issuing VFLR with pending transactions
>> [ 8136.588079] ixgbe 0000:04:00.0: Issuing VFLR for VF 0000:04:10.0
>>
>> Looking at the ixgbe driver code, 'ixgbe_issue_vf_flr' is called from
>> the 'ixgbe_check_for_bad_vf' and 'ixgbe_io_error_detected' functions. Is
>> it possible that the DPDK VF PMD is missing some fixes/porting from the
>> ixgbevf kernel driver, since this issue is not seen when the ixgbevf
>> kernel driver is used?
>>
>
> Could very well be. +CC the ixgbe maintainers, who might be of further help in debugging this issue.
>
> --
> Thanks,
> Anatoly
>
You might also want to look here: https://bugs.dpdk.org/show_bug.cgi?id=76
There are apparently issues with some kernel versions that will manifest
themselves as problems with using VF devices with IOMMU in a VM.
--
Thanks,
Anatoly