DPDK usage discussions
* [dpdk-users] Correct setup of sfc
@ 2018-06-13  9:46 Filip Janiszewski
  2018-06-13 17:46 ` Andrew Rybchenko
  0 siblings, 1 reply; 7+ messages in thread
From: Filip Janiszewski @ 2018-06-13  9:46 UTC (permalink / raw)
  To: users

Hi,

I'm trying to test an SF card (Flareon Ultra SFN7142Q Dual-Port 40GbE) in
our testing box; the details of the device are:

.
Solarstorm firmware update utility [v7.1.1]
Copyright Solarflare Communications 2006-2018, Level 5 Networks 2002-2005

enp101s0f0 - MAC: 00-0F-53-2C-3A-10
    Firmware version:   v7.1.1
    Controller type:    Solarflare SFC9100 family
    Controller version: v6.2.7.1000
    Boot ROM version:   v5.1.0.1005

The Boot ROM firmware is up to date
The controller firmware is up to date

enp101s0f1 - MAC: 00-0F-53-2C-3A-11
    Firmware version:   v7.1.1
    Controller type:    Solarflare SFC9100 family
    Controller version: v6.2.7.1000
    Boot ROM version:   v5.1.0.1005

The Boot ROM firmware is up to date
The controller firmware is up to date
.

The DPDK lib (version 18.05) has been built with
CONFIG_RTE_LIBRTE_SFC_EFX_PMD=y as per the instructions, and the sfc
driver seems to be loaded; the output of 'lsmod | grep sfc' is:

.
sfc                   470393  0
.
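
(For reference, the build was the standard make-based flow for 18.05,
roughly as follows; the sed line assumes the option was still set to 'n'
in config/common_base and can be skipped if it is already 'y':)

.
sed -i 's/CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n/CONFIG_RTE_LIBRTE_SFC_EFX_PMD=y/' \
    config/common_base
make config T=x86_64-native-linuxapp-gcc
make -j
.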

In the kernel logs I see that the card's links are ready:

.
[  178.496825] sfc 0000:65:00.0 enp101s0f0: link up at 40000Mbps
full-duplex (MTU 1500)
[  178.496856] IPv6: ADDRCONF(NETDEV_CHANGE): enp101s0f0: link becomes ready
[  178.497789] sfc 0000:65:00.1 enp101s0f1: link up at 40000Mbps
full-duplex (MTU 1500)
[  178.498356] IPv6: ADDRCONF(NETDEV_CHANGE): enp101s0f1: link becomes ready
.

But DPDK does not seem to recognize the card properly; it is visible in
the initial EAL logs:

.
EAL: Detected 28 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:00:1f.6 on NUMA socket 0
EAL:   probe driver: 8086:15b8 net_e1000_em
EAL: PCI device 0000:65:00.0 on NUMA socket 0
EAL:   probe driver: 1924:923 net_sfc_efx
EAL: PCI device 0000:65:00.1 on NUMA socket 0
EAL:   probe driver: 1924:923 net_sfc_efx
EAL: PCI device 0000:b3:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1015 net_mlx5
net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old
OFED/rdma-core version or firmware configuration
EAL: PCI device 0000:b3:00.1 on NUMA socket 0
EAL:   probe driver: 15b3:1015 net_mlx5
net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old
OFED/rdma-core version or firmware configuration
.

but the net_sfc_efx driver probe does not seem to work, since
rte_eth_dev_count_avail returns only 2 instead of 4 (there's another MLX
card on board).
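
The check on my side is essentially just this (stripped-down sketch,
error handling and EAL arguments omitted):

.
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
        /* initialise EAL so the PMDs probe the PCI devices */
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        /* count the ports successfully taken over by a PMD */
        uint16_t ports = rte_eth_dev_count_avail();
        printf("available ports: %u\n", ports); /* prints 2, I expect 4 */
        return 0;
}
.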

Any suggestion on what might be missing here?

Thanks

-- 
BR, Filip
+48 666 369 823

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-13  9:46 [dpdk-users] Correct setup of sfc Filip Janiszewski
@ 2018-06-13 17:46 ` Andrew Rybchenko
  2018-06-13 19:14   ` Filip Janiszewski
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Rybchenko @ 2018-06-13 17:46 UTC (permalink / raw)
  To: Filip Janiszewski, users

Hi Filip,

On 06/13/2018 12:46 PM, Filip Janiszewski wrote:
> Hi,
>
> I'm trying to test a SF card (Flareon Ultra SFN7142Q Dual-Port 40GbE) in
> our testing box, the details of the device are:
>
> .
> Solarstorm firmware update utility [v7.1.1]
> Copyright Solarflare Communications 2006-2018, Level 5 Networks 2002-2005
>
> enp101s0f0 - MAC: 00-0F-53-2C-3A-10
>      Firmware version:   v7.1.1
>      Controller type:    Solarflare SFC9100 family
>      Controller version: v6.2.7.1000
>      Boot ROM version:   v5.1.0.1005
>
> The Boot ROM firmware is up to date
> The controller firmware is up to date
>
> enp101s0f1 - MAC: 00-0F-53-2C-3A-11
>      Firmware version:   v7.1.1
>      Controller type:    Solarflare SFC9100 family
>      Controller version: v6.2.7.1000
>      Boot ROM version:   v5.1.0.1005
>
> The Boot ROM firmware is up to date
> The controller firmware is up to date
> .
>
> The DPDK lib (version 18.05) has been build with
> CONFIG_RTE_LIBRTE_SFC_EFX_PMD=y as per instructions, the sfc driver
> seems to be loaded, the output of 'lsmod | grep sfc' is:
>
> .
> sfc                   470393  0
> .

PCI devices of the Solarflare NIC should be bound to the vfio-pci,
uio_pci_generic or igb_uio (part of DPDK) module. In the case of
Solarflare NICs, the Linux driver is not required and not used by DPDK.

So, you should load one of the above modules (depending on your server's
IOMMU configuration), bring the already created interfaces down and
rebind the Solarflare PCI functions to the chosen driver, something like:

modprobe vfio-pci
ip link set enp101s0f0 down
ip link set enp101s0f1 down
dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1

The above assumes that the dpdk-devbind.py script is in PATH.
Then start DPDK as you did before.
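
To verify the binding you can run something like:

./dpdk-devbind.py --status-dev net

and both PCI functions should appear under "Network devices using
DPDK-compatible driver".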

Regards,
Andrew.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-13 17:46 ` Andrew Rybchenko
@ 2018-06-13 19:14   ` Filip Janiszewski
  2018-06-15  9:44     ` Andrew Rybchenko
  2018-06-15 15:34     ` Rosen, Rami
  0 siblings, 2 replies; 7+ messages in thread
From: Filip Janiszewski @ 2018-06-13 19:14 UTC (permalink / raw)
  To: Andrew Rybchenko, users

Hi Andrew,


> PCI devices of Solarflare NIC should be bound to vfio, uio-pci-generic or
> igb_uio (part of DPDK) module. In the case of Solarflare NICs, Linux
> driver is
> not required and not used in DPDK.
> 
> So, you should load one of above modules (depending on your server
> IOMMU configuration), push already created interfaces down and rebind
> Solarflare PCI functions to the driver, something like:
> 
> modprobe vfio-pci
> ip link set enp101s0f0 down
> ip link set enp101s0f1 down
> dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
> 
> The above assumes that dpdk-devbind.py script is in PATH.
> And start DPDK as you do before.
> 

For some reason vfio-pci is refusing to bind the device:

.
root usertools : ./dpdk-devbind.py --bind=vfio_pci 0000:65:00.0 0000:65:00.1
Error: bind failed for 0000:65:00.0 - Cannot open
/sys/bus/pci/drivers/vfio_pci/bind
Error: bind failed for 0000:65:00.1 - Cannot open
/sys/bus/pci/drivers/vfio_pci/bind
.

also:

.
root usertools : echo 0000:65:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
bash: echo: write error: No such device
.

In the kernel command line I've included 'iommu=pt intel_iommu=on', but
it is still not working. I guess this is not a DPDK issue anymore.
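
(For completeness, this is roughly how the parameters were added,
assuming a GRUB2 setup; the exact update command depends on the distro:)

.
# append 'iommu=pt intel_iommu=on' to GRUB_CMDLINE_LINUX in /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg   # or 'update-grub' on Debian/Ubuntu
reboot
cat /proc/cmdline                        # confirm the parameters are present
.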

Thanks!

> Regards,
> Andrew.

-- 
BR, Filip
+48 666 369 823

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-13 19:14   ` Filip Janiszewski
@ 2018-06-15  9:44     ` Andrew Rybchenko
  2018-06-15 17:02       ` Filip Janiszewski
  2018-06-15 15:34     ` Rosen, Rami
  1 sibling, 1 reply; 7+ messages in thread
From: Andrew Rybchenko @ 2018-06-15  9:44 UTC (permalink / raw)
  To: Filip Janiszewski, users

On 06/13/2018 10:14 PM, Filip Janiszewski wrote:
> Hi Andrew,
>
>
>> PCI devices of Solarflare NIC should be bound to vfio, uio-pci-generic or
>> igb_uio (part of DPDK) module. In the case of Solarflare NICs, Linux
>> driver is
>> not required and not used in DPDK.
>>
>> So, you should load one of above modules (depending on your server
>> IOMMU configuration), push already created interfaces down and rebind
>> Solarflare PCI functions to the driver, something like:
>>
>> modprobe vfio-pci
>> ip link set enp101s0f0 down
>> ip link set enp101s0f1 down
>> dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
>>
>> The above assumes that dpdk-devbind.py script is in PATH.
>> And start DPDK as you do before.
>>
> For some reason vfio-pci is refusing to bind the device:
>
> .
> root usertools : ./dpdk-devbind.py --bind=vfio_pci 0000:65:00.0 0000:65:00.1
> Error: bind failed for 0000:65:00.0 - Cannot open
> /sys/bus/pci/drivers/vfio_pci/bind
> Error: bind failed for 0000:65:00.1 - Cannot open
> /sys/bus/pci/drivers/vfio_pci/bind
> .
>
> also:
>
> .
> root usertools : echo 0000:65:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
> bash: echo: write error: No such device
> .
>
> In the kernel command line I've included 'iommu=pt intel_iommu=on', but
> still not working. Well I guess this is not a DPDK issue anymore.

What does the following command show?
./dpdk-devbind.py --status-dev net

Is IOMMU (VT-d) enabled in BIOS/UEFI? You can check for files/symlinks 
in /sys/class/iommu.

As an experiment I'd try uio_pci_generic as well.
Also, the exact error logs from dmesg could be useful to understand
what's going on.
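
In other words, something along these lines should give the whole
picture (adjust the grep pattern as needed):

ls -l /sys/class/iommu
dmesg | grep -i -e DMAR -e IOMMU -e vfio
./dpdk-devbind.py --status-dev net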

Andrew.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-13 19:14   ` Filip Janiszewski
  2018-06-15  9:44     ` Andrew Rybchenko
@ 2018-06-15 15:34     ` Rosen, Rami
  1 sibling, 0 replies; 7+ messages in thread
From: Rosen, Rami @ 2018-06-15 15:34 UTC (permalink / raw)
  To: Filip Janiszewski, Andrew Rybchenko, users

Hi Filip,

What does 'ls -al /sys/bus/pci/drivers' show? In case you have
"vfio-pci" there and not "vfio_pci" (as many kernels do), can you
please try:

./dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1

Instead of 

./dpdk-devbind.py --bind=vfio_pci 0000:65:00.0 0000:65:00.1

(notice the difference in the "--bind" parameter)

And if it also fails, please post the error here along with the kernel
log messages (you can get them, for example, with 'dmesg | tail -n 15').

Regards,
Rami Rosen

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-15  9:44     ` Andrew Rybchenko
@ 2018-06-15 17:02       ` Filip Janiszewski
  2018-06-15 17:41         ` Filip Janiszewski
  0 siblings, 1 reply; 7+ messages in thread
From: Filip Janiszewski @ 2018-06-15 17:02 UTC (permalink / raw)
  To: Andrew Rybchenko, users, rami.rosen

Adding Rami Rosen here as well, to keep the discussion in one thread.

First of all, thanks for replying; here's the current status:

It seems that there might be some problems with IOMMU, according to
dmesg logs:

.
root build : dmesg | grep IOMMU
[    0.000000] DMAR: IOMMU enabled
[ 1179.652950] vboxpci: IOMMU not found (not registered)
.

In particular, the second line is a clear warning that the IOMMU is not
enabled; I will try to work this out through the BIOS options. Also, as
Andrew suggested, I've had a look at /sys/class/iommu and it's empty:

.
root build : ll /sys/class/iommu/
total 0
.

Anyway the output of --status-dev net is:

.
root usertools : ./dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:00:1f.6 'Ethernet Connection (2) I219-V 15b8' if=enp0s31f6
drv=e1000e unused= *Active*
0000:65:00.0 'SFC9140 10/40G Ethernet Controller 0923' if=enp101s0f0
drv=sfc unused=
0000:65:00.1 'SFC9140 10/40G Ethernet Controller 0923' if=enp101s0f1
drv=sfc unused=
0000:b3:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f0
drv=mlx5_core unused=
0000:b3:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f1
drv=mlx5_core unused=

Other Network devices
=====================
<none>
.

I've been able to use the Solarflare device by binding it to the
uio_pci_generic driver:

.
root usertools : ./dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:65:00.0 'SFC9140 10/40G Ethernet Controller 0923'
drv=uio_pci_generic unused=sfc
0000:65:00.1 'SFC9140 10/40G Ethernet Controller 0923'
drv=uio_pci_generic unused=sfc

Network devices using kernel driver
===================================
0000:00:1f.6 'Ethernet Connection (2) I219-V 15b8' if=enp0s31f6
drv=e1000e unused=uio_pci_generic *Active*
0000:b3:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f0
drv=mlx5_core unused=uio_pci_generic
0000:b3:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f1
drv=mlx5_core unused=uio_pci_generic
.

and the card is properly recognized:

.
EAL: PCI device 0000:00:1f.6 on NUMA socket 0
EAL:   probe driver: 8086:15b8 net_e1000_em
EAL: PCI device 0000:65:00.0 on NUMA socket 0
EAL:   probe driver: 1924:923 net_sfc_efx
PMD: sfc_efx 0000:65:00.0 #0: running FW variant is ultra-low-latency
PMD: sfc_efx 0000:65:00.0 #0: use ef10 Rx datapath
PMD: sfc_efx 0000:65:00.0 #0: use ef10 Tx datapath
EAL: PCI device 0000:65:00.1 on NUMA socket 0
EAL:   probe driver: 1924:923 net_sfc_efx
PMD: sfc_efx 0000:65:00.1 #1: running FW variant is ultra-low-latency
PMD: sfc_efx 0000:65:00.1 #1: use ef10 Rx datapath
PMD: sfc_efx 0000:65:00.1 #1: use ef10 Tx datapath
.
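
(For reference, the uio_pci_generic bind itself was roughly:)

.
modprobe uio_pci_generic
./dpdk-devbind.py --bind=uio_pci_generic 0000:65:00.0 0000:65:00.1
.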

On the other hand, attempting to bind to vfio-pci as Rami suggested is
not working:

.
root usertools : ./dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
Error: bind failed for 0000:65:00.0 - Cannot bind to driver vfio-pci
Error: bind failed for 0000:65:00.1 - Cannot bind to driver vfio-pci
.

And as requested, those are the dmesg logs:

.
[ 2146.354434] VFIO - User Level meta-driver version: 0.3
[ 2159.418937] vfio-pci: probe of 0000:65:00.0 failed with error -22
[ 2159.434537] vfio-pci: probe of 0000:65:00.1 failed with error -22
[ 2199.773476] vfio-pci: probe of 0000:65:00.0 failed with error -22
[ 2199.788973] vfio-pci: probe of 0000:65:00.1 failed with error -22
.

Which, as far as I understand, refers to the IOMMU not being enabled on
the machine. I guess I'll have to find some specific option for that, as
the BIOS suggests that "Intel Virtualization" is enabled, but that might
not be the option we need here.

Filip

Il 15/06/18 11:44, Andrew Rybchenko ha scritto:
> On 06/13/2018 10:14 PM, Filip Janiszewski wrote:
>> Hi Andrew,
>>
>>
>>> PCI devices of Solarflare NIC should be bound to vfio,
>>> uio-pci-generic or
>>> igb_uio (part of DPDK) module. In the case of Solarflare NICs, Linux
>>> driver is
>>> not required and not used in DPDK.
>>>
>>> So, you should load one of above modules (depending on your server
>>> IOMMU configuration), push already created interfaces down and rebind
>>> Solarflare PCI functions to the driver, something like:
>>>
>>> modprobe vfio-pci
>>> ip link set enp101s0f0 down
>>> ip link set enp101s0f1 down
>>> dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
>>>
>>> The above assumes that dpdk-devbind.py script is in PATH.
>>> And start DPDK as you do before.
>>>
>> For some reason vfio-pci is refusing to bind the device:
>>
>> .
>> root usertools : ./dpdk-devbind.py --bind=vfio_pci 0000:65:00.0
>> 0000:65:00.1
>> Error: bind failed for 0000:65:00.0 - Cannot open
>> /sys/bus/pci/drivers/vfio_pci/bind
>> Error: bind failed for 0000:65:00.1 - Cannot open
>> /sys/bus/pci/drivers/vfio_pci/bind
>> .
>>
>> also:
>>
>> .
>> root usertools : echo 0000:65:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
>> bash: echo: write error: No such device
>> .
>>
>> In the kernel command line I've included 'iommu=pt intel_iommu=on', but
>> still not working. Well I guess this is not a DPDK issue anymore.
> 
> What does the following command shows?
> ./dpdk-devbind.py --status-dev net
> 
> Is IOMMU (VT-d) enabled in BIOS/UEFI? You can check for files/symlinks
> in /sys/class/iommu.
> 
> As an experiment I'd try uio-pci-generic as well.
> Also exact error logs from dmesg could be useful to understand what's
> going on.
> 
> Andrew.

-- 
BR, Filip
+48 666 369 823

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] Correct setup of sfc
  2018-06-15 17:02       ` Filip Janiszewski
@ 2018-06-15 17:41         ` Filip Janiszewski
  0 siblings, 0 replies; 7+ messages in thread
From: Filip Janiszewski @ 2018-06-15 17:41 UTC (permalink / raw)
  To: Andrew Rybchenko, users, rami.rosen

Hi,

Apparently there was an additional option to enable in the BIOS; now I'm
able to bind to vfio-pci and the IOMMU is working correctly:

.
root usertools : ./dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver
============================================
0000:65:00.0 'SFC9140 10/40G Ethernet Controller 0923' drv=vfio-pci
unused=sfc
0000:65:00.1 'SFC9140 10/40G Ethernet Controller 0923' drv=vfio-pci
unused=sfc

Network devices using kernel driver
===================================
0000:00:1f.6 'Ethernet Connection (2) I219-V 15b8' if=enp0s31f6
drv=e1000e unused=vfio-pci *Active*
0000:b3:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f0
drv=mlx5_core unused=vfio-pci
0000:b3:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f1
drv=mlx5_core unused=vfio-pci

Other Network devices
=====================
<none>

.

also:

.
root build : ll /sys/class/iommu/
total 0
lrwxrwxrwx. 1 root root 0 Jun 15  2018 dmar0 ->
../../devices/virtual/iommu/dmar0
lrwxrwxrwx. 1 root root 0 Jun 15  2018 dmar1 ->
../../devices/virtual/iommu/dmar1
lrwxrwxrwx. 1 root root 0 Jun 15  2018 dmar2 ->
../../devices/virtual/iommu/dmar2
lrwxrwxrwx. 1 root root 0 Jun 15  2018 dmar3 ->
../../devices/virtual/iommu/dmar3

.

Thanks everybody for the support.


Il 15/06/18 19:02, Filip Janiszewski ha scritto:
> Adding here also Rami Rosen to continue just one thread.
> 
> First of all thanks for replying, now here's the current status:
> 
> It seems that there might be some problems with IOMMU, according to
> dmesg logs:
> 
> .
> root build : dmesg | grep IOMMU
> [    0.000000] DMAR: IOMMU enabled
> [ 1179.652950] vboxpci: IOMMU not found (not registered)
> .
> 
> In particular the second line is a clear warning that IOMMU is not
> enabled, will try to work this out through the bios options. Also, as
> Andrew suggested I've had a look at /sys/class/iommu and it's empty:
> 
> .
> root build : ll /sys/class/iommu/
> total 0
> .
> 
> Anyway the output of --status-dev net is:
> 
> .
> root usertools : ./dpdk-devbind.py --status-dev net
> 
> Network devices using DPDK-compatible driver
> ============================================
> <none>
> 
> Network devices using kernel driver
> ===================================
> 0000:00:1f.6 'Ethernet Connection (2) I219-V 15b8' if=enp0s31f6
> drv=e1000e unused= *Active*
> 0000:65:00.0 'SFC9140 10/40G Ethernet Controller 0923' if=enp101s0f0
> drv=sfc unused=
> 0000:65:00.1 'SFC9140 10/40G Ethernet Controller 0923' if=enp101s0f1
> drv=sfc unused=
> 0000:b3:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f0
> drv=mlx5_core unused=
> 0000:b3:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f1
> drv=mlx5_core unused=
> 
> Other Network devices
> =====================
> <none>
> .
> 
> I've been able to use the Solarflare device by binding it to the
> uio_pci_generic driver:
> 
> .
> root usertools : ./dpdk-devbind.py --status-dev net
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:65:00.0 'SFC9140 10/40G Ethernet Controller 0923'
> drv=uio_pci_generic unused=sfc
> 0000:65:00.1 'SFC9140 10/40G Ethernet Controller 0923'
> drv=uio_pci_generic unused=sfc
> 
> Network devices using kernel driver
> ===================================
> 0000:00:1f.6 'Ethernet Connection (2) I219-V 15b8' if=enp0s31f6
> drv=e1000e unused=uio_pci_generic *Active*
> 0000:b3:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f0
> drv=mlx5_core unused=uio_pci_generic
> 0000:b3:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp179s0f1
> drv=mlx5_core unused=uio_pci_generic
> .
> 
> and the card is properly recognized:
> 
> .
> EAL: PCI device 0000:00:1f.6 on NUMA socket 0
> EAL:   probe driver: 8086:15b8 net_e1000_em
> EAL: PCI device 0000:65:00.0 on NUMA socket 0
> EAL:   probe driver: 1924:923 net_sfc_efx
> PMD: sfc_efx 0000:65:00.0 #0: running FW variant is ultra-low-latency
> PMD: sfc_efx 0000:65:00.0 #0: use ef10 Rx datapath
> PMD: sfc_efx 0000:65:00.0 #0: use ef10 Tx datapath
> EAL: PCI device 0000:65:00.1 on NUMA socket 0
> EAL:   probe driver: 1924:923 net_sfc_efx
> PMD: sfc_efx 0000:65:00.1 #1: running FW variant is ultra-low-latency
> PMD: sfc_efx 0000:65:00.1 #1: use ef10 Rx datapath
> PMD: sfc_efx 0000:65:00.1 #1: use ef10 Tx datapath
> .
> 
> On the other hand, attempting to bind to vfio-pci as Rami suggested is
> not working:
> 
> .
> root usertools : ./dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
> Error: bind failed for 0000:65:00.0 - Cannot bind to driver vfio-pci
> Error: bind failed for 0000:65:00.1 - Cannot bind to driver vfio-pci
> .
> 
> And as requested, those are the dmesg logs:
> 
> .
> [ 2146.354434] VFIO - User Level meta-driver version: 0.3
> [ 2159.418937] vfio-pci: probe of 0000:65:00.0 failed with error -22
> [ 2159.434537] vfio-pci: probe of 0000:65:00.1 failed with error -22
> [ 2199.773476] vfio-pci: probe of 0000:65:00.0 failed with error -22
> [ 2199.788973] vfio-pci: probe of 0000:65:00.1 failed with error -22
> .
> 
> Which, as far as I understand, refers to the IOMMU not being enabled on
> the machine. I guess I'll have to find some specific option for that, as
> the BIOS suggests that "Intel Virtualization" is enabled, but that might
> not be the option we need here.
> 
> Filip
> 
> Il 15/06/18 11:44, Andrew Rybchenko ha scritto:
>> On 06/13/2018 10:14 PM, Filip Janiszewski wrote:
>>> Hi Andrew,
>>>
>>>
>>>> PCI devices of Solarflare NIC should be bound to vfio,
>>>> uio-pci-generic or
>>>> igb_uio (part of DPDK) module. In the case of Solarflare NICs, Linux
>>>> driver is
>>>> not required and not used in DPDK.
>>>>
>>>> So, you should load one of above modules (depending on your server
>>>> IOMMU configuration), push already created interfaces down and rebind
>>>> Solarflare PCI functions to the driver, something like:
>>>>
>>>> modprobe vfio-pci
>>>> ip link set enp101s0f0 down
>>>> ip link set enp101s0f1 down
>>>> dpdk-devbind.py --bind=vfio-pci 0000:65:00.0 0000:65:00.1
>>>>
>>>> The above assumes that dpdk-devbind.py script is in PATH.
>>>> And start DPDK as you do before.
>>>>
>>> For some reason vfio-pci is refusing to bind the device:
>>>
>>> .
>>> root usertools : ./dpdk-devbind.py --bind=vfio_pci 0000:65:00.0
>>> 0000:65:00.1
>>> Error: bind failed for 0000:65:00.0 - Cannot open
>>> /sys/bus/pci/drivers/vfio_pci/bind
>>> Error: bind failed for 0000:65:00.1 - Cannot open
>>> /sys/bus/pci/drivers/vfio_pci/bind
>>> .
>>>
>>> also:
>>>
>>> .
>>> root usertools : echo 0000:65:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
>>> bash: echo: write error: No such device
>>> .
>>>
>>> In the kernel command line I've included 'iommu=pt intel_iommu=on', but
>>> still not working. Well I guess this is not a DPDK issue anymore.
>>
>> What does the following command shows?
>> ./dpdk-devbind.py --status-dev net
>>
>> Is IOMMU (VT-d) enabled in BIOS/UEFI? You can check for files/symlinks
>> in /sys/class/iommu.
>>
>> As an experiment I'd try uio-pci-generic as well.
>> Also exact error logs from dmesg could be useful to understand what's
>> going on.
>>
>> Andrew.
> 

-- 
BR, Filip
+48 666 369 823

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2018-06-15 17:42 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-13  9:46 [dpdk-users] Correct setup of sfc Filip Janiszewski
2018-06-13 17:46 ` Andrew Rybchenko
2018-06-13 19:14   ` Filip Janiszewski
2018-06-15  9:44     ` Andrew Rybchenko
2018-06-15 17:02       ` Filip Janiszewski
2018-06-15 17:41         ` Filip Janiszewski
2018-06-15 15:34     ` Rosen, Rami
