DPDK usage discussions
* [dpdk-users] only one vdev net_af_xdp being recognized
@ 2019-07-08 11:18 Jags N
  2019-07-10  1:22 ` Jags N
  0 siblings, 1 reply; 8+ messages in thread
From: Jags N @ 2019-07-08 11:18 UTC (permalink / raw)
  To: users

Hi,

I am trying to understand net_af_xdp, and I find that DPDK recognizes only
one net_af_xdp vdev, so only one port (port 0) is getting configured.
Requesting help to know whether I am missing any information on net_af_xdp
support in DPDK, or whether I have provided the EAL parameters wrongly.
Kindly advise.

I am running Fedora 30.1-2 as a guest VM on VirtualBox, with Linux kernel
5.1.0 and dpdk-19.05. The interfaces are the emulated ones listed below,

lspci output ...
00:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
Controller (Copper) (rev 02)
00:0a.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
Controller (Copper) (rev 02)

DPDK testpmd is executed as mentioned below,

[root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev net_af_xdp,iface=enp0s9
 --vdev net_af_xdp,iface=enp0s10 --iova-mode=va -- --portmask=0x3
EAL: Detected 3 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:09.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:00:0a.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 08:00:27:68:5B:66
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0
 ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0

----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all
ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Bye...

Regards,
Jags

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-08 11:18 [dpdk-users] only one vdev net_af_xdp being recognized Jags N
@ 2019-07-10  1:22 ` Jags N
  2019-07-10  8:59   ` Ye Xiaolong
  0 siblings, 1 reply; 8+ messages in thread
From: Jags N @ 2019-07-10  1:22 UTC (permalink / raw)
  To: users

Hi,

Continuing on my previous email,

The https://doc.dpdk.org/guides/rel_notes/release_19_08.html release notes
say: "Added multi-queue support to allow one af_xdp vdev with multiple
netdev queues".

Does this in any way imply that only one af_xdp vdev is supported as of
now, and that more than one af_xdp vdev may not be recognized?

Regards,
Jags

On Mon, Jul 8, 2019 at 4:48 PM Jags N <jagsnn@gmail.com> wrote:

> <Truncated quoted email for clarity>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-10  1:22 ` Jags N
@ 2019-07-10  8:59   ` Ye Xiaolong
  2019-07-11  2:21     ` Jags N
  0 siblings, 1 reply; 8+ messages in thread
From: Ye Xiaolong @ 2019-07-10  8:59 UTC (permalink / raw)
  To: Jags N; +Cc: users

Hi,

On 07/10, Jags N wrote:
>Hi,
>
>Continuing on my previous email,
>
>https://doc.dpdk.org/guides/rel_notes/release_19_08.html  release not says
>- Added multi-queue support to allow one af_xdp vdev with multiple netdev
>queues
>
>Does it in anyway imply only one af_xdp vdev is supported as of now, and
>more than one af_xdp vdev may not be recognized ?

Multiple af_xdp vdevs are supported.

>
>Regards,
>Jags
>
>On Mon, Jul 8, 2019 at 4:48 PM Jags N <jagsnn@gmail.com> wrote:
>
>> Hi,
>>
>> I am trying to understand net_af_xdp, and find that dpdk is recognizing
>> only one vdev net_af_xdp, hence only one port (port 0) is getting
>> configured. Requesting help to know if I am missing any information on
>> net_af_xdp support in dpdk, or if I have provided the EAL parameters wrong.
>> Kindly advice.
>>
>> I am running Fedora 30.1-2 as Guest VM on Virtual Box VM Manager with
>> Linux Kernel 5.1.0, and dpdk-19.05. The interfaces are emulated ones
>> mentioned below,
>>
>> lspci output ...
>> 00:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> Controller (Copper) (rev 02)
>> 00:0a.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> Controller (Copper) (rev 02)
>>
>> DPDK testpmd is executed as mentioned below,
>>
>> [root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
>> net_af_xdp,iface=enp0s9  --vdev net_af_xdp,iface=enp0s10 --iova-mode=va --
>> --portmask=0x3

Here you need to use

--vdev net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10

Thanks,
Xiaolong
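Spelled out, the corrected invocation looks like this (all other flags are
copied from the original command in the thread; the key change is giving
each vdev a unique instance name, since two --vdev arguments with the
identical name net_af_xdp collapse into a single device):

```shell
# Each --vdev needs a unique device name (net_af_xdp0, net_af_xdp1);
# the iface= argument then binds that vdev to a kernel netdev.
./testpmd -c 0x3 -n 4 \
    --vdev net_af_xdp0,iface=enp0s9 \
    --vdev net_af_xdp1,iface=enp0s10 \
    --iova-mode=va -- --portmask=0x3
```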

>> <Truncated tail part of email for clarity>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-10  8:59   ` Ye Xiaolong
@ 2019-07-11  2:21     ` Jags N
  2019-07-11  9:15       ` Ye Xiaolong
  0 siblings, 1 reply; 8+ messages in thread
From: Jags N @ 2019-07-11  2:21 UTC (permalink / raw)
  To: Ye Xiaolong; +Cc: users

Hi Xiaolong,

Thanks much !  That works.

I am now facing - xsk_configure(): Failed to create xsk socket.

Port 0 is fine, Port 1 is showing the problem.

I am checking "tools/lib/bpf/xsk.c:xsk_socket__create()" further on this.
Meanwhile, just asking whether there are any obvious reasons, or whether I
am missing anything?

[root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
 net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10 --iova-mode=va
EAL: Detected 3 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Debug dataplane logs available - lower performance
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using unreliable
clock cycles !
EAL: PCI device 0000:00:03.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100e net_e1000_em
EAL: PCI device 0000:00:09.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:00:0a.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 08:00:27:68:5B:66
Configuring Port 1 (socket 0)
xsk_configure(): Failed to create xsk socket.
eth_rx_queue_setup(): Failed to configure xdp socket
Fail to configure port 1 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed
[root@localhost app]#

Regards,
Jags

On Wed, Jul 10, 2019 at 7:47 AM Ye Xiaolong <xiaolong.ye@intel.com> wrote:

> Hi,
>
> On 07/10, Jags N wrote:
> >Hi,
> >
> >Continuing on my previous email,
> >
> >https://doc.dpdk.org/guides/rel_notes/release_19_08.html  release not
> says
> >- Added multi-queue support to allow one af_xdp vdev with multiple netdev
> >queues
> >
> >Does it in anyway imply only one af_xdp vdev is supported as of now, and
> >more than one af_xdp vdev may not be recognized ?
>
> Multiple af_xdp vdevs are supported.
>
> >
> >Regards,
> >Jags
> >
> >On Mon, Jul 8, 2019 at 4:48 PM Jags N <jagsnn@gmail.com> wrote:
> >
> >> Hi,
> >>
> >> I am trying to understand net_af_xdp, and find that dpdk is recognizing
> >> only one vdev net_af_xdp, hence only one port (port 0) is getting
> >> configured. Requesting help to know if I am missing any information on
> >> net_af_xdp support in dpdk, or if I have provided the EAL parameters
> wrong.
> >> Kindly advice.
> >>
> >> I am running Fedora 30.1-2 as Guest VM on Virtual Box VM Manager with
> >> Linux Kernel 5.1.0, and dpdk-19.05. The interfaces are emulated ones
> >> mentioned below,
> >>
> >> lspci output ...
> >> 00:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
> >> Controller (Copper) (rev 02)
> >> 00:0a.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
> >> Controller (Copper) (rev 02)
> >>
> >> DPDK testpmd is executed as mentioned below,
> >>
> >> [root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
> >> net_af_xdp,iface=enp0s9  --vdev net_af_xdp,iface=enp0s10 --iova-mode=va
> --
> >> --portmask=0x3
>
> Here you need to use
>
> --vdev net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10
>
> Thanks,
> Xiaolong
>
> >> <Truncated tail part of email for clarity>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-11  9:15       ` Ye Xiaolong
@ 2019-07-11  6:18         ` Jags N
  2019-07-13 17:11           ` Jags N
  0 siblings, 1 reply; 8+ messages in thread
From: Jags N @ 2019-07-11  6:18 UTC (permalink / raw)
  To: Ye Xiaolong; +Cc: users

Hi,

<Truncated tail part of email for clarity>

I do check with "ip link list" and turn off XDP with "ip link set dev
<dev> xdp off". I had even confirmed with bpftool that no residual maps
were left before executing testpmd.

[root@localhost app]# bpftool map
15: lpm_trie  flags 0x1
        key 8B  value 8B  max_entries 1  memlock 4096B
16: lpm_trie  flags 0x1
        key 20B  value 8B  max_entries 1  memlock 4096B
17: lpm_trie  flags 0x1
        key 8B  value 8B  max_entries 1  memlock 4096B
18: lpm_trie  flags 0x1
        key 20B  value 8B  max_entries 1  memlock 4096B
19: lpm_trie  flags 0x1
        key 8B  value 8B  max_entries 1  memlock 4096B
20: lpm_trie  flags 0x1
        key 20B  value 8B  max_entries 1  memlock 4096B
[root@localhost app]#

Another observation is that only the first vdev in the EAL argument
sequence comes up. If I swap enp0s9 and enp0s10, then Port 0 succeeds with
an xdp socket on enp0s10 and Port 1 fails to create an xdp socket on
enp0s9. So basically only the first vdev succeeds in xdp socket creation.

With EAL argument :      --vdev  net_af_xdp0,iface=enp0s9 --vdev
net_af_xdp1,iface=enp0s10

4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdpgeneric qdisc
fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:68:5b:66 brd ff:ff:ff:ff:ff:ff
    prog/xdp id 47 tag 688894a68871a50f jited
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6f:f4:61 brd ff:ff:ff:ff:ff:ff

With EAL argument :  --vdev  net_af_xdp0,iface=enp0s10 --vdev
net_af_xdp1,iface=enp0s9

4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:68:5b:66 brd ff:ff:ff:ff:ff:ff
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 xdpgeneric qdisc
fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6f:f4:61 brd ff:ff:ff:ff:ff:ff
    prog/xdp id 46 tag 688894a68871a50f jited

Let me check further.

Regards,
Jagdish

On Thu, Jul 11, 2019 at 8:03 AM Ye Xiaolong <xiaolong.ye@intel.com> wrote:

> Hi,
>
> On 07/11, Jags N wrote:
> >Hi Xiaolong,
> >
> >Thanks much !  That works.
> >
> >I am now facing - xsk_configure(): Failed to create xsk socket.
> >
> >Port 0 is fine, Port 1 is showing the problem.
>
> Has port 1 been brought up?
> Another reason may be that you've run with port 1 before and it somehow
> exited without proper cleanup of the XDP program (you can verify this
> with `./bpftool map -p` to see whether a leftover xskmap exists; you can
> build bpftool in tools/bpf/bpftool). You can also try rebooting your
> system and trying again.
>
> Thanks,
> Xiaolong
>
> >
> >I am checking "tools/lib/bpf/xsk.c:xsk_socket__create()" further on this.
> >Meanwhile just asking if any obvious reasons, if I am missing anything ?
> >
> ><Truncated tail part of email for clarity>
>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-11  2:21     ` Jags N
@ 2019-07-11  9:15       ` Ye Xiaolong
  2019-07-11  6:18         ` Jags N
  0 siblings, 1 reply; 8+ messages in thread
From: Ye Xiaolong @ 2019-07-11  9:15 UTC (permalink / raw)
  To: Jags N; +Cc: users

Hi,

On 07/11, Jags N wrote:
>Hi Xiaolong,
>
>Thanks much !  That works.
>
>I am now facing - xsk_configure(): Failed to create xsk socket.
>
>Port 0 is fine, Port 1 is showing the problem.

Has port 1 been brought up?
Another reason may be that you've run with port 1 before and it somehow
exited without proper cleanup of the XDP program (you can verify this with
`./bpftool map -p` to see whether a leftover xskmap exists; you can build
bpftool in tools/bpf/bpftool). You can also try rebooting your system and
trying again.

Thanks,
Xiaolong
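The cleanup check described above can be sketched as the following commands
(interface names are taken from the thread; the bpftool path assumes it was
built in tools/bpf/bpftool of the kernel tree):

```shell
# Detach any XDP program left behind on the interfaces by a previous run.
ip link set dev enp0s9 xdp off
ip link set dev enp0s10 xdp off

# List loaded BPF maps; a leftover 'xskmap' entry suggests the previous
# AF_XDP program was not cleaned up properly.
./bpftool map
```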

><Truncated quoted emails for clarity>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-11  6:18         ` Jags N
@ 2019-07-13 17:11           ` Jags N
  2019-07-14  8:01             ` Ye Xiaolong
  0 siblings, 1 reply; 8+ messages in thread
From: Jags N @ 2019-07-13 17:11 UTC (permalink / raw)
  To: Ye Xiaolong; +Cc: users

Hi Xiaolong,

Played around a bit and found that for the 2nd vdev that was failing,
./tools/lib/bpf/xsk.c:xsk_create_bpf_maps() was succeeding for
"qidconf_map" but failing for the "xsks_map" map creation. I guessed it
could be related to memory.

I tried increasing the max locked memory, as suggested in many BPF links
available on the net, with "ulimit -l 128", and I am now able to run
testpmd with two vdevs.
Thanks much for the help.

Regards,
Jags

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] only one vdev net_af_xdp being recognized
  2019-07-13 17:11           ` Jags N
@ 2019-07-14  8:01             ` Ye Xiaolong
  0 siblings, 0 replies; 8+ messages in thread
From: Ye Xiaolong @ 2019-07-14  8:01 UTC (permalink / raw)
  To: Jags N; +Cc: users

On 07/13, Jags N wrote:
>Hi Xiaolong,
>
>Played around a bit and found that for the 2nd vdev that was failing,
>./tools/lib/bpf/xsk.c:xsk_create_bpf_maps() was succeeding for
>"qidconf_map", but failing for "xsks_map" map creation. Guessed could be
>related to memory.
>
>Tried increasing the max locked memory as suggested in many bpf links
>available in net, with "ulimit -l 128", and I am able to run testpmd with
>two vdev

Good to know that, thanks for your update.

Thanks,
Xiaolong

>
>Thanks much for the help.
>
>Regards,
>Jags

^ permalink raw reply	[flat|nested] 8+ messages in thread

Thread overview: 8+ messages
-- links below jump to the message on this page --
2019-07-08 11:18 [dpdk-users] only one vdev net_af_xdp being recognized Jags N
2019-07-10  1:22 ` Jags N
2019-07-10  8:59   ` Ye Xiaolong
2019-07-11  2:21     ` Jags N
2019-07-11  9:15       ` Ye Xiaolong
2019-07-11  6:18         ` Jags N
2019-07-13 17:11           ` Jags N
2019-07-14  8:01             ` Ye Xiaolong
