DPDK usage discussions
* test-pmd strange output on AF_XDP
@ 2022-04-29 12:56 Alessio Igor Bogani
  2022-04-29 13:50 ` Loftus, Ciara
  0 siblings, 1 reply; 3+ messages in thread
From: Alessio Igor Bogani @ 2022-04-29 12:56 UTC (permalink / raw)
  To: users

Dear DPDK users,

Sorry for my very bad English.

I'm trying to test DPDK on two systems connected to each other with a
crossover cable. The first is a generic Intel x86_64 machine which uses
net_e1000_igb. The second is an ARMv7 board which uses af_xdp and shows
strange test-pmd output.

Launching test-pmd on the Intel system using:
dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1  --
--port-topology=chained  --forward-mode=txonly  --total-num-mbufs=2048
--stats-period=1 --eth-peer=0,50:51:a9:98:d4:26
produces a reasonable output:
EAL: Detected CPU lcores: 2
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can
be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(2)
EAL: Probe PCI driver: net_e1000_igb (8086:157b) device: 0000:01:00.0 (socket 0)
Set txonly packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:07:32:6F:EF:0D
Checking link statuses...
Done
No commandline core given, start packet forwarding
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=50:51:A9:98:D4:26

  txonly packet forwarding packets/burst=32
  packet len=64 - nb packet segments=1
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=8 hthresh=1  wthresh=16
      TX offloads=0x0 - TX RS bit threshold=0
  ######################## NIC statistics for port 0  ########################
  RX-packets: 11         RX-missed: 0          RX-bytes:  932
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 67473342   TX-errors: 0          TX-bytes:  4318294500

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:      1420391          Tx-bps:    727240920
  ############################################################################

On the ARMv7 I launched test-pmd using:
dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1 --vdev
net_af_xdp0,iface=eth0  -- --port-topology=chained
--forward-mode=rxonly --total-num-mbufs=2048 --stats-period=1
--eth-peer=0,00:07:32:6F:EF:0D
producing output that seems strange to me:
EAL: Detected CPU lcores: 2
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can
be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
Set rxonly packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
libbpf: elf: skipping unrecognized data section(8) xdp_metadata
libxdp: XDP flag not supported by libxdp.
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_dispatcher': BPF program load failed: Invalid argument
libbpf: prog 'xdp_dispatcher': -- BEGIN PROG LOAD LOG --
Validating prog0() func#1...
btf_vmlinux is malformed
Arg#0 type PTR in prog0() is not supported yet.
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0
peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: failed to load program 'xdp_dispatcher'
libbpf: failed to load object '/usr/local/lib/bpf//xdp-dispatcher.o'
libxdp: Failed to load dispatcher: Invalid argument
libxdp: Falling back to loading single prog without dispatcher
Port 0: 50:51:A9:98:D4:26
Checking link statuses...
Done
No commandline core given, start packet forwarding
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:07:32:6F:EF:0D

  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
 ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 7363489    RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

Is this the expected output? What did I do wrong?

Thanks!

Ciao,
Alessio

^ permalink raw reply	[flat|nested] 3+ messages in thread

* RE: test-pmd strange output on AF_XDP
  2022-04-29 12:56 test-pmd strange output on AF_XDP Alessio Igor Bogani
@ 2022-04-29 13:50 ` Loftus, Ciara
  2022-04-29 19:00   ` Alessio Igor Bogani
  0 siblings, 1 reply; 3+ messages in thread
From: Loftus, Ciara @ 2022-04-29 13:50 UTC (permalink / raw)
  To: Alessio Igor Bogani; +Cc: users

> 
> Dear DPDK users,
> 
> Sorry for my very bad English.
> 
> I'm trying to test DPDK on two systems connected to each other with a
> crossover cable. The first is a generic Intel x86_64 machine which uses
> net_e1000_igb. The second is an ARMv7 board which uses af_xdp and shows
> strange test-pmd output.
> 
> Launching test-pmd on the Intel system using:
> dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1  --
> --port-topology=chained  --forward-mode=txonly  --total-num-mbufs=2048
> --stats-period=1 --eth-peer=0,50:51:a9:98:d4:26
> produces a reasonable output:
> EAL: Detected CPU lcores: 2
> EAL: Detected NUMA nodes: 1
> EAL: Static memory layout is selected, amount of reserved memory can
> be adjusted with -m or --socket-mem
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> EAL: Ignore mapping IO port bar(2)
> EAL: Probe PCI driver: net_e1000_igb (8086:157b) device: 0000:01:00.0
> (socket 0)
> Set txonly packet forwarding mode
> testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 00:07:32:6F:EF:0D
> Checking link statuses...
> Done
> No commandline core given, start packet forwarding
> txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=50:51:A9:98:D4:26
> 
>   txonly packet forwarding packets/burst=32
>   packet len=64 - nb packet segments=1
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 1 Tx queue number: 1
>     Rx offloads=0x0 Tx offloads=0x0
>     RX queue: 0
>       RX desc=512 - RX free threshold=32
>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       RX Offloads=0x0
>     TX queue: 0
>       TX desc=512 - TX free threshold=0
>       TX threshold registers: pthresh=8 hthresh=1  wthresh=16
>       TX offloads=0x0 - TX RS bit threshold=0
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 11         RX-missed: 0          RX-bytes:  932
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 67473342   TX-errors: 0          TX-bytes:  4318294500
> 
>   Throughput (since last show)
>   Rx-pps:            0          Rx-bps:            0
>   Tx-pps:      1420391          Tx-bps:    727240920
> 
> ############################################################################
> 
> On the ARMv7 I launched test-pmd using:
> dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1 --vdev
> net_af_xdp0,iface=eth0  -- --port-topology=chained
> --forward-mode=rxonly --total-num-mbufs=2048 --stats-period=1
> --eth-peer=0,00:07:32:6F:EF:0D
> producing output that seems strange to me:
> EAL: Detected CPU lcores: 2
> EAL: Detected NUMA nodes: 1
> EAL: Static memory layout is selected, amount of reserved memory can
> be adjusted with -m or --socket-mem
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> Set rxonly packet forwarding mode
> testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
> libbpf: elf: skipping unrecognized data section(8) xdp_metadata
> libxdp: XDP flag not supported by libxdp.
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
> libbpf: prog 'xdp_dispatcher': BPF program load failed: Invalid argument
> libbpf: prog 'xdp_dispatcher': -- BEGIN PROG LOAD LOG --
> Validating prog0() func#1...
> btf_vmlinux is malformed
> Arg#0 type PTR in prog0() is not supported yet.
> processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0
> peak_states 0 mark_read 0
> -- END PROG LOAD LOG --
> libbpf: failed to load program 'xdp_dispatcher'
> libbpf: failed to load object '/usr/local/lib/bpf//xdp-dispatcher.o'
> libxdp: Failed to load dispatcher: Invalid argument
> libxdp: Falling back to loading single prog without dispatcher
> Port 0: 50:51:A9:98:D4:26
> Checking link statuses...
> Done
> No commandline core given, start packet forwarding
> rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:07:32:6F:EF:0D
> 
>   rxonly packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 1 Tx queue number: 1
>     Rx offloads=0x0 Tx offloads=0x0
>     RX queue: 0
>       RX desc=0 - RX free threshold=0
>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       RX Offloads=0x0
>     TX queue: 0
>       TX desc=0 - TX free threshold=0
>       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       TX offloads=0x0 - TX RS bit threshold=0
>  ######################## NIC statistics for port 0  ########################
>   RX-packets: 0          RX-missed: 7363489    RX-bytes:  0
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 0          TX-errors: 0          TX-bytes:  0
> 
>   Throughput (since last show)
>   Rx-pps:            0          Rx-bps:            0
>   Tx-pps:            0          Tx-bps:            0
> 
> ############################################################################
> 
> Is this the expected output? What did I do wrong?

Hi,

This looks like the libxdp issue reported here: https://github.com/xdp-project/xdp-tools/issues/184
It looks like a fix was pushed to libxdp to address it. Could you try that out and see if it solves your issue?
Even without this fix you might find that the PMD still works, as it falls back to a legacy mode for loading the program: "Falling back to loading single prog without dispatcher".
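If it helps, a quick way to tell which load path libxdp took is to scan the captured testpmd output for its fallback marker. This is only an illustrative sketch: the "testpmd.log" file name is hypothetical, and the marker strings are copied from the log you posted above.

```shell
# Illustrative sketch: detect whether libxdp fell back to the legacy
# (no-dispatcher) load path by looking for its log message.
# "testpmd.log" stands in for a capture of the testpmd output.
printf '%s\n' \
  'libxdp: Failed to load dispatcher: Invalid argument' \
  'libxdp: Falling back to loading single prog without dispatcher' \
  > testpmd.log

if grep -q 'Falling back to loading single prog' testpmd.log; then
  echo 'PMD loaded its XDP program in legacy (no dispatcher) mode'
fi
```

If the fallback line is present, the program was attached without the dispatcher, which on its own should not prevent packets from being received.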

Thanks,
Ciara

> 
> Thanks!
> 
> Ciao,
> Alessio


* Re: test-pmd strange output on AF_XDP
  2022-04-29 13:50 ` Loftus, Ciara
@ 2022-04-29 19:00   ` Alessio Igor Bogani
  0 siblings, 0 replies; 3+ messages in thread
From: Alessio Igor Bogani @ 2022-04-29 19:00 UTC (permalink / raw)
  To: Loftus, Ciara; +Cc: users

Hi Ciara,

First of all, thanks for your help!

On Fri, 29 Apr 2022 at 15:50, Loftus, Ciara <ciara.loftus@intel.com> wrote:
[...]
> > Configuring Port 0 (socket 0)
> > libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
> > libbpf: elf: skipping unrecognized data section(8) xdp_metadata
> > libxdp: XDP flag not supported by libxdp.
> > libbpf: elf: skipping unrecognized data section(7) xdp_metadata
> > libbpf: prog 'xdp_dispatcher': BPF program load failed: Invalid argument
> > libbpf: prog 'xdp_dispatcher': -- BEGIN PROG LOAD LOG --
> > Validating prog0() func#1...
> > btf_vmlinux is malformed
> > Arg#0 type PTR in prog0() is not supported yet.
> > processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0
> > peak_states 0 mark_read 0
> > -- END PROG LOAD LOG --
> > libbpf: failed to load program 'xdp_dispatcher'
> > libbpf: failed to load object '/usr/local/lib/bpf//xdp-dispatcher.o'
> > libxdp: Failed to load dispatcher: Invalid argument
> > libxdp: Falling back to loading single prog without dispatcher

> This looks like the libxdp issue reported here: https://github.com/xdp-project/xdp-tools/issues/184
> It looks like a fix was pushed to libxdp to address it. Could you try that out and see if it solves your issue?

Unfortunately I already have that fix: I'm using the tip (HEAD) of both
DPDK and xdp-tools (which contains libxdp).

I'll continue to investigate.

Ciao,
Alessio

