DPDK usage discussions
From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: Alessio Igor Bogani <alessio.bogani@elettra.eu>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: RE: test-pmd strange output on AF_XDP
Date: Fri, 29 Apr 2022 13:50:36 +0000
Message-ID: <PH0PR11MB4791BFA0F01796F39436A22C8EFC9@PH0PR11MB4791.namprd11.prod.outlook.com>
In-Reply-To: <CAPk1OjGkhbzG2QrndZ-GCzGEAF49noJFxbM-TMiMw__fyoHuaA@mail.gmail.com>

> 
> Dear DPDK users,
> 
> Sorry for my very bad English.
> 
> I'm trying to test DPDK on two systems connected to each other with a
> crossover cable. The first one is a generic Intel x86_64 box which uses
> net_e1000_igb. The second one is an ARMv7 board which uses af_xdp and
> shows strange test-pmd output.
> 
> Launching test-pmd on the Intel system using:
> dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1  --
> --port-topology=chained  --forward-mode=txonly  --total-num-mbufs=2048
> --stats-period=1 --eth-peer=0,50:51:a9:98:d4:26
> produces a reasonable output:
> EAL: Detected CPU lcores: 2
> EAL: Detected NUMA nodes: 1
> EAL: Static memory layout is selected, amount of reserved memory can
> be adjusted with -m or --socket-mem
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> EAL: Ignore mapping IO port bar(2)
> EAL: Probe PCI driver: net_e1000_igb (8086:157b) device: 0000:01:00.0
> (socket 0)
> Set txonly packet forwarding mode
> testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 00:07:32:6F:EF:0D
> Checking link statuses...
> Done
> No commandline core given, start packet forwarding
> txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=50:51:A9:98:D4:26
> 
>   txonly packet forwarding packets/burst=32
>   packet len=64 - nb packet segments=1
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 1 Tx queue number: 1
>     Rx offloads=0x0 Tx offloads=0x0
>     RX queue: 0
>       RX desc=512 - RX free threshold=32
>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       RX Offloads=0x0
>     TX queue: 0
>       TX desc=512 - TX free threshold=0
>       TX threshold registers: pthresh=8 hthresh=1  wthresh=16
>       TX offloads=0x0 - TX RS bit threshold=0
>   ######################## NIC statistics for port 0
> ########################
>   RX-packets: 11         RX-missed: 0          RX-bytes:  932
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 67473342   TX-errors: 0          TX-bytes:  4318294500
> 
>   Throughput (since last show)
>   Rx-pps:            0          Rx-bps:            0
>   Tx-pps:      1420391          Tx-bps:    727240920
> 
> ##########################################################
> ##################
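For reference, the options before the standalone "--" in that command are EAL
options (disable telemetry, use anonymous memory instead of hugepages, run on
lcores 0-1, assume one memory channel), and the options after it belong to
testpmd itself (chained port topology, TX-only forwarding, a 2048-mbuf pool,
statistics printed every second, and the peer MAC used as the destination of
the generated frames). Unwrapped, the same invocation is:

  dpdk-testpmd --no-telemetry --no-huge -l 0-1 -n 1 -- \
      --port-topology=chained --forward-mode=txonly \
      --total-num-mbufs=2048 --stats-period=1 \
      --eth-peer=0,50:51:a9:98:d4:26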
> 
> On the ARMv7 I have launched test-pmd using:
> dpdk-testpmd --no-telemetry --no-huge -l0-1 -n1 --vdev
> net_af_xdp0,iface=eth0  -- --port-topology=chained
> --forward-mode=rxonly --total-num-mbufs=2048 --stats-period=1
> --eth-peer=0,00:07:32:6F:EF:0D
> which produces output that looks strange to me:
> EAL: Detected CPU lcores: 2
> EAL: Detected NUMA nodes: 1
> EAL: Static memory layout is selected, amount of reserved memory can
> be adjusted with -m or --socket-mem
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> Set rxonly packet forwarding mode
> testpmd: create a new mbuf pool <mb_pool_0>: n=2048, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> libbpf: elf: skipping unrecognized data section(7) .xdp_run_config
> libbpf: elf: skipping unrecognized data section(8) xdp_metadata
> libxdp: XDP flag not supported by libxdp.
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
> libbpf: prog 'xdp_dispatcher': BPF program load failed: Invalid argument
> libbpf: prog 'xdp_dispatcher': -- BEGIN PROG LOAD LOG --
> Validating prog0() func#1...
> btf_vmlinux is malformed
> Arg#0 type PTR in prog0() is not supported yet.
> processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0
> peak_states 0 mark_read 0
> -- END PROG LOAD LOG --
> libbpf: failed to load program 'xdp_dispatcher'
> libbpf: failed to load object '/usr/local/lib/bpf//xdp-dispatcher.o'
> libxdp: Failed to load dispatcher: Invalid argument
> libxdp: Falling back to loading single prog without dispatcher
> Port 0: 50:51:A9:98:D4:26
> Checking link statuses...
> Done
> No commandline core given, start packet forwarding
> rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:07:32:6F:EF:0D
> 
>   rxonly packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=1
>   port 0: RX queue number: 1 Tx queue number: 1
>     Rx offloads=0x0 Tx offloads=0x0
>     RX queue: 0
>       RX desc=0 - RX free threshold=0
>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       RX Offloads=0x0
>     TX queue: 0
>       TX desc=0 - TX free threshold=0
>       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>       TX offloads=0x0 - TX RS bit threshold=0
>  ######################## NIC statistics for port 0
> ########################
>   RX-packets: 0          RX-missed: 7363489    RX-bytes:  0
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 0          TX-errors: 0          TX-bytes:  0
> 
>   Throughput (since last show)
>   Rx-pps:            0          Rx-bps:            0
>   Tx-pps:            0          Tx-bps:            0
> 
> ##########################################################
> ##################
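The same flag breakdown applies here, with the addition of the --vdev EAL
option, which creates a virtual AF_XDP port (net_af_xdp0) bound to the kernel
interface eth0. Unwrapped:

  dpdk-testpmd --no-telemetry --no-huge -l 0-1 -n 1 \
      --vdev net_af_xdp0,iface=eth0 -- \
      --port-topology=chained --forward-mode=rxonly \
      --total-num-mbufs=2048 --stats-period=1 \
      --eth-peer=0,00:07:32:6F:EF:0D

Depending on the DPDK version, the af_xdp vdev also accepts further arguments
such as start_queue, queue_count, shared_umem, busy_budget and xdp_prog; the
af_xdp PMD documentation for the release in use lists the exact set supported.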
> 
> Is this the expected output? What did I do wrong?

Hi,

This looks like the libxdp issue reported here: https://github.com/xdp-project/xdp-tools/issues/184
It looks like a fix was pushed to libxdp to address it. Could you try that out and see if it solves your issue?
Even without the fix you might find that the PMD still works, as libxdp falls back to a legacy mode for loading the program ("Falling back to loading single prog without dispatcher" in your log).
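
A quick way to confirm which path was taken on the ARM board is to check
whether an XDP program ended up attached to eth0 while testpmd is running.
Assuming iproute2, and optionally xdp-tools and bpftool, are installed there
(none of this is DPDK-specific), something like:

  # "xdp" or "xdpgeneric" plus a prog id should appear in the link details
  ip link show dev eth0

  # if xdp-tools is installed, this also shows whether the dispatcher or a
  # single program is loaded
  xdp-loader status eth0

  # bpftool reports the same information from the BPF side
  bpftool net show dev eth0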

Thanks,
Ciara

> 
> Thanks!
> 
> Ciao,
> Alessio


Thread overview: 3+ messages
2022-04-29 12:56 Alessio Igor Bogani
2022-04-29 13:50 ` Loftus, Ciara [this message]
2022-04-29 19:00   ` Alessio Igor Bogani
