DPDK patches and discussions
* Can DPDK AF_XDP PMD support macvlan driver in container?
@ 2024-10-23  6:07 Xiaohua Wang
  2024-10-23 16:09 ` Stephen Hemminger
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Xiaohua Wang @ 2024-10-23  6:07 UTC (permalink / raw)
  To: dev


Hi,

dpdk-testpmd with the AF_XDP PMD does not work on the p1p1 (macvlan) interface, but works on the eth0 (veth) interface.

Also, is there a way to make the AF_XDP PMD work in XDP SKB (generic) mode? Or could an option to select "SKB mode" be added to the AF_XDP options (https://doc.dpdk.org/guides/nics/af_xdp.html)?
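
For reference, outside of DPDK an XDP program can be forced into generic/SKB mode with iproute2 or xdp-tools. A minimal sketch (xdp_pass.o here is a hypothetical compiled XDP program with its code in an "xdp" ELF section):

# attach in generic (SKB) mode; this works even when the driver
# lacks native XDP support
ip link set dev p1p1 xdpgeneric obj xdp_pass.o sec xdp

# the same with xdp-tools' xdp-loader, selecting SKB mode explicitly
xdp-loader load --mode skb p1p1 xdp_pass.o

# detach
ip link set dev p1p1 xdpgeneric off

But I don't see a way to ask the AF_XDP PMD to do the same when it creates the xsk socket.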

=============== does not work on p1p1 (macvlan) interface ====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=p1p1 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
libxdp: Error attaching XDP program to ifindex 5: Operation not supported
libxdp: XDP mode not supported; try using SKB mode
xsk_configure(): Failed to create xsk socket.
eth_rx_queue_setup(): Failed to configure xdp socket
Fail to configure port 0 rx queues
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
EAL: Error - exiting with code: 1
Cause: Start ports failed
EAL: Already called cleanup
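
The decisive line above seems to be "Underlying driver does not support XDP in native mode": the kernel's macvlan driver does not implement native XDP, so the attach fails before the xsk socket can be created. This can be reproduced outside of DPDK (again with a hypothetical compiled xdp_pass.o):

# native-mode attach fails on macvlan, matching the libxdp error above
ip link set dev p1p1 xdpdrv obj xdp_pass.o sec xdp

# a generic (SKB) mode attach on the same interface succeeds
ip link set dev p1p1 xdpgeneric obj xdp_pass.o sec xdp
ip link set dev p1p1 xdpgeneric off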

=============== works on eth0 (veth) interface ====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=eth0 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
configure_preferred_busy_poll(): Busy polling budget set to: 64
Port 0: 42:5F:27:A2:63:BA
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
Done

Bye...
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0
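
Note that the same xdp-dispatcher.o load failure (-22) also appears in this working run, so it does not seem to be the fatal part; veth supports native XDP, and libxdp apparently still manages to attach the default program directly. While testpmd is running, the attachment can be checked from another shell (a sketch):

# the xdp mode and program id appear in the link output while the
# AF_XDP socket is active
ip link show dev eth0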

================================= test environment ====================

On the worker node:
=================
worker-pool1-1:/home/test # nsenter -t 127962 -n
Directory: /home/test
Mon Oct 14 03:33:00 CEST 2024
worker-pool1-1:/home/test # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
worker-pool1-1:/home/test # ethtool -i eth0
driver: veth
version: 1.0
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test # ethtool -i eth1
Cannot get driver information: No such device
worker-pool1-1:/home/test # ethtool -i p1p1
driver: macvlan
version: 0.1
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test #
==============================
In the container:
============
5p8j4:/tmp # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
=========================================================================
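
For completeness, the p1p1 macvlan was created on the worker node roughly along these lines (a rough reconstruction, not our actual provisioning; the parent interface name ens1f0 is illustrative):

# create a macvlan child on the physical uplink and move it into
# the container's network namespace (pid 127962 from the nsenter above)
ip link add link ens1f0 name p1p1 type macvlan mode bridge
ip link set p1p1 netns 127962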


Thread overview: 5+ messages
2024-10-23  6:07 Can DPDK AF_XDP PMD support macvlan driver in container? Xiaohua Wang
2024-10-23 16:09 ` Stephen Hemminger
2024-10-24  1:57   ` Xiaohua Wang
2024-10-24  2:12 ` Stephen Hemminger
2024-10-24  9:16 ` Maryam Tahhan
