DPDK patches and discussions
* Can DPDK AF_XDP PMD support macvlan driver in container?
@ 2024-10-23  6:07 Xiaohua Wang
  2024-10-23 16:09 ` Stephen Hemminger
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Xiaohua Wang @ 2024-10-23  6:07 UTC (permalink / raw)
  To: dev


Hi,

dpdk-testpmd with the AF_XDP PMD does not work on the p1p1 (macvlan) interface, but it works on the eth0 (veth) interface.

Is there a way to make the AF_XDP PMD work in XDP SKB mode? Or could an option to set "SKB mode" be added to the AF_XDP options (https://doc.dpdk.org/guides/nics/af_xdp.html)?
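
For context: the attach mode is selected through the xdp_flags field of struct xsk_socket_config before xsk_socket__create() is called, and the PMD currently hard-codes XDP_FLAGS_UPDATE_IF_NOEXIST there (see the xsk_configure() excerpt later in this thread). A minimal sketch of what forcing SKB mode looks like at that layer -- set_skb_mode() is a hypothetical helper, and the headers assume libbpf's deprecated xsk.h or libxdp's xsk.h:

#include <bpf/xsk.h>            /* <xdp/xsk.h> when building against libxdp */
#include <linux/if_link.h>      /* XDP_FLAGS_* */
#include <linux/if_xdp.h>       /* XDP_COPY */

static void set_skb_mode(struct xsk_socket_config *cfg)
{
        /* Request generic (SKB-mode) XDP instead of native mode. */
        cfg->xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_SKB_MODE;
        /* SKB mode has no zero-copy support, so bind in copy mode. */
        cfg->bind_flags |= XDP_COPY;
}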

===============can't work on p1p1 (macvlan) interface====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=p1p1 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
libxdp: Error attaching XDP program to ifindex 5: Operation not supported
libxdp: XDP mode not supported; try using SKB mode
xsk_configure(): Failed to create xsk socket.
eth_rx_queue_setup(): Failed to configure xdp socket
Fail to configure port 0 rx queues
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
EAL: Error - exiting with code: 1
Cause: Start ports failed
EAL: Already called cleanup

===============work on eth0 (veth) interface====================

5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge --no-pci --no-telemetry --vdev net_af_xdp,iface=eth0 -- --total-num-mbufs 8192
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
init_internals(): Zero copy between umem and mbuf enabled.
testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
libbpf: elf: skipping unrecognized data section(9) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
libbpf: prog 'xdp_pass': failed to load: -22
libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
libbpf: elf: skipping unrecognized data section(7) xdp_metadata
configure_preferred_busy_poll(): Busy polling budget set to: 64
Port 0: 42:5F:27:A2:63:BA
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

---------------------- Forward statistics for port 0 ----------------------
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
----------------------------------------------------------------------------

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 14 RX-dropped: 0 RX-total: 14
TX-packets: 14 TX-dropped: 0 TX-total: 14
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
Port 0 is closed
Done

Bye...
rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0

=================================append test environment====================

on workernode:
=================
worker-pool1-1:/home/test # nsenter -t 127962 -n
Directory: /home/test
Mon Oct 14 03:33:00 CEST 2024
worker-pool1-1:/home/test # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
worker-pool1-1:/home/test # ethtool -i eth0
driver: veth
version: 1.0
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test # ethtool -i eth1
Cannot get driver information: No such device
worker-pool1-1:/home/test # ethtool -i p1p1
driver: macvlan
version: 0.1
firmware-version:
expansion-rom-version:
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
worker-pool1-1:/home/test #
==============================
in container:
============
5p8j4:/tmp # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if108: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 2120 qdisc noqueue state UP group default
link/ether 42:5f:27:a2:63:ba brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.96.160/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::405f:27ff:fea2:63ba/64 scope link
valid_lft forever preferred_lft forever
5: p1p1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 7e:c5:53:73:95:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::7cc5:53ff:fe73:955e/64 scope link
valid_lft forever preferred_lft forever
=========================================================================



* Re: Can DPDK AF_XDP PMD support macvlan driver in container?
  2024-10-23  6:07 Can DPDK AF_XDP PMD support macvlan driver in container? Xiaohua Wang
@ 2024-10-23 16:09 ` Stephen Hemminger
  2024-10-24  1:57   ` Xiaohua Wang
  2024-10-24  2:12 ` Stephen Hemminger
  2024-10-24  9:16 ` Maryam Tahhan
  2 siblings, 1 reply; 5+ messages in thread
From: Stephen Hemminger @ 2024-10-23 16:09 UTC (permalink / raw)
  To: Xiaohua Wang; +Cc: dev

On Wed, 23 Oct 2024 06:07:22 +0000
Xiaohua Wang <xiaohua.wang@ericsson.com> wrote:

> Hi,
> 
> dpdk-testpmd with the AF_XDP PMD does not work on the p1p1 (macvlan) interface, but it works on the eth0 (veth) interface.
> 
> Is there a way to make the AF_XDP PMD work in XDP SKB mode? Or could an option to set "SKB mode" be added to the AF_XDP options (https://doc.dpdk.org/guides/nics/af_xdp.html)?

This may be a kernel problem rather than an issue with the AF_XDP PMD itself.



* RE: Can DPDK AF_XDP PMD support macvlan driver in container?
  2024-10-23 16:09 ` Stephen Hemminger
@ 2024-10-24  1:57   ` Xiaohua Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Xiaohua Wang @ 2024-10-24  1:57 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev


How can I confirm that this is a kernel problem?

I added some code in xsk_configure() to set the XDP attach mode to SKB mode, and with that change testpmd works fine on the p1p1 interface in this test environment.

So, could an option to set the XDP attach mode be added to the AF_XDP options in the next DPDK release?
https://doc.dpdk.org/guides/nics/af_xdp.html
BRs//Xiaohua
========================================================================
static int
xsk_configure(struct pmd_internals *internals, struct pkt_rx_queue *rxq,
              int ring_size)
{
        struct xsk_socket_config cfg;
        struct pkt_tx_queue *txq = rxq->pair;
        int ret = 0;
        int reserve_size = ETH_AF_XDP_DFLT_NUM_DESCS;
        struct rte_mbuf *fq_bufs[reserve_size];
        bool reserve_before;

        rxq->umem = xdp_umem_configure(internals, rxq);
        if (rxq->umem == NULL)
                return -ENOMEM;
        txq->umem = rxq->umem;
        reserve_before = __atomic_load_n(&rxq->umem->refcnt, __ATOMIC_ACQUIRE) <= 1;

#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
        ret = rte_pktmbuf_alloc_bulk(rxq->umem->mb_pool, fq_bufs, reserve_size);
        if (ret) {
                AF_XDP_LOG(DEBUG, "Failed to get enough buffers for fq.\n");
                goto out_umem;
        }
#endif

        /* reserve fill queue of queues not (yet) sharing UMEM */
        if (reserve_before) {
                ret = reserve_fill_queue(rxq->umem, reserve_size, fq_bufs, &rxq->fq);
                if (ret) {
                        AF_XDP_LOG(ERR, "Failed to reserve fill queue.\n");
                        goto out_umem;
                }
        }

        cfg.rx_size = ring_size;
        cfg.tx_size = ring_size;
        cfg.libbpf_flags = 0;
        cfg.xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST;
        cfg.bind_flags = 0;

        /* Force AF_XDP socket into copy mode when users want it */
        if (internals->force_copy)
                cfg.bind_flags |= XDP_COPY;

        /* ================= newly added code ================= */

        const char *env_xdp_attach_mode;

        /* XDP_ATTACH_MODE is the macro naming the environment variable;
         * strcmp() needs <string.h>. */
        env_xdp_attach_mode = getenv(XDP_ATTACH_MODE);
        if (env_xdp_attach_mode) {
                AF_XDP_LOG(INFO, "XDP attach mode environment variable is %s.\n",
                           env_xdp_attach_mode);
                if (strcmp(env_xdp_attach_mode, "1") == 0)
                        cfg.xdp_flags |= XDP_FLAGS_SKB_MODE;
                else if (strcmp(env_xdp_attach_mode, "2") == 0)
                        cfg.xdp_flags |= XDP_FLAGS_DRV_MODE;
                else if (strcmp(env_xdp_attach_mode, "3") == 0)
                        cfg.xdp_flags |= XDP_FLAGS_HW_MODE;
                else
                        AF_XDP_LOG(INFO, "XDP attach mode environment variable must be 1 (SKB), 2 (DRV) or 3 (HW).\n");
        } else {
                AF_XDP_LOG(INFO, "No XDP attach mode environment variable set.\n");
        }
        /* ================= newly added code ================= */
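
For an upstreamable option, a devarg in the style of the PMD's existing force_copy option may fit better than an environment variable. A rough sketch of how that could look inside rte_eth_af_xdp.c -- the option name "xdp_mode" and the handler parse_xdp_mode_arg() are hypothetical, not an existing DPDK API:

#define ETH_AF_XDP_XDP_MODE_ARG "xdp_mode"

/* rte_kvargs handler: map a devarg string to an XDP_FLAGS_* value. */
static int
parse_xdp_mode_arg(const char *key __rte_unused, const char *value,
                   void *extra_args)
{
        uint32_t *xdp_flags = extra_args;

        if (strcmp(value, "skb") == 0)
                *xdp_flags |= XDP_FLAGS_SKB_MODE;
        else if (strcmp(value, "drv") == 0)
                *xdp_flags |= XDP_FLAGS_DRV_MODE;
        else if (strcmp(value, "hw") == 0)
                *xdp_flags |= XDP_FLAGS_HW_MODE;
        else
                return -EINVAL;
        return 0;
}

It would be hooked in next to the other options with rte_kvargs_process(kvlist, ETH_AF_XDP_XDP_MODE_ARG, &parse_xdp_mode_arg, &xdp_flags), and used as --vdev net_af_xdp,iface=p1p1,xdp_mode=skb.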

-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Thursday, October 24, 2024 12:09 AM
To: Xiaohua Wang <xiaohua.wang@ericsson.com>
Cc: dev@dpdk.org
Subject: Re: Can DPDK AF_XDP PMD support macvlan driver in container?

On Wed, 23 Oct 2024 06:07:22 +0000
Xiaohua Wang <xiaohua.wang@ericsson.com> wrote:

> Hi,
>
> dpdk-testpmd with the AF_XDP PMD does not work on the p1p1 (macvlan) interface, but it works on the eth0 (veth) interface.
>
> Is there a way to make the AF_XDP PMD work in XDP SKB mode? Or could an option to set "SKB mode" be added to the AF_XDP options (https://doc.dpdk.org/guides/nics/af_xdp.html)?

Maybe a kernel problem not an issue directly with AF_XDP PMD.



* Re: Can DPDK AF_XDP PMD support macvlan driver in container?
  2024-10-23  6:07 Can DPDK AF_XDP PMD support macvlan driver in container? Xiaohua Wang
  2024-10-23 16:09 ` Stephen Hemminger
@ 2024-10-24  2:12 ` Stephen Hemminger
  2024-10-24  9:16 ` Maryam Tahhan
  2 siblings, 0 replies; 5+ messages in thread
From: Stephen Hemminger @ 2024-10-24  2:12 UTC (permalink / raw)
  To: Xiaohua Wang; +Cc: dev

On Wed, 23 Oct 2024 06:07:22 +0000
Xiaohua Wang <xiaohua.wang@ericsson.com> wrote:

> eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
> libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
> libbpf: elf: skipping unrecognized data section(9) xdp_metadata
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
> libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
> libbpf: prog 'xdp_pass': failed to load: -22

Is xdp_pass your own BPF program?

It may be that the kernel device driver (in this case macvlan) needs to
support XDP natively, and macvlan does not; see
macvlan.c:macvlan_netdev_ops, which has no ndo_bpf hook.
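
One way to confirm this independently of DPDK is to load a trivial XDP program and try attaching it in native mode, then generic (SKB) mode. A sketch, assuming libbpf >= 1.0 and CAP_NET_ADMIN (build assumption: gcc xdp_probe.c -lbpf -o xdp_probe):

/* xdp_probe.c: report which XDP attach modes an interface accepts. */
#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <linux/bpf.h>
#include <linux/if_link.h>
#include <net/if.h>

int main(int argc, char **argv)
{
        /* Hand-assembled "return XDP_PASS" program: r0 = XDP_PASS; exit. */
        struct bpf_insn insns[] = {
                { .code = BPF_ALU64 | BPF_MOV | BPF_K,
                  .dst_reg = BPF_REG_0, .imm = XDP_PASS },
                { .code = BPF_JMP | BPF_EXIT },
        };
        int ifindex, prog_fd;

        if (argc != 2 || (ifindex = if_nametoindex(argv[1])) == 0) {
                fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
                return 1;
        }
        prog_fd = bpf_prog_load(BPF_PROG_TYPE_XDP, "xdp_pass", "GPL",
                                insns, 2, NULL);
        if (prog_fd < 0) {
                perror("bpf_prog_load");
                return 1;
        }
        if (bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL) == 0) {
                printf("native XDP supported\n");
                bpf_xdp_detach(ifindex, XDP_FLAGS_DRV_MODE, NULL);
        } else if (bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_SKB_MODE, NULL) == 0) {
                printf("generic (SKB-mode) XDP only\n");
                bpf_xdp_detach(ifindex, XDP_FLAGS_SKB_MODE, NULL);
        } else {
                perror("bpf_xdp_attach");
        }
        return 0;
}

On macvlan the native attach would be expected to fail with EOPNOTSUPP while the SKB-mode attach succeeds.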


* Re: Can DPDK AF_XDP PMD support macvlan driver in container?
  2024-10-23  6:07 Can DPDK AF_XDP PMD support macvlan driver in container? Xiaohua Wang
  2024-10-23 16:09 ` Stephen Hemminger
  2024-10-24  2:12 ` Stephen Hemminger
@ 2024-10-24  9:16 ` Maryam Tahhan
  2 siblings, 0 replies; 5+ messages in thread
From: Maryam Tahhan @ 2024-10-24  9:16 UTC (permalink / raw)
  To: dev


On 23/10/2024 07:07, Xiaohua Wang wrote:
>
> Hi,
>
> dpdk-testpmd with the AF_XDP PMD does not work on the p1p1 (macvlan)
> interface, but it works on the eth0 (veth) interface.
>
> Is there a way to make the AF_XDP PMD work in XDP SKB mode? Or could
> an option to set "SKB mode" be added to the AF_XDP options
> <https://doc.dpdk.org/guides/nics/af_xdp.html>?
>

[MT] I believe this is what the `force_copy=1` option does. But I'm not
sure this will fix your issue, as the log below shows that SKB mode is
already attempted. It could be a limitation of the kernel driver.


> ===============can't work on p1p1 (macvlan) interface====================
>
> 5p8j4:/tmp # ./dpdk-testpmd --log-level=pmd.net.af_xdp:debug --no-huge 
> --no-pci --no-telemetry --vdev net_af_xdp,iface=p1p1 -- 
> --total-num-mbufs 8192
>
> EAL: Detected CPU lcores: 40
>
> EAL: Detected NUMA nodes: 1
>
> EAL: Static memory layout is selected, amount of reserved memory can 
> be adjusted with -m or --socket-mem
>
> EAL: Detected static linkage of DPDK
>
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>
> EAL: Selected IOVA mode 'VA'
>
> EAL: VFIO support initialized
>
> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp
>
> init_internals(): Zero copy between umem and mbuf enabled.
>
> testpmd: create a new mbuf pool <mb_pool_0>: n=8192, size=2176, socket=0
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last 
> port will pair with itself.
>
> Configuring Port 0 (socket 0)
>
> eth_rx_queue_setup(): Set up rx queue, rx queue id: 0, xsk queue id: 0
>
> libbpf: elf: skipping unrecognized data section(8) .xdp_run_config
>
> libbpf: elf: skipping unrecognized data section(9) xdp_metadata
>
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
>
> libbpf: prog 'xdp_pass': BPF program load failed: Invalid argument
>
> libbpf: prog 'xdp_pass': failed to load: -22
>
> libbpf: failed to load object '/usr/lib64/bpf/xdp-dispatcher.o'
>
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
>
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
>
> libbpf: elf: skipping unrecognized data section(7) xdp_metadata
>
> libbpf: Kernel error message: Underlying driver does not support XDP 
> in native mode
>
> libxdp: Error attaching XDP program to ifindex 5: Operation not supported
>
> libxdp: XDP mode not supported; try using SKB mode
>
[MT] Here it attempts SKB mode, then fails

> xsk_configure(): Failed to create xsk socket.
>
> eth_rx_queue_setup(): Failed to configure xdp socket
>
> Fail to configure port 0 rx queues
>
> rte_pmd_af_xdp_remove(): Removing AF_XDP ethdev on numa socket 0
>
> eth_dev_close(): Closing AF_XDP ethdev on numa socket 0
>
> Port 0 is closed
>
> EAL: Error - exiting with code: 1
>
> Cause: Start ports failed
>
> EAL: Already called cleanup
>



Thread overview: 5 messages
2024-10-23  6:07 Can DPDK AF_XDP PMD support macvlan driver in container? Xiaohua Wang
2024-10-23 16:09 ` Stephen Hemminger
2024-10-24  1:57   ` Xiaohua Wang
2024-10-24  2:12 ` Stephen Hemminger
2024-10-24  9:16 ` Maryam Tahhan