DPDK usage discussions
From: Ye Xiaolong <xiaolong.ye@intel.com>
To: Jags N <jagsnn@gmail.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] only one vdev net_af_xdp being recognized
Date: Thu, 11 Jul 2019 17:15:02 +0800
Message-ID: <20190711091502.GA39155@intel.com>
In-Reply-To: <CALQkTo16A7DuLFLm63vU2Vq8bPbSr0G3qerWRgNVZvxVgSXY6Q@mail.gmail.com>

Hi,

On 07/11, Jags N wrote:
>Hi Xiaolong,
>
>Thanks much! That works.
>
>I am now facing - xsk_configure(): Failed to create xsk socket.
>
>Port 0 is fine, Port 1 is showing the problem.

Has port 1 been brought up? 
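
If not, something like this should bring the backing netdev up first (a
minimal sketch, assuming enp0s10 is the interface behind port 1):

  # af_xdp requires the underlying netdev to be administratively up
  ip link set dev enp0s10 up
  # confirm the state before re-running testpmd
  ip link show dev enp0s10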
Another possibility is that a previous run used port 1 and exited without
properly cleaning up its xdp program. You can verify this with `./bpftool map -p`
and check whether a leftover xskmap exists (bpftool can be built from
tools/bpf/bpftool in the kernel source tree). If so, reboot your system and try again.
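
For example (a sketch of the cleanup path; it assumes you are in the kernel
source tree and that enp0s10 backs the failing port):

  # build bpftool from the kernel tree
  cd tools/bpf/bpftool && make
  # list loaded BPF maps and look for a leftover map of type xskmap
  ./bpftool map
  # detaching the stale XDP program may save you the reboot
  ip link set dev enp0s10 xdp off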

Thanks,
Xiaolong

>
>I am checking "tools/lib/bpf/xsk.c:xsk_socket__create()" further on this.
>Meanwhile, just asking if there are any obvious reasons, or if I am missing anything?
>
>[root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
> net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10 --iova-mode=va
>EAL: Detected 3 lcore(s)
>EAL: Detected 1 NUMA nodes
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: Debug dataplane logs available - lower performance
>EAL: Probing VFIO support...
>EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using unreliable
>clock cycles !
>EAL: PCI device 0000:00:03.0 on NUMA socket -1
>EAL:   Invalid NUMA socket, default to 0
>EAL:   probe driver: 8086:100e net_e1000_em
>EAL: PCI device 0000:00:08.0 on NUMA socket -1
>EAL:   Invalid NUMA socket, default to 0
>EAL:   probe driver: 8086:100e net_e1000_em
>EAL: PCI device 0000:00:09.0 on NUMA socket -1
>EAL:   Invalid NUMA socket, default to 0
>EAL:   probe driver: 8086:100f net_e1000_em
>EAL: PCI device 0000:00:0a.0 on NUMA socket -1
>EAL:   Invalid NUMA socket, default to 0
>EAL:   probe driver: 8086:100f net_e1000_em
>testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
>socket=0
>testpmd: preferred mempool ops selected: ring_mp_mc
>Configuring Port 0 (socket 0)
>Port 0: 08:00:27:68:5B:66
>Configuring Port 1 (socket 0)
>xsk_configure(): Failed to create xsk socket.
>eth_rx_queue_setup(): Failed to configure xdp socket
>Fail to configure port 1 rx queues
>EAL: Error - exiting with code: 1
>  Cause: Start ports failed
>[root@localhost app]#
>
>Regards,
>Jags
>
>On Wed, Jul 10, 2019 at 7:47 AM Ye Xiaolong <xiaolong.ye@intel.com> wrote:
>
>> Hi,
>>
>> On 07/10, Jags N wrote:
>> >Hi,
>> >
>> >Continuing on my previous email,
>> >
>> >https://doc.dpdk.org/guides/rel_notes/release_19_08.html release note says
>> >- Added multi-queue support to allow one af_xdp vdev with multiple netdev
>> >queues
>> >
>> >Does it in any way imply only one af_xdp vdev is supported as of now, and
>> >that more than one af_xdp vdev may not be recognized?
>>
>> Multiple af_xdp vdevs are supported.
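>>
>> The multi-queue note just means one vdev can now drive several netdev
>> queues. For instance (a sketch; start_queue and queue_count are the
>> devargs I believe 19.08 uses for this):
>>
>>   --vdev net_af_xdp0,iface=enp0s9,start_queue=0,queue_count=1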
>>
>> >
>> >Regards,
>> >Jags
>> >
>> >On Mon, Jul 8, 2019 at 4:48 PM Jags N <jagsnn@gmail.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> I am trying to understand net_af_xdp, and find that dpdk is recognizing
>> >> only one vdev net_af_xdp, hence only one port (port 0) is getting
>> >> configured. Requesting help to know if I am missing any information on
>> >> net_af_xdp support in dpdk, or if I have provided the EAL parameters
>> >> wrong. Kindly advise.
>> >>
>> >> I am running Fedora 30.1-2 as Guest VM on Virtual Box VM Manager with
>> >> Linux Kernel 5.1.0, and dpdk-19.05. The interfaces are emulated ones
>> >> mentioned below,
>> >>
>> >> lspci output ...
>> >> 00:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> >> Controller (Copper) (rev 02)
>> >> 00:0a.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> >> Controller (Copper) (rev 02)
>> >>
>> >> DPDK testpmd is executed as mentioned below,
>> >>
>> >> [root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
>> >> net_af_xdp,iface=enp0s9  --vdev net_af_xdp,iface=enp0s10 --iova-mode=va
>> >> -- --portmask=0x3
>>
>> Here you need to use
>>
>> --vdev net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10
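>>
>> so that each vdev has a unique name. The full command line then becomes
>> (a sketch that just reuses your original arguments):
>>
>>   ./testpmd -c 0x3 -n 4 --vdev net_af_xdp0,iface=enp0s9 \
>>     --vdev net_af_xdp1,iface=enp0s10 --iova-mode=va -- --portmask=0x3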
>>
>> Thanks,
>> Xiaolong
>>
>> >> EAL: Detected 3 lcore(s)
>> >> EAL: Detected 1 NUMA nodes
>> >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> >> EAL: Probing VFIO support...
>> >> EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using unreliable
>> >> clock cycles !
>> >> EAL: PCI device 0000:00:03.0 on NUMA socket -1
>> >> EAL:   Invalid NUMA socket, default to 0
>> >> EAL:   probe driver: 8086:100e net_e1000_em
>> >> EAL: PCI device 0000:00:08.0 on NUMA socket -1
>> >> EAL:   Invalid NUMA socket, default to 0
>> >> EAL:   probe driver: 8086:100e net_e1000_em
>> >> EAL: PCI device 0000:00:09.0 on NUMA socket -1
>> >> EAL:   Invalid NUMA socket, default to 0
>> >> EAL:   probe driver: 8086:100f net_e1000_em
>> >> EAL: PCI device 0000:00:0a.0 on NUMA socket -1
>> >> EAL:   Invalid NUMA socket, default to 0
>> >> EAL:   probe driver: 8086:100f net_e1000_em
>> >> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
>> >> socket=0
>> >> testpmd: preferred mempool ops selected: ring_mp_mc
>> >>
>> >> Warning! port-topology=paired and odd forward ports number, the last port
>> >> will pair with itself.
>> >>
>> >> Configuring Port 0 (socket 0)
>> >> Port 0: 08:00:27:68:5B:66
>> >> Checking link statuses...
>> >> Done
>> >> No commandline core given, start packet forwarding
>> >> io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
>> >> enabled, MP allocation mode: native
>> >> Logical Core 1 (socket 0) forwards packets on 1 streams:
>> >>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>> >>
>> >>   io packet forwarding packets/burst=32
>> >>   nb forwarding cores=1 - nb forwarding ports=1
>> >>   port 0: RX queue number: 1 Tx queue number: 1
>> >>     Rx offloads=0x0 Tx offloads=0x0
>> >>     RX queue: 0
>> >>       RX desc=0 - RX free threshold=0
>> >>       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>> >>       RX Offloads=0x0
>> >>     TX queue: 0
>> >>       TX desc=0 - TX free threshold=0
>> >>       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>> >>       TX offloads=0x0 - TX RS bit threshold=0
>> >> Press enter to exit
>> >>
>> >> Telling cores to stop...
>> >> Waiting for lcores to finish...
>> >>
>> >>   ---------------------- Forward statistics for port 0 ----------------------
>> >>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>> >>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>> >>
>> >>
>> >> ----------------------------------------------------------------------------
>> >>
>> >>   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
>> >>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>> >>   TX-packets: 0              TX-dropped: 0             TX-total: 0
>> >>
>> >>
>> >> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> >>
>> >> Done.
>> >>
>> >> Stopping port 0...
>> >> Stopping ports...
>> >> Done
>> >>
>> >> Shutting down port 0...
>> >> Closing ports...
>> >> Done
>> >>
>> >> Bye...
>> >>
>> >> Regards,
>> >> Jags
>> >>
>> >>
>>
