Date: Wed, 10 Jul 2019 16:59:26 +0800
From: Ye Xiaolong
To: Jags N
Cc: users@dpdk.org
Message-ID: <20190710085926.GA24200@intel.com>
Subject: Re: [dpdk-users] only one vdev net_af_xdp being recognized

Hi,

On 07/10, Jags N wrote:
>Hi,
>
>Continuing on my previous email,
>
>https://doc.dpdk.org/guides/rel_notes/release_19_08.html release notes say
>- Added multi-queue support to allow one af_xdp vdev with multiple netdev
>queues
>
>Does it in any way imply that only one af_xdp vdev is supported as of now,
>and that more than one af_xdp vdev may not be recognized?

Multiple af_xdp vdevs are supported.

>
>Regards,
>Jags
>
>On Mon, Jul 8, 2019 at 4:48 PM Jags N wrote:
>
>> Hi,
>>
>> I am trying to understand net_af_xdp, and I find that DPDK recognizes
>> only one net_af_xdp vdev, hence only one port (port 0) is getting
>> configured. Requesting help to know whether I am missing any information
>> on net_af_xdp support in DPDK, or whether I have provided the wrong EAL
>> parameters. Kindly advise.
>>
>> I am running Fedora 30.1-2 as a guest VM on VirtualBox VM Manager with
>> Linux kernel 5.1.0 and dpdk-19.05. The interfaces are the emulated ones
>> mentioned below,
>>
>> lspci output ...
>> 00:09.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> Controller (Copper) (rev 02)
>> 00:0a.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet
>> Controller (Copper) (rev 02)
>>
>> DPDK testpmd is executed as mentioned below,
>>
>> [root@localhost app]# ./testpmd -c 0x3 -n 4 --vdev
>> net_af_xdp,iface=enp0s9 --vdev net_af_xdp,iface=enp0s10 --iova-mode=va --
>> --portmask=0x3

Here you need to use

  --vdev net_af_xdp0,iface=enp0s9 --vdev net_af_xdp1,iface=enp0s10

i.e. give each vdev a unique instance name so that both are registered
(a full corrected invocation is sketched at the end of this mail).

Thanks,
Xiaolong

>> EAL: Detected 3 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Probing VFIO support...
>> EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using
>> unreliable clock cycles !
>> EAL: PCI device 0000:00:03.0 on NUMA socket -1
>> EAL:   Invalid NUMA socket, default to 0
>> EAL:   probe driver: 8086:100e net_e1000_em
>> EAL: PCI device 0000:00:08.0 on NUMA socket -1
>> EAL:   Invalid NUMA socket, default to 0
>> EAL:   probe driver: 8086:100e net_e1000_em
>> EAL: PCI device 0000:00:09.0 on NUMA socket -1
>> EAL:   Invalid NUMA socket, default to 0
>> EAL:   probe driver: 8086:100f net_e1000_em
>> EAL: PCI device 0000:00:0a.0 on NUMA socket -1
>> EAL:   Invalid NUMA socket, default to 0
>> EAL:   probe driver: 8086:100f net_e1000_em
>> testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>>
>> Warning! port-topology=paired and odd forward ports number, the last
>> port will pair with itself.
>>
>> Configuring Port 0 (socket 0)
>> Port 0: 08:00:27:68:5B:66
>> Checking link statuses...
>> Done
>> No commandline core given, start packet forwarding
>> io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
>> enabled, MP allocation mode: native
>> Logical Core 1 (socket 0) forwards packets on 1 streams:
>>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>
>>   io packet forwarding packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=1
>>   port 0: RX queue number: 1 Tx queue number: 1
>>     Rx offloads=0x0 Tx offloads=0x0
>>     RX queue: 0
>>       RX desc=0 - RX free threshold=0
>>       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       RX Offloads=0x0
>>     TX queue: 0
>>       TX desc=0 - TX free threshold=0
>>       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       TX offloads=0x0 - TX RS bit threshold=0
>> Press enter to exit
>>
>> Telling cores to stop...
>> Waiting for lcores to finish...
>>
>>   ---------------------- Forward statistics for port 0 ----------------------
>>   RX-packets: 0    RX-dropped: 0    RX-total: 0
>>   TX-packets: 0    TX-dropped: 0    TX-total: 0
>>   ----------------------------------------------------------------------------
>>
>>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>>   RX-packets: 0    RX-dropped: 0    RX-total: 0
>>   TX-packets: 0    TX-dropped: 0    TX-total: 0
>>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>
>> Done.
>>
>> Stopping port 0...
>> Stopping ports...
>> Done
>>
>> Shutting down port 0...
>> Closing ports...
>> Done
>>
>> Bye...
>>
>> Regards,
>> Jags
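
P.S. A complete corrected command line, for reference. This is only a
sketch under the assumptions from your mail (same core mask, memory
channels, portmask and interface names enp0s9/enp0s10); the one required
change is that each --vdev gets a unique instance name:

  # two AF_XDP vdevs: unique instance names net_af_xdp0 / net_af_xdp1,
  # each bound to one kernel netdev via iface=
  ./testpmd -c 0x3 -n 4 \
      --vdev net_af_xdp0,iface=enp0s9 \
      --vdev net_af_xdp1,iface=enp0s10 \
      --iova-mode=va \
      -- --portmask=0x3

With both vdevs registered you should see two ports configured
("Configuring Port 0" and "Configuring Port 1") instead of one. The
multi-queue item in the 19.08 release notes is a separate feature: it
lets a single vdev drive several queues of one netdev (via the
queue_count devarg, if I recall the parameter name correctly); it does
not limit the number of vdevs.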