From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
 by dpdk.org (Postfix) with ESMTP id 214355A4A
 for ; Thu, 25 Apr 2019 07:50:00 +0200 (CEST)
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 24 Apr 2019 22:50:00 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.60,392,1549958400"; d="scan'208";a="143436541"
Received: from yexl-server.sh.intel.com (HELO localhost) ([10.67.110.206])
 by fmsmga008.fm.intel.com with ESMTP; 24 Apr 2019 22:49:59 -0700
Date: Thu, 25 Apr 2019 13:43:45 +0800
From: Ye Xiaolong
To: Markus Theil
Cc: dev@dpdk.org
Message-ID: <20190425054345.GA90932@intel.com>
References: <37073834d0b9a9f5a6e9f39bac3adc5eb29779ab.camel@debian.org>
 <5bc49c51-04f4-6f73-889d-d3c0ff749784@intel.com>
 <20190403142217.GA36385@intel.com>
 <6f660657-d488-1121-126a-a38c9744b1eb@intel.com>
 <20190403155741.GE36385@intel.com>
 <34359a7b-f2c8-81f2-8a49-f1238e8dfbf0@tu-ilmenau.de>
 <20190418010530.GA5184@intel.com>
 <01f1837a-8acf-7006-6ff2-d9a4d88015dc@tu-ilmenau.de>
 <20190424063537.GA78858@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To:
User-Agent: Mutt/1.9.4 (2018-02-28)
Subject: Re: [dpdk-dev] [BUG] net/af_xdp: Current code can only create one af_xdp device
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Thu, 25 Apr 2019 05:50:01 -0000

Hi, Markus

On 04/24, Markus Theil wrote:
>Hi Xiaolong,
>
>I also tested with i40e devices, with the same result.
>
>./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>EAL: Detected 16 lcore(s)
>EAL: Detected 1 NUMA nodes
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: No free hugepages reported in hugepages-2048kB
>EAL: No available hugepages reported in hugepages-2048kB
>EAL: Probing VFIO support...
>rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>testpmd: create a new mbuf pool : n=267456,
>size=2176, socket=0
>testpmd: preferred mempool ops selected: ring_mp_mc
>Configuring Port 0 (socket 0)
>Port 0: 3C:FD:FE:A3:E7:30
>Configuring Port 1 (socket 0)
>xsk_configure(): Failed to create xsk socket. (-1)
>eth_rx_queue_setup(): Failed to configure xdp socket
>Fail to configure port 1 rx queues
>EAL: Error - exiting with code: 1
>  Cause: Start ports failed

The (-1) error should typically refer to "Operation not permitted". Is there any special configuration on your interfaces, and were you running it with root privileges?

And out of curiosity, why did you get (-1) in your log? Did you add some private patch to print the errno?

Thanks,
Xiaolong
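
For reference, libbpf's xsk_socket__create() returns 0 on success and a negative errno on failure, so the codes in these logs decode directly: -1 is EPERM ("Operation not permitted") and the -16 seen below is EBUSY ("Device or resource busy"). A minimal sketch of decoding that return value, assuming only the public libbpf xsk API; the wrapper create_xsk_verbose() is hypothetical, not the PMD's actual code:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <bpf/xsk.h>

/* Hypothetical wrapper: xsk_socket__create() returns a negative errno,
 * so decode it with strerror() before logging. -1 -> "Operation not
 * permitted" (EPERM), -16 -> "Device or resource busy" (EBUSY). */
static int
create_xsk_verbose(struct xsk_socket **xsk, const char *ifname, __u32 queue,
		   struct xsk_umem *umem, struct xsk_ring_cons *rx,
		   struct xsk_ring_prod *tx, const struct xsk_socket_config *cfg)
{
	int ret = xsk_socket__create(xsk, ifname, queue, umem, rx, tx, cfg);

	if (ret < 0)
		fprintf(stderr, "xsk_socket__create(%s, q%u) failed: %s (%d)\n",
			ifname, queue, strerror(-ret), ret);
	return ret;
}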
>
>If I execute the same call again, I get error -16 already on the first port:
>
>./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>EAL: Detected 16 lcore(s)
>EAL: Detected 1 NUMA nodes
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: No free hugepages reported in hugepages-2048kB
>EAL: No available hugepages reported in hugepages-2048kB
>EAL: Probing VFIO support...
>rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>testpmd: create a new mbuf pool : n=267456,
>size=2176, socket=0
>testpmd: preferred mempool ops selected: ring_mp_mc
>Configuring Port 0 (socket 0)
>xsk_configure(): Failed to create xsk socket. (-16)
>eth_rx_queue_setup(): Failed to configure xdp socket
>Fail to configure port 0 rx queues
>EAL: Error - exiting with code: 1
>  Cause: Start ports failed
>
>Software versions/commits/infos:
>
>- Linux 5.1-rc6
>- DPDK 7f251bcf22c5729792f9243480af1b3c072876a5 (19.05-rc2)
>- libbpf from https://github.com/libbpf/libbpf
>(910c475f09e5c269f441d7496c27dace30dc2335)
>- DPDK and libbpf built with meson
>
>Best regards,
>Markus
>
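
One plausible explanation for the -16 (EBUSY) on the immediate re-run, assuming nothing else holds the interface: the XDP program attached by the crashed first run was never detached, so the queue is still busy. A hypothetical cleanup helper built on libbpf's bpf_set_link_xdp_fd() is sketched below; "ip link set dev <iface> xdp off" does the same from the shell.

#include <net/if.h>
#include <stdio.h>
#include <bpf/libbpf.h>

/* Hypothetical helper: detach whatever XDP program is still attached
 * to the interface (e.g. left over from a crashed run). Passing
 * fd = -1 to bpf_set_link_xdp_fd() removes the current program. */
static int
clear_xdp_prog(const char *ifname)
{
	int ifindex = if_nametoindex(ifname);

	if (ifindex == 0) {
		perror("if_nametoindex");
		return -1;
	}
	return bpf_set_link_xdp_fd(ifindex, -1, 0);
}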
>On 4/24/19 8:35 AM, Ye Xiaolong wrote:
>> Hi, Markus
>>
>> On 04/23, Markus Theil wrote:
>>> Hi Xiaolong,
>>>
>>> I tested your commit "net/af_xdp: fix creating multiple instance" on the
>>> current master branch. It does not work for me in the following minimal
>>> test setting:
>>>
>>> 1) allocate 2x 1GB huge pages for DPDK
>>>
>>> 2) ip link add p1 type veth peer name p2
>>>
>>> 3) ./dpdk-testpmd --vdev=net_af_xdp0,iface=p1
>>> --vdev=net_af_xdp1,iface=p2 (I also tested this with two igb devices,
>>> with the same errors)
>> I've tested 19.05-rc2, started testpmd with 2 af_xdp vdevs (with two i40e devices),
>> and it works for me.
>>
>> $ ./x86_64-native-linuxapp-gcc/app/testpmd -l 5,6 -n 4 --log-level=pmd.net.af_xdp:info -b 82:00.1 --no-pci --vdev net_af_xdp0,iface=ens786f1 --vdev net_af_xdp1,iface=ens786f0
>> EAL: Detected 88 lcore(s)
>> EAL: Detected 2 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Probing VFIO support...
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>> testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> Port 0: 3C:FD:FE:C5:E2:41
>> Configuring Port 1 (socket 0)
>> Port 1: 3C:FD:FE:C5:E2:40
>> Checking link statuses...
>> Done
>> No commandline core given, start packet forwarding
>> io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
>> Logical Core 6 (socket 0) forwards packets on 2 streams:
>>   RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
>>   RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>
>>   io packet forwarding packets/burst=32
>>   nb forwarding cores=1 - nb forwarding ports=2
>>   port 0: RX queue number: 1 Tx queue number: 1
>>     Rx offloads=0x0 Tx offloads=0x0
>>     RX queue: 0
>>       RX desc=0 - RX free threshold=0
>>       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       RX Offloads=0x0
>>     TX queue: 0
>>       TX desc=0 - TX free threshold=0
>>       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       TX offloads=0x0 - TX RS bit threshold=0
>>   port 1: RX queue number: 1 Tx queue number: 1
>>     Rx offloads=0x0 Tx offloads=0x0
>>     RX queue: 0
>>       RX desc=0 - RX free threshold=0
>>       RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       RX Offloads=0x0
>>     TX queue: 0
>>       TX desc=0 - TX free threshold=0
>>       TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>       TX offloads=0x0 - TX RS bit threshold=0
>> Press enter to exit
>>
>> Could you paste your whole failure log here?
>>> I'm using Linux 5.1-rc6 and an up-to-date libbpf. The setup works for
>>> the first device and fails for the second device when creating bpf maps
>>> in libbpf ("qidconf_map" or "xsks_map"). It seems that these maps also
>>> need unique names and cannot exist twice under the same name.
>> As far as I know, there should not be such a constraint; the bpf map creations
>> are wrapped in libbpf.
>>
>>> Furthermore, if I run step 3 again after it failed the first time,
>>> xdp vdev allocation already fails for the first xdp vdev and does not
>>> reach the second one. Please let me know if you need some program output
>>> or more information from me.
>>>
>>> Best regards,
>>> Markus
>>>
>> Thanks,
>> Xiaolong
>>
>>> On 4/18/19 3:05 AM, Ye Xiaolong wrote:
>>>> Hi, Markus
>>>>
>>>> On 04/17, Markus Theil wrote:
>>>>> I tested the new af_xdp-based device on the current master branch and
>>>>> noticed that the use of static mempool names allows the creation of only
>>>>> a single af_xdp vdev. If a second vdev of the same type gets
>>>>> created, the mempool allocation fails.
>>>> Thanks for reporting, could you paste the cmdline you used and the error log?
>>>> Are you referring to ring creation or mempool creation?
>>>>
>>>> Thanks,
>>>> Xiaolong
>>>>> Best regards,
>>>>> Markus Theil
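
On the static-name issue from the 04/17 report: DPDK object names (rings, mempools) must be unique process-wide, so a fixed name fails as soon as a second vdev tries to create the same object. A minimal sketch of the kind of fix the cited commit "net/af_xdp: fix creating multiple instance" applies, with hypothetical identifiers ("af_xdp_ring_%s", create_buf_ring()) rather than the PMD's actual ones:

#include <stdio.h>
#include <rte_ring.h>

/* Hypothetical sketch, not the PMD's actual code: a fixed name like
 * "af_xdp_ring" makes the second rte_ring_create() fail because DPDK
 * object names must be unique process-wide. Deriving the name from a
 * per-device token (here the interface name) avoids the collision. */
static struct rte_ring *
create_buf_ring(const char *if_name, unsigned int count, int socket_id)
{
	char ring_name[RTE_RING_NAMESIZE];

	snprintf(ring_name, sizeof(ring_name), "af_xdp_ring_%s", if_name);
	return rte_ring_create(ring_name, count, socket_id,
			       RING_F_SP_ENQ | RING_F_SC_DEQ);
}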