From mboxrd@z Thu Jan 1 00:00:00 1970
From: Markus Theil
To: Ye Xiaolong
Cc: dev@dpdk.org
Organization: TU Ilmenau
Date: Wed, 24 Apr 2019 22:33:35 +0200
Message-ID: <4f144da3-76d8-d811-736b-f8e7975d9cde@tu-ilmenau.de>
In-Reply-To: <20190424144727.GA84411@intel.com>
Subject: Re: [dpdk-dev] [BUG] net/af_xdp: Current code can only create one af_xdp device
List-Id: DPDK patches and discussions

Hi Xiaolong,

with only one vdev everything works. It stops working if I use two
vdevs. Both interfaces were brought up before testing.

Best regards,
Markus

On 24.04.19 16:47, Ye Xiaolong wrote:
> Hi, Markus
>
> On 04/24, Markus Theil wrote:
>> Hi Xiaolong,
>>
>> I also tested with i40e devices, with the same result.
>>
>> ./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>> net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: No free hugepages reported in hugepages-2048kB
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>> testpmd: create a new mbuf pool : n=267456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> Port 0: 3C:FD:FE:A3:E7:30
>> Configuring Port 1 (socket 0)
>> xsk_configure(): Failed to create xsk socket. (-1)
>> eth_rx_queue_setup(): Failed to configure xdp socket
>> Fail to configure port 1 rx queues
>> EAL: Error - exiting with code: 1
>>   Cause: Start ports failed
>>
> What about one vdev instance on your side? And have you brought up the interface?
> xsk_configure requires the interface to be in the up state.
>
> Thanks,
> Xiaolong
>
>
>> If I execute the same call again, I get error -16 already on the first port:
>>
>> ./dpdk-testpmd -n 4 --log-level=pmd.net.af_xdp:debug --no-pci --vdev
>> net_af_xdp0,iface=enp36s0f0 --vdev net_af_xdp1,iface=enp36s0f1
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: No free hugepages reported in hugepages-2048kB
>> EAL: No available hugepages reported in hugepages-2048kB
>> EAL: Probing VFIO support...
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>> testpmd: create a new mbuf pool : n=267456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> xsk_configure(): Failed to create xsk socket. (-16)
>> eth_rx_queue_setup(): Failed to configure xdp socket
>> Fail to configure port 0 rx queues
>> EAL: Error - exiting with code: 1
>>   Cause: Start ports failed
>>
>> Software versions/commits/infos:
>>
>> - Linux 5.1-rc6
>> - DPDK 7f251bcf22c5729792f9243480af1b3c072876a5 (19.05-rc2)
>> - libbpf from https://github.com/libbpf/libbpf
>> (910c475f09e5c269f441d7496c27dace30dc2335)
>> - DPDK and libbpf built with meson
>>
>> Best regards,
>> Markus
>>
>> On 4/24/19 8:35 AM, Ye Xiaolong wrote:
>>> Hi, Markus
>>>
>>> On 04/23, Markus Theil wrote:
>>>> Hi Xiaolong,
>>>>
>>>> I tested your commit "net/af_xdp: fix creating multiple instance" on the
>>>> current master branch. It does not work for me in the following minimal
>>>> test setting:
>>>>
>>>> 1) allocate 2x 1GB huge pages for DPDK
>>>>
>>>> 2) ip link add p1 type veth peer name p2
>>>>
>>>> 3) ./dpdk-testpmd --vdev=net_af_xdp0,iface=p1
>>>> --vdev=net_af_xdp1,iface=p2 (I also tested this with two igb devices,
>>>> with the same errors)
>>> I've tested 19.05-rc2, started testpmd with 2 af_xdp vdevs (with two i40e devices),
>>> and it works for me.
>>>
>>> $ ./x86_64-native-linuxapp-gcc/app/testpmd -l 5,6 -n 4 --log-level=pmd.net.af_xdp:info -b 82:00.1 --no-pci --vdev net_af_xdp0,iface=ens786f1 --vdev net_af_xdp1,iface=ens786f0
>>> EAL: Detected 88 lcore(s)
>>> EAL: Detected 2 NUMA nodes
>>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>>> EAL: Probing VFIO support...
>>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp0
>>> rte_pmd_af_xdp_probe(): Initializing pmd_af_xdp for net_af_xdp1
>>> testpmd: create a new mbuf pool : n=155456, size=2176, socket=0
>>> testpmd: preferred mempool ops selected: ring_mp_mc
>>> Configuring Port 0 (socket 0)
>>> Port 0: 3C:FD:FE:C5:E2:41
>>> Configuring Port 1 (socket 0)
>>> Port 1: 3C:FD:FE:C5:E2:40
>>> Checking link statuses...
>>> Done
>>> No commandline core given, start packet forwarding
>>> io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
>>> Logical Core 6 (socket 0) forwards packets on 2 streams:
>>> RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
>>> RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>>
>>> io packet forwarding packets/burst=32
>>> nb forwarding cores=1 - nb forwarding ports=2
>>> port 0: RX queue number: 1 Tx queue number: 1
>>> Rx offloads=0x0 Tx offloads=0x0
>>> RX queue: 0
>>> RX desc=0 - RX free threshold=0
>>> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>> RX Offloads=0x0
>>> TX queue: 0
>>> TX desc=0 - TX free threshold=0
>>> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>> TX offloads=0x0 - TX RS bit threshold=0
>>> port 1: RX queue number: 1 Tx queue number: 1
>>> Rx offloads=0x0 Tx offloads=0x0
>>> RX queue: 0
>>> RX desc=0 - RX free threshold=0
>>> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>> RX Offloads=0x0
>>> TX queue: 0
>>> TX desc=0 - TX free threshold=0
>>> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>>> TX offloads=0x0 - TX RS bit threshold=0
>>> Press enter to exit
>>>
>>> Could you paste your whole failure log here?
>>>> I'm using Linux 5.1-rc6 and an up to date libbpf. The setup works for
>>>> the first device and fails for the second device when creating bpf maps
>>>> in libbpf ("qidconf_map" or "xsks_map"). It seems that these maps also
>>>> need unique names and cannot exist twice under the same name.
>>> So far as I know, there should not be such a constraint; the bpf map creations
>>> are wrapped in libbpf.
>>>
>>>> Furthermore, if running step 3 again after it failed for the first time,
>>>> xdp vdev allocation already fails for the first xdp vdev and does not
>>>> reach the second one. Please let me know if you need some program output
>>>> or more information from me.
>>>>
>>>> Best regards,
>>>> Markus
>>>>
>>> Thanks,
>>> Xiaolong
>>>
>>>> On 4/18/19 3:05 AM, Ye Xiaolong wrote:
>>>>> Hi, Markus
>>>>>
>>>>> On 04/17, Markus Theil wrote:
>>>>>> I tested the new af_xdp based device on the current master branch and
>>>>>> noticed that the usage of static mempool names allows only for the
>>>>>> creation of a single af_xdp vdev. If a second vdev of the same type gets
>>>>>> created, the mempool allocation fails.
>>>>> Thanks for reporting, could you paste the cmdline you used and the error log?
>>>>> Are you referring to ring creation or mempool creation?
>>>>>
>>>>>
>>>>> Thanks,
>>>>> Xiaolong
>>>>>> Best regards,
>>>>>> Markus Theil

--
Markus Theil
Technische Universität Ilmenau, Fachgebiet Telematik/Rechnernetze
Postfach 100565
98684 Ilmenau, Germany

Phone: +49 3677 69-4582
Email: markus[dot]theil[at]tu-ilmenau[dot]de
Web: http://www.tu-ilmenau.de/telematik
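For illustration, below is a minimal sketch of the kind of per-device naming
that avoids the collision described in the quoted report above. DPDK ring and
mempool names must be unique system-wide, so a second rte_ring_create() or
rte_pktmbuf_pool_create() call with a fixed name fails because an object with
that name already exists. Deriving the names from the vdev's interface name
and queue index lets a second net_af_xdp vdev allocate its own objects. The
helper names and format strings here are made up for the example; this is not
the actual DPDK patch.

#include <stdint.h>
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>

/*
 * Hypothetical helpers: build per-vdev object names such as
 * "af_xdp_ring_p1_0" instead of one static name, so a second
 * net_af_xdp vdev does not collide with the first one's objects.
 */
struct rte_ring *
create_buf_ring(const char *if_name, uint16_t queue_idx, unsigned int count)
{
	char name[RTE_RING_NAMESIZE];

	snprintf(name, sizeof(name), "af_xdp_ring_%s_%u", if_name, queue_idx);
	return rte_ring_create(name, count, rte_socket_id(),
			       RING_F_SP_ENQ | RING_F_SC_DEQ);
}

struct rte_mempool *
create_mb_pool(const char *if_name, uint16_t queue_idx, unsigned int n)
{
	char name[RTE_MEMPOOL_NAMESIZE];

	snprintf(name, sizeof(name), "af_xdp_mb_pool_%s_%u", if_name, queue_idx);
	return rte_pktmbuf_pool_create(name, n, 250, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}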
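On the xsk side: xsk_configure() in the PMD ultimately goes through libbpf's
AF_XDP helpers, so the negative numbers printed in the logs above are errno
values returned by that layer. A minimal sketch of such a call, assuming
libbpf's xsk.h API (illustrative only, not the PMD's actual code):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <bpf/xsk.h>          /* libbpf AF_XDP helpers (xsk_socket__create) */
#include <linux/if_link.h>    /* XDP_FLAGS_* */

/*
 * Illustrative wrapper: create one AF_XDP socket on ifname/queue_id,
 * reusing an already configured umem. The PMD logs a failure here as
 * "Failed to create xsk socket. (<ret>)".
 */
int
create_xsk(const char *ifname, uint32_t queue_id, struct xsk_umem *umem,
	   struct xsk_ring_cons *rx, struct xsk_ring_prod *tx,
	   struct xsk_socket **xsk)
{
	struct xsk_socket_config cfg = {
		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		.libbpf_flags = 0,
		.xdp_flags = XDP_FLAGS_UPDATE_IF_NOEXIST,
		.bind_flags = 0,
	};
	int ret;

	ret = xsk_socket__create(xsk, ifname, queue_id, umem, rx, tx, &cfg);
	if (ret < 0)
		fprintf(stderr, "xsk_socket__create(%s,%u) failed: %s (%d)\n",
			ifname, queue_id, strerror(-ret), ret);
	return ret;
}

A return value of -16 is -EBUSY; with AF_XDP this typically indicates that the
interface/queue is still claimed by an XDP program or socket left over from a
previous run, which would explain why the second invocation fails already on
the first port.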