From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Tue, 25 Dec 2018 13:40:32 +0000
Subject: [dpdk-dev] [Bug 175] DPDK on Azure using `intel-go/nff-go` fails using `hv_netvsc` driver

https://bugs.dpdk.org/show_bug.cgi?id=175

            Bug ID: 175
           Summary: DPDK on Azure using `intel-go/nff-go` fails using
                    `hv_netvsc` driver
           Product: DPDK
           Version: 18.11
          Hardware: Other
                OS: Linux
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: guesslin1986@gmail.com
  Target Milestone: ---

Overview: Running DPDK on Azure using the `intel-go/nff-go` library fails with
`Requested receive port exceeds number of ports which can be used by DPDK
(bind to DPDK). (3)`

Steps to Reproduce:
1) Build the binary from
   https://gist.github.com/guesslin/76be1139e5e3b8d71e964e194c5d9322
2) Run `app -d uio_hv_generic -n <nic>`

Actual Results:

```
$ sudo ./app -d uio_hv_generic -n eth2
2018/12/25 06:58:49 Binding PMD driver eth2 to NIC uio_hv_generic
2018/12/25 06:58:49 Initiating nff-go flow system
------------***-------- Initializing DPDK --------***------------
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: FATAL: Cannot get hugepage information.
EAL: Cannot get hugepage information.
Initiated nff-go flow system
Setting receiver on port 0
panic: Erorr: Requested receive port exceeds number of ports which can be used by DPDK (bind to DPDK). (3), Msg: start dpdk/nff-go failed
```

Expected Results: no error outputs

# result from testpmd

```
glasnostic@glasnostic-router:~$ sudo testpmd -w 0003:00:02.0 --vdev="net_vdev_netvsc0,iface=eth2" -- -i --port-topology=chained
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc0_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc0_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
testpmd> show port info all

********************* Infos for port 0 *********************
MAC address: 00:0D:3A:18:3E:11
Driver name: net_failsafe
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Minimum size of RX buffer: 0
Maximum configurable length of RX packet: 1522
Current number of RX queues: 1
Max possible RX queues: 16
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 16
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd>
```

## directly run testpmd

```
glasnostic@glasnostic-router:~$ sudo testpmd
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: CRC stripping enabled
  RX queues=1 - RX desc=1024 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ offloads=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Bye...
```

## load mlx4_ib

```
glasnostic@glasnostic-router:~$ sudo modprobe mlx4_ib
glasnostic@glasnostic-router:~$ lsmod | grep mlx4_ib
mlx4_ib               176128  0
mlx4_core             294912  2 mlx4_en,mlx4_ib
devlink                53248  3 mlx4_en,mlx4_core,mlx4_ib
ib_core               229376  6 ib_iser,ib_cm,rdma_cm,ib_uverbs,iw_cm,mlx4_ib
glasnostic@glasnostic-router:~$ sudo testpmd
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/run/.rte_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: cannot access device, is mlx4_ib loaded?
EAL: Requested device 0003:00:02.0 cannot be used
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc_id0 as dtap0
PMD: net_failsafe: MAC address is 00:0d:3a:18:3e:11
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: TX configured queues number: 1
PMD: net_tap_net_vdev_netvsc_id0: 0xfc3a80: RX configured queues number: 1
Port 0: 00:0D:3A:18:3E:11
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: CRC stripping enabled
  RX queues=1 - RX desc=1024 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=1 - TX desc=1024 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ offloads=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 7              RX-dropped: 0             RX-total: 7
  TX-packets: 7              TX-dropped: 0             TX-total: 7
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Bye...
```

-- 
You are receiving this mail because:
You are the assignee for the bug.
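
A note on the nff-go failure above: the EAL lines "No free hugepages reported" and "FATAL: Cannot get hugepage information." indicate that no hugepages were reserved when the app ran, so the EAL aborted before any port could be bound. A minimal sketch of reserving hugepages, assuming a standard Linux sysfs layout and 2 MB pages; the page count and mount point are example values, not ones taken from this report:

```shell
# Show the current reservation; an all-zero HugePages_Total
# matches the EAL error seen in the nff-go run
grep -i huge /proc/meminfo

# Reserve 1024 x 2 MB hugepages (requires root; example value)
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs so the EAL can map the reserved pages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
```

The reservation does not persist across reboots; a sysctl entry (`vm.nr_hugepages`) or kernel boot parameter is the usual way to make it permanent.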
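
For comparison with the `app -d uio_hv_generic` step: the DPDK netvsc guide describes manually rebinding a synthetic interface from `hv_netvsc` to `uio_hv_generic` roughly as follows. This is a sketch based on that guide, not on commands from this report; `eth2` is the interface from the logs above, and the VMBus class-ID value should be checked against the guide for the DPDK version in use:

```shell
modprobe uio_hv_generic

# VMBus device UUID of the interface to take over (eth2 per the logs)
DEV_UUID=$(basename $(readlink /sys/class/net/eth2/device))

# Network device class ID, so uio_hv_generic will accept the device
NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id

# Detach from hv_netvsc and attach to uio_hv_generic (requires root)
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
```

If nff-go's `-d uio_hv_generic` binding performs a different sequence, the difference may be relevant to why testpmd's `vdev_netvsc`/failsafe path works while the nff-go run does not.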