From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: by dpdk.org (Postfix, from userid 33)
	id CDA9F7CD8; Wed, 17 Jan 2018 14:45:46 +0100 (CET)
From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Wed, 17 Jan 2018 13:45:46 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: testpmd
X-Bugzilla-Version: unspecified
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: nounoussma@hotmail.com
X-Bugzilla-Status: CONFIRMED
X-Bugzilla-Resolution:
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform op_sys bug_status bug_severity priority component assigned_to reporter target_milestone
Message-ID:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://dpdk.org/tracker/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
MIME-Version: 1.0
Subject: [dpdk-dev] [Bug 10] [Testpmd] NUMA, speed issue
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 17 Jan 2018 13:45:46 -0000

https://dpdk.org/tracker/show_bug.cgi?id=10

            Bug ID: 10
           Summary: [Testpmd] NUMA, speed issue
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: All
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: nounoussma@hotmail.com
  Target Milestone: ---

Hello,

I need help managing packets with DPDK on an Intel Xeon chip. When I launch
testpmd, I'm wondering whether the output traces below prevent checking the
bandwidth:

>./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
EAL: Detected 8 lcore(s)
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found
for that size
EAL: Probing VFIO support...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in
socket_id 0
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:10.1 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.3 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.5 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
Interactive-mode selected
previous number of forwarding ports 2 - changed to number of configured ports 1
USER1: create a new mbuf pool : n=171456, size=2240, socket=0
Warning! Cannot handle an odd number of ports with the current port topology.
Configuration must be changed to have an even number of ports, or relaunch
application with --port-topology=chained
Configuring Port 0 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 0: 00:A0:C9:23:45:69
Configuring Port 1 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 1: 00:A0:C9:23:45:6A
Checking link statuses...
Port 0 Link Up - speed 0 Mbps - full-duplex
Port 1 Link Up - speed 0 Mbps - full-duplex

The traces show that there are NUMA, speed, and hugepage issues. Do you have
any idea what is wrong?

Thank you

-- 
You are receiving this mail because:
You are the assignee for the bug.
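The "no mounted hugetlbfs found" warning in the log usually means the hugepages
were reserved but no hugetlbfs filesystem is mounted for the EAL to back its
memory with. A minimal sketch of preparing 2 MB hugepages before launching
testpmd (the /mnt/huge mount point is an assumption, not from the original
report; run as root):

```shell
# Reserve 1024 hugepages of 2 MB each (matching the counts in the EAL log)
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount a hugetlbfs so the EAL can find and use the reserved pages
# (/mnt/huge is an assumed mount point; any directory works)
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# Verify the reservation and the mount took effect
grep HugePages /proc/meminfo
mount | grep hugetlbfs
```

This is a system-configuration fragment, so it is only a sketch of the usual
setup; whether it resolves the NUMA and link-speed messages as well would need
to be confirmed on the actual machine.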