DPDK patches and discussions
* [dpdk-dev] [Bug 10] [Testpmd] NUMA, speed issue
@ 2018-01-17 13:45 bugzilla
From: bugzilla @ 2018-01-17 13:45 UTC (permalink / raw)
  To: dev

https://dpdk.org/tracker/show_bug.cgi?id=10

            Bug ID: 10
           Summary: [Testpmd] NUMA, speed issue
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: All
            Status: CONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: nounoussma@hotmail.com
  Target Milestone: ---

Hello, 

I need help processing packets with DPDK on an Intel Xeon machine. When I
launch testpmd, I am wondering whether the traces below indicate problems
that would block bandwidth testing:

>./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
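
(For reference: -l 0-3 pins EAL to cores 0-3, -n 4 sets the number of memory
channels, -i selects interactive mode, --portmask=0x1 enables only port 0,
and --nb-cores=2 uses two forwarding cores. Enabling a single port is what
triggers the odd-port-count warning further down.)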

EAL: Detected 8 lcore(s)
EAL: 1024 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ab net_ixgbe
EAL: PCI device 0000:04:10.1 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.3 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:04:10.5 on NUMA socket 0
EAL:   probe driver: 8086:15a8 net_ixgbe_vf
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15a4 net_fm10k
EAL: PCI device 0000:08:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1533 net_e1000_igb
Interactive-mode selected
previous number of forwarding ports 2 - changed to number of configured ports 1
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2240, socket=0
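
Two of the EAL messages above point at environment setup rather than testpmd
itself: the hugetlbfs warning means pages were reserved but no hugetlbfs mount
was found, and the numa_maps message usually just means the kernel has no NUMA
support, so EAL assumes all memory sits on socket 0. A minimal sketch for
mounting the reserved 2 MB pages (the /mnt/huge path is a common convention,
not a requirement):

mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages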

Warning! Cannot handle an odd number of ports with the current port topology.
Configuration must be changed to have an even number of ports, or relaunch
application with --port-topology=chained
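
As the warning itself suggests, a single port (--portmask=0x1) can be run in
chained topology by relaunching, e.g.:

./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2 --port-topology=chained

Alternatively, --portmask=0x3 would enable two ports and satisfy the default
paired topology.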

Configuring Port 0 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 0: 00:A0:C9:23:45:69
Configuring Port 1 (socket 0)
PMD: fm10k_dev_configure(): fm10k always strip CRC
Port 1: 00:A0:C9:23:45:6A
Checking link statuses...
Port 0 Link Up - speed 0 Mbps - full-duplex
Port 1 Link Up - speed 0 Mbps - full-duplex


Overall, the traces seem to show NUMA, link speed, and hugepage issues.
Do you have any idea what is going wrong?
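
(For reference, the hugepage state can be checked with standard Linux
commands before relaunching:

grep -i huge /proc/meminfo
mount | grep hugetlbfs

If HugePages_Total is non-zero but no hugetlbfs mount is listed, the EAL
warning above is expected.)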

Thank you

-- 
You are receiving this mail because:
You are the assignee for the bug.
