* [Bug 1058] Mellanox mlx5_pmd is reporting incorrect number of maximum rx and tx queues supported from rte_eth_dev_info_get
From: bugzilla @ 2022-07-25 1:54 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=1058
Bug ID: 1058
Summary: Mellanox mlx5_pmd is reporting incorrect number of maximum rx and tx queues supported from rte_eth_dev_info_get
Product: DPDK
Version: 21.11
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: sahithi.singam@oracle.com
Target Milestone: ---
DPDK is incorrectly reporting the maximum number of Rx and Tx queues supported as
1024, whereas Linux reports them correctly in the ethtool output.
The maximum number of Rx queues supported on Mellanox ConnectX-6 Dx SR-IOV VFs
was reported as 1024 when used with DPDK, but only 15 when the same VFs were used
with Linux. The behavior is the same when DPDK is used on Mellanox
ConnectX-4 based SR-IOV Virtual Functions.
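For reference, the numbers in question come from rte_eth_dev_info_get(). A minimal
sketch of how an application reads them is shown below (not the exact application
code; port id 0 and the helper name are illustrative):

#include <stdio.h>
#include <rte_ethdev.h>

static void print_queue_limits(uint16_t port_id)
{
        struct rte_eth_dev_info dev_info;

        /* Ask the PMD for the limits it advertises. */
        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
                return;

        /* On the ConnectX-6 Dx VF described above this prints 1024/1024,
         * while ethtool -l on the same VF reports only 15 combined channels. */
        printf("max_rx_queues=%u max_tx_queues=%u\n",
               dev_info.max_rx_queues, dev_info.max_tx_queues);
}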
~ # ethtool -i eth1
driver: mlx5_core
version: 5.0-0
firmware-version: 22.31.1660 (ORC0000000007)
expansion-rom-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
~ # ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX: 0
TX: 0
Other: 0
Combined: 15
Current hardware settings:
RX: 0
TX: 0
Other: 0
Combined: 15
opt/dpdk-testpmd -l 2-7 -m 4 --allow 0000:00:04.0 -- --portmask=0x1 --mbcache=64 --forward-mode=io --eth-peer=0,02:00:17:0A:4B:FB --rxq=100 --txq=100 -i
EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No available 1048576 kB hugepages reported
EAL: Probe PCI driver: mlx5_pci (15b3:101e) device: 0000:00:04.0 (socket 0)
mlx5_net: cannot bind mlx5 socket: No such file or directory
mlx5_net: Cannot initialize socket: No such file or directory
mlx5_net: DV flow is not supported
TELEMETRY: No legacy callbacks, legacy socket not created
Set io packet forwarding mode
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=78848, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.
Configuring Port 0 (socket 0)
mlx5_net: port 0 queue 19 empty mbuf pool
mlx5_net: port 0 Rx queue allocation failed: Cannot allocate memory
Fail to start port 0: Cannot allocate memory
Please stop the ports first
Done
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 02:00:17:07:42:17
Device name: 0000:00:04.0
Driver name: mlx5_pci
Firmware-version: 22.31.1660
Devargs:
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 128
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 512
Supported RSS offload flow types:
ipv4
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-other
ipv6
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-other
ipv6-ex
ipv6-tcp-ex
ipv6-udp-ex
user defined 60
user defined 61
user defined 62
user defined 63
Minimum size of RX buffer: 32
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 65280
Current number of RX queues: 100
Max possible RX queues: 1024
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 100
Max possible TX queues: 1024
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x10( FLOW_SHARED_OBJECT_KEEP )
Switch name: 0000:00:04.0
Switch domain Id: 0
Switch Port Id: 65535
testpmd>
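The practical impact is that an application which sizes its configuration from
dev_info cannot protect itself. A rough sketch of the usual clamping is below
(helper name and the zeroed port configuration are illustrative, not the actual
testpmd code): with max_rx_queues/max_tx_queues reported as 1024 the clamp is a
no-op, so the 100-queue request above passes rte_eth_dev_configure() and only
fails later when the port is started.

#include <string.h>
#include <rte_ethdev.h>

static int configure_port_queues(uint16_t port_id,
                                 uint16_t want_rxq, uint16_t want_txq)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        /* Clamp the request to the advertised limits.  Because the mlx5 VF
         * reports 1024, this clamp does not trigger for the 100-queue
         * request shown above. */
        if (want_rxq > dev_info.max_rx_queues)
                want_rxq = dev_info.max_rx_queues;
        if (want_txq > dev_info.max_tx_queues)
                want_txq = dev_info.max_tx_queues;

        return rte_eth_dev_configure(port_id, want_rxq, want_txq, &conf);
}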