From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1058] Mellanox mlx5_pmd is reporting incorrect number of maximum rx and tx queues supported from rte_eth_dev_info_get
Date: Mon, 25 Jul 2022 01:54:20 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1058

            Bug ID: 1058
           Summary: Mellanox mlx5_pmd is reporting incorrect number of
                    maximum rx and tx queues supported from
                    rte_eth_dev_info_get
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: sahithi.singam@oracle.com
  Target Milestone: ---

DPDK is incorrectly reporting the maximum number of supported Rx and Tx queues
as 1024, whereas Linux correctly reports them in the ethtool output.
The maximum number of Rx queues supported on Mellanox ConnectX-6 Dx SR-IOV VFs
was reported as 1024 when used with DPDK, but as only 15 when used with Linux.
The behavior is the same when DPDK is used on Mellanox ConnectX-4 based SR-IOV
Virtual Functions.
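For reference, the limits in question are the ones returned by
rte_eth_dev_info_get(). A minimal sketch of how they can be read back
(assuming the VF is already bound to the mlx5 PMD and probed as port 0;
the file name and port id are illustrative, not part of the original report):

/* query_dev_info.c - minimal sketch, not from the original report */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	struct rte_eth_dev_info dev_info;
	uint16_t port_id = 0;	/* assumes the VF probed as port 0 */

	/* Pass the usual EAL arguments, e.g. -a 0000:00:04.0 */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL initialization failed\n");
		return 1;
	}

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0) {
		fprintf(stderr, "rte_eth_dev_info_get() failed for port %u\n",
			port_id);
		return 1;
	}

	/* Reported as 1024 for the ConnectX-6 Dx VF, while ethtool -l
	 * shows only 15 combined channels for the same device. */
	printf("max_rx_queues: %u\n", dev_info.max_rx_queues);
	printf("max_tx_queues: %u\n", dev_info.max_tx_queues);

	return rte_eal_cleanup();
}

The two printed values are what testpmd displays as "Max possible RX queues"
and "Max possible TX queues" in the port info output further below.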
~ # ethtool -i eth1
driver: mlx5_core
version: 5.0-0
firmware-version: 22.31.1660 (ORC0000000007)
expansion-rom-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

~ # ethtool -l eth1
Channel parameters for eth1:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       15
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       15

opt/dpdk-testpmd -l 2-7 -m 4 --allow 0000:00:04.0 -- --portmask=0x1 --mbcache=64 --forward-mode=io --eth-peer=0,02:00:17:0A:4B:FB --rxq=100 --txq=100 -i

EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No available 1048576 kB hugepages reported
EAL: Probe PCI driver: mlx5_pci (15b3:101e) device: 0000:00:04.0 (socket 0)
mlx5_net: cannot bind mlx5 socket: No such file or directory
mlx5_net: Cannot initialize socket: No such file or directory
mlx5_net: DV flow is not supported
TELEMETRY: No legacy callbacks, legacy socket not created
Set io packet forwarding mode
Interactive-mode selected
testpmd: create a new mbuf pool : n=78848, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
mlx5_net: port 0 queue 19 empty mbuf pool
mlx5_net: port 0 Rx queue allocation failed: Cannot allocate memory
Fail to start port 0: Cannot allocate memory
Please stop the ports first
Done
testpmd> show port info 0

********************* Infos for port 0 *********************
MAC address: 02:00:17:07:42:17
Device name: 0000:00:04.0
Driver name: mlx5_pci
Firmware-version: 22.31.1660
Devargs:
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 50 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 128
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 512
Supported RSS offload flow types:
  ipv4  ipv4-frag  ipv4-tcp  ipv4-udp  ipv4-other  ipv6  ipv6-frag
  ipv6-tcp  ipv6-udp  ipv6-other  ipv6-ex  ipv6-tcp-ex  ipv6-udp-ex
  user defined 60  user defined 61  user defined 62  user defined 63
Minimum size of RX buffer: 32
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 65280
Current number of RX queues: 100
Max possible RX queues: 1024
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 100
Max possible TX queues: 1024
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x10( FLOW_SHARED_OBJECT_KEEP )
Switch name: 0000:00:04.0
Switch domain Id: 0
Switch Port Id: 65535
testpmd>

-- 
You are receiving this mail because:
You are the assignee for the bug.