DPDK patches and discussions
* [dpdk-dev] [Bug 463] In Mellanox MLX5 driver, NULL pointer access in mlx5_ipool_malloc()
@ 2020-04-27  9:33 bugzilla
From: bugzilla @ 2020-04-27  9:33 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=463

            Bug ID: 463
           Summary: In Mellanox MLX5 driver, NULL pointer access in
                    mlx5_ipool_malloc()
           Product: DPDK
           Version: unspecified
          Hardware: ARM
                OS: Linux
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: Lijian.Zhang@arm.com
  Target Milestone: ---

With the latest DPDK master branch code, on a Mellanox ConnectX-5 NIC running
Ubuntu 18.04, the code below dereferences a NULL pointer in mlx5_ipool_malloc().

rte_bitmap_scan(trunk->bmp, &iidx, &slab) is called with trunk->bmp, which is
a NULL pointer.
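
For clarity, here is the faulting call pattern pulled out into a standalone
helper (an illustrative extraction only, not the actual driver function;
trunk_bmp stands in for trunk->bmp):

#include <rte_bitmap.h>

/* Illustrative helper: find the first set bit of a trunk bitmap using the
 * usual rte_bitmap scan pattern. rte_bitmap_scan() dereferences its first
 * argument without any NULL check, so trunk_bmp == NULL faults right at
 * this call. */
static int
trunk_first_free_index(struct rte_bitmap *trunk_bmp, uint32_t *out_idx)
{
	uint64_t slab = 0;
	uint32_t iidx = 0;

	if (!rte_bitmap_scan(trunk_bmp, &iidx, &slab))
		return -1;	/* bitmap is empty */
	*out_idx = iidx + __builtin_ctzll(slab);
	return 0;
}

The gdb session at the crash site shows the pool and trunk state: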

(gdb) p pool->free_list
$1 = 65535
(gdb) n
(gdb)
(gdb)
(gdb) p trunk
$2 = (struct mlx5_indexed_trunk *) 0x17ff01d80
(gdb) p *trunk
$3 = {idx = 0, prev = 65535, next = 65535, free = 4096, bmp = 0x0, data = 0x17ff01dc0 ""}
(gdb) p trunk->bmp
$4 = (struct rte_bitmap *) 0x0
(gdb)
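
For comparison, below is a minimal standalone sketch of how an rte_bitmap is
normally set up before it can be scanned (generic rte_bitmap API usage built
against the DPDK headers; the 4096-bit size mirrors free = 4096 in the dump,
everything else is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <rte_bitmap.h>

int
main(void)
{
	const uint32_t n_bits = 4096;	/* 4096 entries, as in the trunk dump */
	uint32_t bmp_size = rte_bitmap_get_memory_footprint(n_bits);
	/* rte_bitmap_init() requires cache-line-aligned memory. */
	uint8_t *mem = aligned_alloc(RTE_CACHE_LINE_SIZE, bmp_size);
	struct rte_bitmap *bmp;
	uint64_t slab = 0;
	uint32_t iidx = 0;

	if (mem == NULL)
		return 1;
	bmp = rte_bitmap_init(n_bits, mem, bmp_size);
	if (bmp == NULL)
		return 1;
	rte_bitmap_set(bmp, 123);	/* mark one bit */
	if (rte_bitmap_scan(bmp, &iidx, &slab))
		printf("first set bit: %u\n", iidx + __builtin_ctzll(slab));
	free(mem);
	return 0;
}

Seen against that, bmp = 0x0 in the trunk dump means the trunk reaches
rte_bitmap_scan() without an initialized bitmap.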


Below is the system information of my server.

login@vpp-tx2-01:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

login@vpp-tx2-01:~$ lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              256
On-line CPU(s) list: 0-255
Thread(s) per core:  4
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        2
Vendor ID:           Cavium
Model:               2
Model name:          ThunderX2 99xx
Stepping:            0x1
CPU max MHz:         2500.0000
CPU min MHz:         1000.0000
BogoMIPS:            400.00
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            32768K
NUMA node0 CPU(s):   0-127
NUMA node1 CPU(s):   128-255
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics cpuid asimdrdm

login@vpp-tx2-01:~$ sudo lshw -c network -businfo
[sudo] password for login:
Bus info          Device      Class      Description
====================================================
pci@0000:05:00.0  enp5s0f0    network    Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:05:00.1  enp5s0f1    network    Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:08:00.0  enp8s0f0    network    I350 Gigabit Network Connection
pci@0000:08:00.1  enp8s0f1    network    I350 Gigabit Network Connection
pci@0000:0b:00.0  enp11s0f0   network    MT27800 Family [ConnectX-5]
pci@0000:0b:00.1  enp11s0f1   network    MT27800 Family [ConnectX-5]
pci@0000:0e:00.0  eno1        network    FastLinQ QL41000 Series 10/25/40/50GbE Controller
pci@0000:0e:00.1  eno2        network    FastLinQ QL41000 Series 10/25/40/50GbE Controller
pci@0000:91:00.0  enp145s0f0  network    Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:91:00.1  enp145s0f1  network    Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:9a:00.0  enp154s0f0  network    MT27800 Family [ConnectX-5]
pci@0000:9a:00.1  enp154s0f1  network    MT27800 Family [ConnectX-5]

login@vpp-tx2-01:~$ ofed_info -s
MLNX_OFED_LINUX-5.0-2.1.8.0:

-- 
You are receiving this mail because:
You are the assignee for the bug.
