DPDK usage discussions
Subject: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] not receiving (transmitting) packets
From: fwefew 4t4tg
Date: 2022-04-08 0:07 UTC
To: users


I have two identical bare-metal boxes running an Intel(R) Xeon(R) E-2278G CPU @
3.40GHz. There is no virtualization, so I did NOT enable the IOMMU. Both boxes
are equipped with two Mellanox Technologies MT27710 NICs and two Intel NICs;
the Intel NICs are out of scope. I am using the Mellanox NIC at 01:00.1 for
DPDK:

# lspci | grep Eth
01:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
01:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

The ibv and mst utilities see both Mellanox NICs. DPDK's testpmd application
sees the NIC and reports sensible values. The only oddity is that ibv_devinfo
reports the transport as InfiniBand even though the link layer is Ethernet.

My application sends UDP packets from one machine to the other. I know this
code works on AWS ENA NICs. The Mellanox build finds the NIC just as DPDK's
testpmd does, reporting the same EAL info. There are no errors transmitting
packets; all the TX stats are exactly what they should be. However, the
receiving side never sees any packets: it reports no errors, and every call to
rx_burst returns 0 packets. There is no firewall issue; I can use ncat in TCP
and UDP mode to send files between the machines just fine.
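
For reference, the receive path is just a plain rte_eth_rx_burst() polling
loop. A stripped-down sketch (the real code sets up the port and queue ids
from configuration; nothing else happens in the hot loop):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Simplified sketch of my RX path; port/queue setup omitted. */
static void rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Number of packets pulled from the queue; on the receiving
         * box this is always 0. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

        for (uint16_t i = 0; i < nb_rx; i++) {
            /* ... process the UDP payload ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}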

My application enables checksum offload for both RX and TX (offload value 6).
I triple-checked the MAC and IP addresses used in the code; I believe they are
fine. I set auto-negotiate on both the RX and TX side and have also tried
forcing a 10 Gbps link speed. No help. I also ran testpmd and then tried to
send it packets with ncat; testpmd never sees any packets either.
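
To be concrete, the port configuration reduces to roughly the sketch below.
The RTE_ETH_* names are my reading of what the value-6 offload mask and the
speed settings map to in recent DPDK releases, not a paste of the real code:

#include <string.h>
#include <rte_ethdev.h>

/* Simplified port setup; mempool and queue setup omitted.
 * Assumption: "value 6" is the RX/TX offload bit mask, i.e.
 * IPv4 + UDP checksum offload (0x2 | 0x4). */
static int configure_port(uint16_t port_id)
{
    struct rte_eth_conf conf;
    memset(&conf, 0, sizeof(conf));

    conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                           RTE_ETH_RX_OFFLOAD_UDP_CKSUM;   /* == 0x6 */
    conf.txmode.offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                           RTE_ETH_TX_OFFLOAD_UDP_CKSUM;   /* == 0x6 */
    conf.link_speeds = RTE_ETH_LINK_SPEED_AUTONEG;         /* also tried
                                                              RTE_ETH_LINK_SPEED_10G */

    /* one RX queue and one TX queue, just for this sketch */
    return rte_eth_dev_configure(port_id, 1, 1, &conf);
}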

I have read through https://doc.dpdk.org/guides/nics/mlx5.html for proper
setup. I found the following deviations:

- https://doc.dpdk.org/guides/platform/mlx5.html#mlx5-common-env says to
set the link type to Eth (mlxconfig -d <mst device> query | grep LINK_TYPE).
These devices do NOT report a LINK_TYPE parameter and do not allow setting
one. I am assuming they only operate in Eth mode.

- the latest and greatest MFT toolkit (wget
https://www.mellanox.com/downloads/MFT/mft-4.18.0-106-x86_64-deb.tgz) does
not include the mlxdevm utility, and neither does the OFED install (wget
https://www.mellanox.com/downloads/ofed/MLNX_EN-5.5-1.0.3.2/mlnx-en-5.5-1.0.3.2-ubuntu20.04-x86_64.iso).
The ISO file also does NOT contain a utility called mlnxofedinstall, so I am
not sure whether I am missing something. The doc reads:

The firmware, the libraries libibverbs, libmlx5, and mlnx-ofed-kernel
modules are packaged in Mellanox OFED. After downloading, it can be
installed with this command:

./mlnxofedinstall --dpdk

So I just ran ./install --dpdk, since the ISO image does include an install
script that accepts the --dpdk argument.

The doc https://doc.dpdk.org/guides/platform/mlx5.html#mlx5-common-env also
gives a whole set of instructions about SR-IOV, SF ports, etc., but those
require mlxdevm. As noted above, I do not have that utility.

So I am stuck.

DETAILS:

uname -a
Linux client 5.13.0-28-generic #31~20.04.1-Ubuntu SMP Wed Jan 19 14:08:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

root@client:~/Dev/reinvent/scripts# ibv_devices
    device             node GUID
    ------           ----------------
    mlx5_0           0c42a1030065fd82
    mlx5_1           0c42a1030065fd83

# ibv_devinfo
hca_id: mlx5_0
transport: InfiniBand (0)
fw_ver: 14.32.1010
node_guid: 0c42:a103:0065:fd82
sys_image_guid: 0c42:a103:0065:fd82
vendor_id: 0x02c9
vendor_part_id: 4117
hw_ver: 0x0
board_id: MT_2420110034
phys_port_cnt: 1
port: 1
state: PORT_ACTIVE (4)
max_mtu: 4096 (5)
active_mtu: 1024 (3)
sm_lid: 0
port_lid: 0
port_lmc: 0x00
link_layer: Ethernet

hca_id: mlx5_1
transport: InfiniBand (0)
fw_ver: 14.32.1010
node_guid: 0c42:a103:0065:fd83
sys_image_guid: 0c42:a103:0065:fd82
vendor_id: 0x02c9
vendor_part_id: 4117
hw_ver: 0x0
board_id: MT_2420110034
phys_port_cnt: 1
port: 1
state: PORT_ACTIVE (4)
max_mtu: 4096 (5)
active_mtu: 1024 (3)
sm_lid: 0
port_lid: 0
port_lmc: 0x00
link_layer: Ethernet

# mst status -v
MST modules:
------------
    MST PCI module is not loaded
    MST PCI configuration module loaded
PCI devices:
------------
DEVICE_TYPE             MST                           PCI       RDMA     NET            NUMA
ConnectX4LX(rev:0)      /dev/mst/mt4117_pciconf0.1    01:00.1   mlx5_1   net-enp1s0f1   -1
ConnectX4LX(rev:0)      /dev/mst/mt4117_pciconf0      01:00.0   mlx5_0   net-bond0      -1

DPDK's testpmd application sees 01:00.1 and brings it up without complaint:

/root/Dev/dpdk/build/app/dpdk-testpmd --proc-type primary --in-memory --log-level 7 -n 4 --allow 01:00.1,class=eth -- -i
EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:01:00.1 (socket 0)
mlx5_net: No available register for sampler.
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:65:FD:83
Checking link statuses...
Done
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 0C:42:A1:65:FD:83
Device name: 01:00.1
Driver name: mlx5_pci
Firmware-version: 14.32.1010
Devargs: class=eth
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 128
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 1
Supported RSS offload flow types:
  ipv4
  ipv4-frag
  ipv4-tcp
  ipv4-udp
  ipv4-other
  ipv6
  ipv6-frag
  ipv6-tcp
  ipv6-udp
  ipv6-other
  ipv6-ex
  ipv6-tcp-ex
  ipv6-udp-ex
  user defined 60
  user defined 61
  user defined 62
  user defined 63
Minimum size of RX buffer: 32
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 65280
Current number of RX queues: 1
Max possible RX queues: 1024
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1024
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x14( RXQ_SHARE FLOW_SHARED_OBJECT_KEEP )
Switch name: 01:00.1
Switch domain Id: 0
Switch Port Id: 65535
Switch Rx domain: 0
testpmd>

# lsmod | egrep "(mlx|ib)" | sort
ib_cm                  53248  2 rdma_cm,ib_ipoib
ib_core               368640  8 rdma_cm,ib_ipoib,iw_cm,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
ib_ipoib              135168  0
ib_umad                24576  0
ib_uverbs             139264  2 rdma_ucm,mlx5_ib
libahci                36864  1 ahci
libcrc32c              16384  2 btrfs,raid456
mlx5_core            1634304  1 mlx5_ib
mlx5_ib               397312  0
mlx_compat             69632  11 rdma_cm,ib_ipoib,mlxdevm,iw_cm,ib_umad,ib_core,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,mlx5_core
mlxdevm               172032  1 mlx5_core
mlxfw                  32768  1 mlx5_core
pci_hyperv_intf        16384  1 mlx5_core
psample                20480  1 mlx5_core
tls                    94208  2 bonding,mlx5_core

