From: Aaron Lee
Date: Fri, 25 Feb 2022 10:29:29 -0800
Subject: Re: ConnectX5 Setup with DPDK
To: Thomas Monjalon
Cc: users@dpdk.org
List-Id: DPDK usage discussions

Hi Thomas,

I was doing some more testing and wanted to increase the RX queues for the CX5, but I'm wondering how to do that. I see in the usage example in the docs that I could pass --rxq=2 --txq=2 to set the queues to 2 each, but I don't see that reflected in my output when I run the command. Below is the output from running the command in https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean that the MCX515A-CCAT I have can't support more than 1 queue, or am I supposed to configure another setting?

EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 1)
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
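
For completeness, this is roughly the invocation I have in mind for two queues (a sketch only; --rxq/--txq are the flags from the testpmd docs, and I believe the queue counts can also be changed from the interactive prompt):

  # pass the queue counts on the testpmd command line (0000:af:00.0 is the card from the log above)
  build/app/dpdk-testpmd -a 0000:af:00.0 --in-memory -- -i --rxq=2 --txq=2

  # or reconfigure an already-running testpmd from its prompt
  testpmd> port stop all
  testpmd> port config all rxq 2
  testpmd> port config all txq 2
  testpmd> port start all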


Best,
Aaron

On Mon, Feb 21, 2022 at 11:10 PM Thomas Monjalon <thomas@monjalon.net> wrote:
21/02/2022 21:10, Aaron Lee:
> Hi Thomas,
>
> Actually I remembered in my previous setup I had run dpdk-devbind.py to
> bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> this and just wanted to confirm that this is correct.

Indeed, mlx5 PMD runs on top of mlx5 kernel driver.
We don't need UIO or VFIO drivers.
The kernel modules must remain loaded and can be used at the same time.
When DPDK is working, the traffic goes to the userspace PMD by default,
but it is possible to configure some flows to go directly to the kernel driver.
This behaviour is called "bifurcated model".
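
A minimal testpmd illustration of this (a sketch, assuming port 0 at 0000:af:00.0; the UDP match is only an example pattern, not something from this thread): with flow isolation enabled, only explicitly created flow rules reach the DPDK queues, and all other traffic stays on the kernel path.

  # start testpmd with isolated mode on all ports, then steer one flow to DPDK
  build/app/dpdk-testpmd -a 0000:af:00.0 --in-memory -- -i --flow-isolate-all
  testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions queue index 0 / end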


> On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee <acl049@ucsd.edu> wrote:
>
> > Hi Thomas,
> >
> > I tried installing things from scratch two days ago and have gotten
> > things working! I think part of the problem was figuring out the correct
> > hugepage allocation for my system. If I recall correctly, I tried setting
> > up my system with default page size 1G but perhaps didn't have enough pages
> > allocated at the time. Currently have the following which gives me the
> > output you've shown previously.
> >
> > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > Node Pages Size Total
> > 0    16    1Gb    16Gb
> > 1    16    1Gb    16Gb
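
For anyone reproducing this, a rough sketch of how that reservation can be made (the counts of 16 x 1 GB pages per node are just the values shown above; check the dpdk-hugepages.py flags against your DPDK version):

  # reserve 16 x 1 GB pages on each NUMA node via sysfs
  echo 16 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
  echo 16 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages

  # or reserve with the helper script shipped with DPDK
  usertools/dpdk-hugepages.py -p 1G -r 16G

1 GB pages are often reserved at boot instead (e.g. default_hugepagesz=1G hugepagesz=1G hugepages=32 on the kernel command line), since large contiguous allocations can fail at runtime.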
> >
> > root@yeti-04:~/dpdk-21.11# echo show port summary all |
> > build/app/dpdk-testpmd --in-memory -- -i
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: No free 2048 kB hugepages reported on node 0
> > EAL: No free 2048 kB hugepages reported on node 1
> > EAL: No available 2048 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > TELEMETRY: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> >
> > Configuring Port 0 (socket 1)
> > Port 0: EC:0D:9A:68:21:A8
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address       Name         Driver    Status  Link
> > 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci  up      100 Gbps
> >
> > Best,
> > Aaron
> >
> > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> > wrote:
> >
> >> 21/02/2022 19:52, Thomas Monjalon:
> >> > 18/02/2022 22:12, Aaron Lee:
> >> > > Hello,
> >> > >
> >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> >> > > wondering if the card I have simply isn't compatible. I first noticed
> >> that
> >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> >> error
> >> > > logs when running dpdk-pdump.
> >> >
> >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >> >
> >> > > EAL: Detected CPU lcores: 80
> >> > > EAL: Detected NUMA nodes: 2
> >> > > EAL: Detected static linkage of DPDK
> >> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> >> > > vdev_scan(): Failed to request vdev from primary
> >> > > EAL: Selected IOVA mode 'PA'
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> >> > > EAL: Cannot request default VFIO container fd
> >> > > EAL: VFIO support could not be initialized
> >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> >> > > mlx5_common: port 0 request to primary process failed
> >> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
> >> > > mlx5_common: Failed to load driver mlx5_eth
> >> > > EAL: Requested device 0000:af:00.0 cannot be used
> >> > > EAL: Error - exiting with code: 1
> >> > >   Cause: No Ethernet ports - bye
> >> >
> >> > From this log, we miss the previous steps before running the
> >> application.
> >> >
> >> > Please check these simple steps:
> >> > - install rdma-core
> >> > - build dpdk (meson build && ninja -C build)
> >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd
> >> --in-memory -- -i)
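
Put together as one runnable sketch, assuming an Ubuntu/Debian-style host and the DPDK source tree as the working directory (package names and the hugepage amount are illustrative, not taken from this thread):

  sudo apt-get install -y rdma-core libibverbs-dev   # user-space verbs stack used by the mlx5 PMD
  meson build && ninja -C build                      # build DPDK
  sudo usertools/dpdk-hugepages.py -p 1G -r 16G      # reserve hugepages, e.g. 16 x 1 GB
  echo "show port summary all" | sudo build/app/dpdk-testpmd --in-memory -- -i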
> >> >
> >> > EAL: Detected CPU lcores: 10
> >> > EAL: Detected NUMA nodes: 1
> >> > EAL: Detected static linkage of DPDK
> >> > EAL: Selected IOVA mode 'PA'
> >> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
> >> > Interactive-mode selected
> >> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
> >> > testpmd: preferred mempool ops selected: ring_mp_mc
> >> > Configuring Port 0 (socket 0)
> >> > Port 0: 0C:42:A1:D6:E0:00
> >> > Checking link statuses...
> >> > Done
> >> > testpmd> show port summary all
> >> > Number of available ports: 1
> >> > Port MAC Address       Name     Driver    Status  Link
> >> > 0    0C:42:A1:D6:E0:00 08:00.0  mlx5_pci  up      25 Gbps
> >> >
> >> > > I noticed that the pci id of the card I was given is 15b3:1017 as
> >> below.
> >> > > This sort of indicates to me that the PMD driver isn't supported on
> >> this
> >> > > card.
> >> >
> >> > This card is well supported and even officially tested with DPDK 21.11,
> >> > as you can see in the release notes:
> >> >
> >> https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> >> >
> >> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
> >> Family
> >> > > [ConnectX-5] [15b3:1017]
> >> > >
> >> > > I'd appreciate it if someone has gotten this card to work with DPDK to
> >> > > point me in the right direction or if my suspicions were correct that
> >> this
> >> > > card doesn't work with the PMD.
> >>
> >> If you want to check which hardware is supported by a PMD,
> >> you can use this command:
> >>
> >> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> >> PMD NAME: mlx5_eth
> >> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> >> PMD HW SUPPORT:
> >>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
> >>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
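
A quick way to check one particular device ID against that list, here the 15b3:1017 shown by lspci earlier in the thread:

  usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so | grep 1017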
> >>
> >> > Please tell me what drove you in the wrong direction,
> >> > because I really would like to improve the documentation & tools.
> >>
> >>
> >>
> >>
>




