Subject: Re: dpdk-testpmd fails with a Mellanox ConnectX-4 Lx NIC
From: Gábor LENCSE <lencse@hit.bme.hu>
To: users@dpdk.org
Date: Mon, 24 Feb 2025 22:32:40 +0100
In-Reply-To: <20250224233720.04fb1bae@sovereign>

Hi Dmitry,

On 2025-02-24 21:37, Dmitry Kozlyuk wrote:
> Mellanox NICs need mlx5_core kernel driver even when used via DPDK.

Indeed, it works! Thank you very much! I would have never thought of that.

However, it seems to be rather slow and it loses frames.

With my siitperf ( https://github.com/lencsegabor/siitperf ) measurement program I can achieve 7.1 Mfps per direction using bidirectional traffic with 64-byte frame size when I use the X540. And the bottleneck is surely the X540, as the numbers are the same whether I use fixed IP addresses and port numbers or vary either or both of them. With the X710, I can achieve more than 8 Mfps using RFC 4814 pseudorandom port numbers and more than 10 Mfps using fixed test frames (also with bidirectional traffic and 64-byte frame size).
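
(By RFC 4814 pseudorandom port numbers I mean that every test frame carries independently drawn source and destination UDP ports, so consecutive frames belong to different flows. The minimal sketch below is only an illustration of the idea, not the actual siitperf code; the 1024-65535 range and the std::mt19937_64 engine are assumptions made for the sketch, not siitperf's actual, configurable choices.)

// Minimal illustration of per-frame pseudorandom UDP port selection in the
// spirit of RFC 4814 (NOT the actual siitperf code; the 1024-65535 range
// and the std::mt19937_64 engine are assumptions made for this sketch).
#include <cstdint>
#include <iostream>
#include <random>

int main() {
    std::mt19937_64 gen(std::random_device{}());
    std::uniform_int_distribution<unsigned> port_dist(1024, 65535);

    // Each test frame gets independently drawn source and destination ports,
    // so consecutive frames hash to different flows (e.g. different RSS queues).
    for (int frame = 0; frame < 5; ++frame) {
        uint16_t src_port = static_cast<uint16_t>(port_dist(gen));
        uint16_t dst_port = static_cast<uint16_t>(port_dist(gen));
        std::cout << "frame " << frame << ": sport=" << src_port
                  << " dport=" << dst_port << '\n';
    }
    return 0;
}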

However, the first step of the binary search of the throughput test, using an 8 Mfps rate, lasted more than 120 s (instead of 60 s).

What is worse, the binary search counts down to 0 due to the loss of a small number of packets.
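
(To show why even a tiny, constant loss makes the result collapse: the throughput test is an RFC 2544-style binary search with a zero-loss criterion, roughly like the simplified sketch below. This is not the actual siitperf implementation; the 8 Mfps upper limit and the 8 search steps mirror my setup, and the trial function is stubbed to always report a small loss.)

// Simplified sketch of an RFC 2544-style throughput binary search with a
// strict zero-loss criterion (NOT the actual siitperf implementation;
// the 8 Mfps upper limit and the 8 steps are taken from my test, and the
// "trial" is stubbed to always lose a few frames, whatever the rate is).
#include <cstdint>
#include <iostream>

// Stand-in for a 60 s trial at 'rate_fps'; here it pretends that every
// trial loses a handful of frames, regardless of the rate.
static bool trial_is_lossless(uint64_t /*rate_fps*/) {
    return false;
}

int main() {
    uint64_t passed = 0;          // highest rate that passed (none so far)
    uint64_t upper  = 8000000;    // upper limit of the search: 8 Mfps

    for (int step = 1; step <= 8; ++step) {
        uint64_t rate = (passed + upper) / 2;
        std::cout << "step " << step << ": testing " << rate << " fps\n";
        if (trial_is_lossless(rate))
            passed = rate;        // no loss: search higher
        else
            upper = rate;         // any loss at all: search lower
    }
    // With a constant small loss at every rate, 'upper' keeps halving
    // (4000000, 2000000, ..., 31250 fps) and the reported throughput stays 0.
    std::cout << "reported throughput: " << passed << " fps\n";
    return 0;
}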

Here is the current output:

---------------------------------------------------
2025-02-24 22:26:18.165766809 Iteration no. 1-8
---------------------------------------------------
Testing rate: 31250 fps.
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.0 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.1 (socket 0)
EAL: Probe PCI driver: net_mlx4 (15b3:1007) device: 0000:82:00.0 (socket 1)
EAL: No legacy callbacks, legacy socket not created
Info: Left port and Left Sender CPU core belong to the same NUMA node: 0
Info: Right port and Right Receiver CPU core belong to the same NUMA node: 0
Info: Right port and Right Sender CPU core belong to the same NUMA node: 0
Info: Left port and Left Receiver CPU core belong to the same NUMA node: 0
Info: Testing initiated at 2025-02-24 22:26:19
Info: Forward sender's sending took 59.9999682437 seconds.
Forward frames sent: 1875000
Info: Reverse sender's sending took 59.9999682300 seconds.
Reverse frames sent: 1875000
Reverse frames received: 1874963
Forward frames received: 1874958
Info: Test finished.
Forward: 1874958 frames were received from the required 1875000 frames
Reverse: 1874963 frames were received from the required 1875000 frames
TEST FAILED

Do you have any idea what is happening here?

Best regards,

Gábor
