From: Lukáš Šišmiš
To: users@dpdk.org
Date: Wed, 10 Jan 2024 14:29:33 +0100
Subject: Synchronized CPU stalls with 8 queues on Mellanox MLX5 NIC

Hi everyone,

for the past few weeks I have been trying to debug why independent application workers show the same access pattern to a Mellanox NIC. The application I am debugging is Suricata, and my primary debugging tool is Intel VTune. I am using 8 cores for packet processing, each with its own independent processing queue, and all application cores are on the same NUMA node. Importantly, this only happens on a Mellanox/NVIDIA NIC (currently MT2892 Family - mlx5) and NOT on an X710.

Suricata is compiled with DPDK. I tested two versions and reproduced the problem on both: master at 1dcf69b211 (https://github.com/OISF/suricata/) and a version with interrupt support (commit c822f66b - https://github.com/lukashino/suricata/commits/feat-power-saving-v4/). I have also tried various descriptor counts, but the problem remained the same.

For packet generation I use the TRex packet generator on an independent server in ASTF mode with the command "start -f astf/http_simple.py -m 6000". The traffic exchanged between the two TRex interfaces is mirrored on a switch to the Suricata interface, which yields roughly 4.6 Gbps of traffic. The traffic is a simple HTTP GET request, but each iteration produces new flows by incrementing an IP address, so RSS distributes the traffic evenly across all cores. The problem occurs at both 500 Mbps and 20 Gbps transmit rates.
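For illustration, each worker follows the usual DPDK pattern of polling its own RX queue. A minimal sketch of that pattern (simplified, not the actual Suricata code - the port id, queue assignment, and burst size are illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* One worker per lcore, each polling its own RX queue on port 0.
     * The queues are not shared, so in principle the workers are
     * fully independent of each other. */
    static int worker_main(void *arg)
    {
        const uint16_t queue_id = *(const uint16_t *)arg;
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(0, queue_id,
                                              bufs, BURST_SIZE);
            for (uint16_t i = 0; i < nb_rx; i++) {
                /* per-packet work: decode, flow lookup, detection */
                rte_pktmbuf_free(bufs[i]);
            }
        }
        return 0; /* never reached */
    }

Given this model, I would expect any per-worker slowdown to show up independently on each core, which is why the synchronized pattern described below surprises me.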
Below is a flame graph from one of the runs. I wonder why the CPUs alternate almost synchronously between no activity and some activity. The worker cores are denoted "W#0..." and fall into two alternating groups. The stalls are easiest to see in regions of high CPU activity, but they are present with low activity as well. The absolute level of CPU activity is not relevant here; I am only interested in the pattern of the stalls. It suggests contention on some shared resource, yet with a shared resource I would expect the workers to be blocked randomly, not paused synchronously.

I am debugging the application with interrupts enabled, but the same pattern occurs in poll mode. With poll mode active I filtered the mlx5 module activity out of the VTune result and could still see CPU pauses of 0.5 to 1 second across all cores.

DPDK, 8 cores, MLX5 NIC: https://imgur.com/a/TrZ9vIy

I profiled Suricata in other scenarios as well, and this pattern of complete CPU stalls does not happen anywhere else:

- AF_PACKET, 8 cores, MLX5 NIC - CPU activity is similar across cores, but the cores never pause: https://imgur.com/a/HIhDVyQ
- DPDK, 4 cores, MLX5 NIC: https://imgur.com/a/G0JVOXa
- DPDK, 9 cores, MLX5 NIC: https://imgur.com/a/IdHCruj
- DPDK, 8 cores, X710 NIC - no CPU stalls on worker cores: https://imgur.com/a/94KLCjE
- testpmd, 8 cores, MLX5 NIC - running in rxonly forwarding mode with 8 queues and 8 cores (see the invocation sketched after the signature). I filtered out the majority of the RX NIC functions and the CPUs still seem continuously active, though I am a bit skeptical about that result, since testpmd only receives and discards the traffic: https://imgur.com/a/UwHZzAr

So the issue seems tied to the combination of the MLX5 NIC and DPDK with exactly 8 cores: it works fine with AF_PACKET and with a lower or higher number of threads.

Does anybody have an idea why these CPU stalls occur in combination with 8 cores, or what else I could do to mitigate or better evaluate the problem?

Thanks in advance.
Lukas
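P.S. For reference, the testpmd run above was along these lines (a sketch from memory rather than the exact command; the PCI address is a placeholder for the MLX5 port):

    dpdk-testpmd -l 0-8 -n 4 -a 0000:xx:00.0 -- \
        --forward-mode=rxonly --rxq=8 --txq=8 --nb-cores=8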