Date: Sun, 5 Jan 2025 01:01:48 +0300
From: Dmitry Kozlyuk
To: Alan Beadle
Cc: users@dpdk.org
Subject: Re: Multiprocess App Problems with tx_burst
Message-ID: <20250105010148.1ef26333@sovereign>
References: <20250104214032.04eb6d25@sovereign>
List-Id: DPDK usage discussions

2025-01-04 14:16 (UTC-0500), Alan Beadle:
> I am setting up one tx queue and one rx queue via the primary process
> init code. The code here resembles the "basic forwarding" sample
> application (in the skeleton/ subdir).
> Please let me know whether it
> would be possible for each process to use entirely separate queues and
> still pass mbuf pointers around.

It is possible. Mbufs are in memory shared by all processes.
For example, the primary process can set up 2 Rx queues:
use queue 0 itself, and let the secondary process use queue 1.
Steering incoming packets to the right queue (or set of queues)
would be another question, however.

> I will explain the processes and lcores below. First of all though, my
> application uses several types of packets. There are handshake packets
> (acknacks and the like) and data packets. The data packets are
> addressed to specific subsets of my secondary processes (for now just
> 1 or 2 secondary processes exist per machine, but support for even
> more is in principle part of my design). Sometimes the data should
> also be read by other peer processes on the same machine (including
> the daemon/primary process) so I chose to make the mbufs readable
> instead of allocating a duplicate local buffer. It is important that
> mbuf pointers from one process will work in the others. Otherwise all
> of my data would need to be duplicated into non-DPDK shared buffers
> too.
>
> The first process is the "daemon". This is the primary process. It
> uses DPDK through my shared library (which uses DPDK internally, as
> explained above). The daemon just polls the NIC and periodically
> cleans up my non-DPDK data structures in shared memory. The intent is
> to rely on the daemon to watch for packets during periods of low
> activity and avoid unnecessary CPU usage. When a packet arrives it can
> wake the correct secondary process by finding a futex in shared memory
> for that process. On both machines the daemon is mapped to core 2 with
> the parameter "-l 2".
>
> The second process is the "server". It uses separate app code from the
> daemon but calls into the same library. Like the daemon, it receives
> and parses packets.
> The server can originate new data packets, and can
> also reply to inbound data packets with more data packets to be sent
> back to processes on the other machine. It sleeps on a shared futex
> during periods of inactivity. If there were additional secondary
> processes (as is the case on the other machine) it could wake them
> when packets arrive for those other processes, again using futexes in
> shared memory. On both machines this second process is mapped to core
> 4 with the parameter "-l 4".
>
> The other machine has another secondary process (a third process)
> which is on core 6 with "-l 6". For the purposes of this discussion,
> it behaves similarly to the server process above (sends, receives, and
> sometimes sleeps).

So, "daemon" and "server" may sometimes try to use the same queue, correct?
Synchronizing all access to the single queue should work in this case.

BTW, rte_eth_tx_burst() returning >0 does not mean the packets have been
sent. It only means they have been enqueued for sending. At some point the
NIC completes the send, and only then can the PMD free the mbuf (or
decrement its reference count). For most PMDs, this happens on a subsequent
call to rte_eth_tx_burst().

Which PMD and HW is it?
Have you tried printing as many stats as possible when rte_eth_tx_burst()
cannot consume all the packets (rte_eth_stats_get(), rte_eth_xstats_get())?
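To illustrate the kind of diagnostics I mean, here is a rough sketch
(assuming port 0 and Tx queue 0; the helper name dump_port_stats is made
up for this example, and error handling is minimal):

```c
#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: print basic and extended stats for a port. */
static void
dump_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64
		       " oerrors=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets, stats.oerrors);

	/* First call with NULL asks how many xstats exist. */
	int n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	struct rte_eth_xstat xstats[n];
	struct rte_eth_xstat_name names[n];
	if (rte_eth_xstats_get(port_id, xstats, n) != n ||
	    rte_eth_xstats_get_names(port_id, names, n) != n)
		return;
	for (int i = 0; i < n; i++)
		printf("%s=%" PRIu64 "\n", names[i].name, xstats[i].value);
}

/* Send a burst; dump stats if the PMD did not consume everything,
 * then retry only the unsent tail (already-enqueued mbufs now belong
 * to the PMD and must not be resent or freed). */
static void
send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, nb);

	if (sent < nb) {
		dump_port_stats(port_id);
		while (sent < nb)
			sent += rte_eth_tx_burst(port_id, 0,
						 pkts + sent, nb - sent);
	}
}
```

Note the retry loop also gives the PMD the subsequent tx_burst() calls it
needs to complete and free previously enqueued mbufs.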
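And for the separate-queues idea above, the primary-process setup could
look roughly like this (a sketch in the style of the skeleton example;
port_init, the ring sizes, and passing NULL for default queue configs are
illustrative assumptions):

```c
#include <rte_ethdev.h>
#include <rte_mempool.h>

#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024

/* Primary process configures 2 Rx queues on one port:
 * queue 0 is polled by the daemon itself,
 * queue 1 is left for the secondary process to poll. */
static int
port_init(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf port_conf = {0};
	int ret;

	ret = rte_eth_dev_configure(port_id, 2 /* Rx */, 1 /* Tx */,
				    &port_conf);
	if (ret != 0)
		return ret;

	for (uint16_t q = 0; q < 2; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, RX_RING_SIZE,
					     rte_eth_dev_socket_id(port_id),
					     NULL /* default conf */,
					     mbuf_pool);
		if (ret < 0)
			return ret;
	}

	ret = rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
				     rte_eth_dev_socket_id(port_id),
				     NULL /* default conf */);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}
```

The secondary process then calls rte_eth_rx_burst(port_id, 1, ...) on its
own queue; the mbuf pointers stay valid across processes because the
mempool lives in shared hugepage memory. How packets get steered to
queue 1 (RSS, rte_flow, etc.) is the separate question mentioned above.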