From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
Reshma Pattan <reshma.pattan@intel.com>
Subject: [PATCH v3 3/3] dumpcap: add lcores option
Date: Wed, 3 Jul 2024 08:45:45 -0700 [thread overview]
Message-ID: <20240703154705.19192-4-stephen@networkplumber.org> (raw)
In-Reply-To: <20240703154705.19192-1-stephen@networkplumber.org>
The dumpcap application reads from a ring and writes to
the kernel. By default, EAL init causes the main thread
to be bound to the first lcore (cpu 0). Add a command line
option to select the lcore to use; if no lcore is specified,
run as a normal process and let the CPU scheduler handle it.
Letting the scheduler decide is likely to work well for a
process doing I/O with the kernel.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/dumpcap/main.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index ba91ca94d0..cb2a439f79 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -38,6 +38,7 @@
#include <rte_pdump.h>
#include <rte_ring.h>
#include <rte_string_fns.h>
+#include <rte_thread.h>
#include <rte_time.h>
#include <rte_version.h>
@@ -60,6 +61,7 @@ static const char *tmp_dir = "/tmp";
static unsigned int ring_size = 2048;
static const char *capture_comment;
static const char *file_prefix;
+static const char *lcore_arg;
static bool dump_bpf;
static bool show_interfaces;
static bool print_stats;
@@ -143,6 +145,7 @@ static void usage(void)
" (default: /tmp)\n"
"\n"
"Miscellaneous:\n"
+ " --lcore=<core> cpu core to run on (default: any)\n"
" --file-prefix=<prefix> prefix to use for multi-process\n"
" -q don't report packet capture counts\n"
" -v, --version print version information and exit\n"
@@ -343,6 +346,7 @@ static void parse_opts(int argc, char **argv)
{ "ifdescr", required_argument, NULL, 0 },
{ "ifname", required_argument, NULL, 0 },
{ "interface", required_argument, NULL, 'i' },
+ { "lcore", required_argument, NULL, 0 },
{ "list-interfaces", no_argument, NULL, 'D' },
{ "no-promiscuous-mode", no_argument, NULL, 'p' },
{ "output-file", required_argument, NULL, 'w' },
@@ -369,6 +373,8 @@ static void parse_opts(int argc, char **argv)
if (!strcmp(longopt, "capture-comment")) {
capture_comment = optarg;
+ } else if (!strcmp(longopt, "lcore")) {
+ lcore_arg = optarg;
} else if (!strcmp(longopt, "file-prefix")) {
file_prefix = optarg;
} else if (!strcmp(longopt, "temp-dir")) {
@@ -608,12 +614,16 @@ static void dpdk_init(void)
"--log-level", "notice"
};
int eal_argc = RTE_DIM(args);
+ rte_cpuset_t cpuset = { };
char **eal_argv;
unsigned int i;
if (file_prefix != NULL)
eal_argc += 2;
+ if (lcore_arg != NULL)
+ eal_argc += 2;
+
/* DPDK API requires mutable versions of command line arguments. */
eal_argv = calloc(eal_argc + 1, sizeof(char *));
if (eal_argv == NULL)
@@ -623,6 +633,11 @@ static void dpdk_init(void)
for (i = 1; i < RTE_DIM(args); i++)
eal_argv[i] = strdup(args[i]);
+ if (lcore_arg != NULL) {
+ eal_argv[i++] = strdup("--lcores");
+ eal_argv[i++] = strdup(lcore_arg);
+ }
+
if (file_prefix != NULL) {
eal_argv[i++] = strdup("--file-prefix");
eal_argv[i++] = strdup(file_prefix);
@@ -633,8 +648,24 @@ static void dpdk_init(void)
rte_panic("No memory\n");
}
+ /*
+ * Need to get the original cpuset, before EAL init changes
+ * the affinity of this thread (main lcore).
+ */
+ if (lcore_arg == NULL &&
+ rte_thread_get_affinity_by_id(rte_thread_self(), &cpuset) != 0)
+ rte_panic("rte_thread_get_affinity_by_id failed\n");
+
if (rte_eal_init(eal_argc, eal_argv) < 0)
rte_exit(EXIT_FAILURE, "EAL init failed: is primary process running?\n");
+
+ /*
+ * If no lcore argument was specified, then run this program as a normal process
+ * which can be scheduled on any non-isolated CPU.
+ */
+ if (lcore_arg == NULL &&
+ rte_thread_set_affinity_by_id(rte_thread_self(), &cpuset) != 0)
+ rte_exit(EXIT_FAILURE, "Can not restore original cpu affinity\n");
}
/* Create packet ring shared between callbacks and process */
--
2.43.0
Thread overview: 13+ messages
2024-02-26 20:49 [PATCH 0/2] dumpcap,pdump handle cleanup signals Stephen Hemminger
2024-02-26 20:49 ` [PATCH 1/2] app/dumpcap: handle SIGTERM and SIGHUP Stephen Hemminger
2024-02-26 20:49 ` [PATCH 2/2] app/pdump: " Stephen Hemminger
2024-02-27 9:59 ` Pattan, Reshma
2024-02-27 18:09 ` Stephen Hemminger
2024-05-29 16:08 ` [PATCH v2 0/2] Fix pdump and dumpcap leaks on SIGTERM Stephen Hemminger
2024-05-29 16:08 ` [PATCH v2 1/2] app/dumpcap: handle SIGTERM and SIGHUP Stephen Hemminger
2024-05-29 16:08 ` [PATCH v2 2/2] app/pdump: " Stephen Hemminger
2024-07-03 15:45 ` [PATCH v3 0/3] dumpcap and pdump patches for 24.07 Stephen Hemminger
2024-07-03 15:45 ` [PATCH v3 1/3] app/dumpcap: handle SIGTERM and SIGHUP Stephen Hemminger
2024-07-03 15:45 ` [PATCH v3 2/3] app/pdump: " Stephen Hemminger
2024-07-03 15:45 ` Stephen Hemminger [this message]
2024-07-23 13:22 ` [PATCH v3 0/3] dumpcap and pdump patches for 24.07 Thomas Monjalon