From: Maayan Kashani <mkashani@nvidia.com>
To: <dev@dpdk.org>
Cc: <mkashani@nvidia.com>, <dsosnowski@nvidia.com>,
<rasland@nvidia.com>, "Aman Singh" <aman.deep.singh@intel.com>,
Thomas Monjalon <thomas@monjalon.net>,
Ferruh Yigit <ferruh.yigit@amd.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Reshma Pattan <reshma.pattan@intel.com>,
Stephen Hemminger <stephen@networkplumber.org>
Subject: [PATCH] app/testpmd: cross NUMA support
Date: Tue, 17 Jun 2025 18:53:50 +0300
Message-ID: <20250617155351.190622-1-mkashani@nvidia.com>

Cross-NUMA support means that if the current NUMA node is out of
memory, memory from another available NUMA node is used instead.

Replace socket-id-specific initializations with SOCKET_ID_ANY, which
is needed for testpmd initialization when the --no-numa flag is set.

Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
---
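Note (not part of the commit message): a minimal sketch of the
cross-NUMA fallback idea, assuming an EAL-initialized process. The
helper name, pool names and sizes below are illustrative, not taken
from testpmd:

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Hypothetical helper: prefer the local NUMA node, fall back to
     * any node that still has free memory. */
    static struct rte_mempool *
    mbuf_pool_create_any(unsigned int nb_mbufs)
    {
        struct rte_mempool *mp;

        /* First try the calling core's NUMA node. */
        mp = rte_pktmbuf_pool_create("mb_pool_local", nb_mbufs,
                256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                (int)rte_socket_id());
        if (mp != NULL)
            return mp;

        /* Local node exhausted: let EAL allocate from whichever
         * NUMA node still has free hugepages. */
        return rte_pktmbuf_pool_create("mb_pool_any", nb_mbufs,
                256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
    }

With SOCKET_ID_ANY, EAL itself picks a node with available memory, so
the explicit local-first attempt above is only needed when local
affinity is still preferred over a hard failure.
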
app/test-pmd/testpmd.c | 8 ++++----
lib/ethdev/ethdev_private.c | 2 +-
lib/pdump/rte_pdump.c | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
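
The ethdev and pdump hunks below rely on the same semantics in
rte_memzone_reserve(); a hedged sketch (the zone name and size are
illustrative, not from either library):

    #include <rte_memzone.h>

    static const struct rte_memzone *
    reserve_shared_zone(void)
    {
        /* SOCKET_ID_ANY (-1) tells EAL to reserve from whichever
         * NUMA node has room, instead of failing when the calling
         * core's node is exhausted. On failure NULL is returned and
         * rte_errno is set (e.g. ENOMEM). */
        return rte_memzone_reserve("example_zone", 4096,
                SOCKET_ID_ANY, 0);
    }
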
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index b00c93c4536..66ebdee6978 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1729,8 +1729,7 @@ init_config(void)
mempools[i] = mbuf_pool_create
(mbuf_data_size[i],
nb_mbuf_per_pool,
- socket_num == UMA_NO_CONFIG ?
- 0 : socket_num, i);
+ SOCKET_ID_ANY, i);
}
init_port_config();
@@ -3058,7 +3057,8 @@ start_port(portid_t pid)
} else {
struct rte_mempool *mp =
mbuf_pool_find
- (port->socket_id, 0);
+ ((numa_support ? port->socket_id :
+ (unsigned int)SOCKET_ID_ANY), 0);
if (mp == NULL) {
fprintf(stderr,
"Failed to setup RX queue: No mempool allocation on the socket %d\n",
@@ -4484,7 +4484,7 @@ main(int argc, char** argv)
#ifdef RTE_LIB_METRICS
/* Init metrics library */
- rte_metrics_init(rte_socket_id());
+ rte_metrics_init(SOCKET_ID_ANY);
#endif
#ifdef RTE_LIB_LATENCYSTATS
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index b96d992ea12..285d377d91f 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -343,7 +343,7 @@ eth_dev_shared_data_prepare(void)
/* Allocate port data and ownership shared memory. */
mz = rte_memzone_reserve(MZ_RTE_ETH_DEV_DATA,
sizeof(*eth_dev_shared_data),
- rte_socket_id(), flags);
+ SOCKET_ID_ANY, flags);
if (mz == NULL) {
RTE_ETHDEV_LOG_LINE(ERR, "Cannot allocate ethdev shared data");
goto out;
diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
index 79b20ce59b2..ba75b828f2e 100644
--- a/lib/pdump/rte_pdump.c
+++ b/lib/pdump/rte_pdump.c
@@ -426,7 +426,7 @@ rte_pdump_init(void)
int ret;
mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats),
- rte_socket_id(), 0);
+ SOCKET_ID_ANY, 0);
if (mz == NULL) {
PDUMP_LOG_LINE(ERR, "cannot allocate pdump statistics");
rte_errno = ENOMEM;
--
2.21.0