DPDK patches and discussions
From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <mkashani@nvidia.com>,
	<rasland@nvidia.com>, Dariusz Sosnowski <dsosnowski@nvidia.com>,
	Aman Singh <aman.deep.singh@intel.com>,
	Shani Peretz <shperetz@nvidia.com>
Subject: [PATCH] app/testpmd: fix devargs format in port attach
Date: Thu, 30 Oct 2025 11:20:15 +0200
Message-ID: <20251030092016.226974-1-getelson@nvidia.com>

The port attach procedure discarded the PCI device devargs provided
by the application: e.g. "0000:08:00.0,dv_flow_en=1" was reduced to
the canonical PCI name "0000:08:00.0", dropping the ",dv_flow_en=1"
devargs.
The patch restores the PCI devargs after the address conversion.

Fixes: 12c2405989f6 ("app/testpmd: canonicalize short PCI name format")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/testpmd.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 2360da3a48..cc384f0b14 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -3413,7 +3413,7 @@ reset_port(portid_t pid)
 }
 
 static char *
-convert_pci_address_format(const char *identifier, char *pci_buffer, size_t buf_size)
+convert_pci_address_format(const char *identifier, char *pci_buffer)
 {
 	struct rte_devargs da;
 	struct rte_pci_addr pci_addr;
@@ -3430,7 +3430,8 @@ convert_pci_address_format(const char *identifier, char *pci_buffer, size_t buf_
 	if (rte_pci_addr_parse(da.name, &pci_addr) != 0)
 		return NULL;
 
-	rte_pci_device_name(&pci_addr, pci_buffer, buf_size);
+	rte_pci_device_name(&pci_addr, pci_buffer, PCI_PRI_STR_SIZE);
+	sprintf(pci_buffer + strlen(pci_buffer), ",%s", da.args);
 	return pci_buffer;
 }
 
@@ -3439,8 +3440,7 @@ attach_port(char *identifier)
 {
 	portid_t pi;
 	struct rte_dev_iterator iterator;
-	char *long_identifier;
-	char long_format[PCI_PRI_STR_SIZE];
+	char *long_format, *long_identifier;
 
 	printf("Attaching a new port...\n");
 
@@ -3448,9 +3448,14 @@ attach_port(char *identifier)
 		fprintf(stderr, "Invalid parameters are specified\n");
 		return;
 	}
+	long_format = alloca(strlen(identifier) + PCI_PRI_STR_SIZE);
+	if (long_format == NULL) {
+		TESTPMD_LOG(ERR, "Failed to attach port %s - allocation failure\n", identifier);
+		return;
+	}
 
 	/* For PCI device convert to canonical format */
-	long_identifier = convert_pci_address_format(identifier, long_format, sizeof(long_format));
+	long_identifier = convert_pci_address_format(identifier, long_format);
 	if (long_identifier != NULL)
 		identifier = long_identifier;
 
-- 
2.51.0
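
For reference, a minimal standalone sketch of the conversion the patch
performs, assuming DPDK's rte_devargs.h/rte_pci.h APIs; the helper name
pci_canonical_with_devargs() and the bounds-checked snprintf() are
illustrative, not part of the patch:

  #include <stdio.h>
  #include <string.h>

  #include <rte_devargs.h>
  #include <rte_pci.h>

  /*
   * Convert "identifier" to the canonical PCI name and re-append any
   * ",key=value" devargs. Returns "out" on success, or NULL when the
   * identifier does not name a PCI device.
   */
  static char *
  pci_canonical_with_devargs(const char *identifier, char *out, size_t out_size)
  {
  	struct rte_devargs da;
  	struct rte_pci_addr addr;
  	size_t len;

  	memset(&da, 0, sizeof(da));
  	if (rte_devargs_parse(&da, identifier) != 0)
  		return NULL;
  	if (rte_pci_addr_parse(da.name, &addr) != 0) {
  		rte_devargs_reset(&da);
  		return NULL;
  	}
  	/* Canonical "DDDD:BB:DD.F" form of the address. */
  	rte_pci_device_name(&addr, out, out_size);
  	/* Re-attach the devargs that the pre-patch code dropped. */
  	len = strlen(out);
  	if (da.args != NULL && da.args[0] != '\0')
  		snprintf(out + len, out_size - len, ",%s", da.args);
  	rte_devargs_reset(&da);
  	return out;
  }

With a buffer of strlen(identifier) + PCI_PRI_STR_SIZE bytes, as the
patch allocates, the snprintf() cannot truncate because da.args is a
substring of the original identifier. Note that alloca() has no failure
reporting and never returns NULL, so the patch's allocation-failure
branch cannot trigger.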


Thread overview: 5+ messages
2025-10-30  9:20 Gregory Etelson [this message]
2025-10-30 15:53 ` Stephen Hemminger
2025-10-30 17:17 ` [PATCH v2] " Gregory Etelson
2025-10-30 17:47   ` Stephen Hemminger
2025-10-31  6:18 ` [PATCH v3] " Gregory Etelson
