DPDK patches and discussions
From: Pravin Pathak <pravin.pathak@intel.com>
To: dev@dpdk.org
Cc: jerinj@marvell.com, mike.ximing.chen@intel.com,
	bruce.richardson@intel.com, thomas@monjalon.net,
	david.marchand@redhat.com, nipun.gupta@amd.com,
	chenbox@nvidia.com, tirthendu.sarkar@intel.com,
	Pravin Pathak <pravin.pathak@intel.com>
Subject: [PATCH v1 3/7] event/dlb2: return 96 single link ports for DLB2.5
Date: Thu,  8 May 2025 23:23:57 -0500
Message-ID: <20250509042401.2634765-4-pravin.pathak@intel.com>
In-Reply-To: <20250509042401.2634765-1-pravin.pathak@intel.com>

The DLB 2.0 device has 64 single-link (directed) ports, while the
DLB 2.5 device has 96 single-link ports.
This commit fixes the issue of rte_event_dev_info_get() returning 64
instead of 96 single-link ports for DLB 2.5.
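
For reference, a minimal sketch (not part of this patch, and assuming a
DLB 2.5 device already probed as eventdev 0 via the EAL arguments) of how
an application would observe the corrected value through the public
eventdev API:

  #include <stdio.h>
  #include <rte_eal.h>
  #include <rte_eventdev.h>

  int
  main(int argc, char **argv)
  {
          struct rte_event_dev_info info;

          /* EAL arguments are assumed to select a DLB 2.5 device. */
          if (rte_eal_init(argc, argv) < 0)
                  return -1;

          if (rte_event_dev_info_get(0, &info) != 0)
                  return -1;

          /* With this fix, a DLB 2.5 device reports 96 here. */
          printf("max single link port-queue pairs: %u\n",
                 info.max_single_link_event_port_queue_pairs);

          return 0;
  }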

Signed-off-by: Pravin Pathak <pravin.pathak@intel.com>
---
 drivers/event/dlb2/dlb2.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 58eb27f495..24c56a7968 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -241,16 +241,16 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	 * The capabilities (CAPs) were set at compile time.
 	 */
 
-	if (dlb2->max_cq_depth != DLB2_DEFAULT_CQ_DEPTH)
-		num_ldb_ports = DLB2_MAX_HL_ENTRIES / dlb2->max_cq_depth;
-	else
-		num_ldb_ports = dlb2->hw_rsrc_query_results.num_ldb_ports;
+	num_ldb_ports = dlb2->hw_rsrc_query_results.num_ldb_ports;
 
 	evdev_dlb2_default_info.max_event_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues;
 
 	evdev_dlb2_default_info.max_event_ports = num_ldb_ports;
 
+	evdev_dlb2_default_info.max_single_link_event_port_queue_pairs =
+		dlb2->hw_rsrc_query_results.num_dir_ports;
+
 	if (dlb2->version == DLB2_HW_V2_5) {
 		evdev_dlb2_default_info.max_num_events =
 			dlb2->hw_rsrc_query_results.num_credits;
-- 
2.25.1



Thread overview: 8+ messages
2025-05-09  4:23 [PATCH v1 0/7] event/dlb2: dlb2 hw resource management Pravin Pathak
2025-05-09  4:23 ` [PATCH v1 1/7] event/dlb2: addresses deq failure when CQ depth <= 16 Pravin Pathak
2025-05-09  4:23 ` [PATCH v1 2/7] event/dlb2: changes to correctly validate COS ID arguments Pravin Pathak
2025-05-09  4:23 ` Pravin Pathak [this message]
2025-05-09  4:23 ` [PATCH v1 4/7] event/dlb2: support managing history list resource Pravin Pathak
2025-05-09  4:23 ` [PATCH v1 5/7] event/dlb2: avoid credit release race condition Pravin Pathak
2025-05-09  4:24 ` [PATCH v1 6/7] event/dlb2: update qid depth xstat in vector path Pravin Pathak
2025-05-09  4:24 ` [PATCH v1 7/7] event/dlb2: fix default credits in dlb2_eventdev_info_get() Pravin Pathak
