From mboxrd@z Thu Jan  1 00:00:00 1970
From: Abdullah Sevincer <abdullah.sevincer@intel.com>
To: dev@dpdk.org
Cc: jerinj@marvell.com, mike.ximing.chen@intel.com,
	tirthendu.sarkar@intel.com, pravin.pathak@intel.com,
	Abdullah Sevincer <abdullah.sevincer@intel.com>
Subject: [PATCH v6 2/3] event/dlb2: add support for dynamic HL entries
Date: Thu, 11 Jul 2024 19:17:22 -0500
Message-Id: <20240712001723.2791461-3-abdullah.sevincer@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240712001723.2791461-1-abdullah.sevincer@intel.com>
References: <20240619210106.253239-4-abdullah.sevincer@intel.com>
	<20240712001723.2791461-1-abdullah.sevincer@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

DLB has 64 LDB ports and 2048 HL (History List) entries. If all LDB
ports are used, the available HL entries per LDB port equal
2048 / 64 = 32. Since the high-performance recommendation is to size
each port's HL at twice its CQ depth, the maximum possible CQ depth is
16 when all 64 LDB ports are needed.

If all CQs are configured with HL = 2 * CQ depth as a performance
option, the HL calculation at domain creation time is based on the
maximum possible dequeue depth. This can allocate too many HL entries
to the domain, as the DLB has only a limited number of HL entries.
Hence, it is best to let the application specify the number of HL
entries as a command line argument and override the default
allocation. A summary of usage is listed below.

When 'use_default_hl = 1', the per-port HL is set to
DLB2_FIXED_CQ_HL_SIZE (32) and the command line parameter
'alloc_hl_entries' is ignored.

When 'use_default_hl = 0', each LDB port's HL is sized to 2 * CQ depth
and the default per-port HL is set to 2 * DLB2_FIXED_CQ_HL_SIZE (64).
The user should calculate the required HL entries from the CQ depths
the application will use and pass that value through the command line
parameter 'alloc_hl_entries', which is then used to allocate HL
entries for the domain. Hence,
alloc_hl_entries = (sum of all LDB ports' CQ depths) * 2.
If 'alloc_hl_entries' is not specified, the total HL entries for the
vdev = num_ldb_ports * 64.
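
As a worked sizing example (the numbers are hypothetical and only
illustrate the formula above): an application that creates 16 LDB
ports, each with a CQ depth of 64, would pass use_default_hl=0 and
alloc_hl_entries = 16 * 64 * 2 = 2048, which consumes the full
DLB2_MAX_HL_ENTRIES (2048) budget. Larger values are rejected by the
'alloc_hl_entries' devarg parser, and a request that exceeds the HL
entries reported by the hardware fails at probe time.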
Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
---
 doc/guides/eventdevs/dlb2.rst          |  37 +++++++
 doc/guides/rel_notes/release_24_07.rst |   4 +
 drivers/event/dlb2/dlb2.c              | 130 +++++++++++++++++++++++--
 drivers/event/dlb2/dlb2_priv.h         |  12 ++-
 drivers/event/dlb2/pf/dlb2_pf.c        |   7 +-
 drivers/event/dlb2/rte_pmd_dlb2.h      |   1 +
 6 files changed, 179 insertions(+), 12 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 2532d92888..fb920d6648 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -456,6 +456,43 @@ Example command to enable QE Weight feature:
 
        --allow ea:00.0,enable_cq_weight=
 
+Dynamic History List Entries
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DLB has 64 LDB ports and 2048 HL entries. If all LDB ports are used,
+the available HL entries per LDB port equal 2048 / 64 = 32. So the
+maximum possible CQ depth is 16 if all 64 LDB ports are needed in a
+high-performance setting.
+
+If all CQs are configured with HL = 2 * CQ depth as a performance
+option, the HL calculation at domain creation time is based on the
+maximum possible dequeue depth. This could allocate too many HL
+entries to the domain, as the DLB has only a limited number of HL
+entries available. Hence, it is best to allow the application to
+specify the number of HL entries as a command line argument and
+override the default allocation. A summary of usage is listed below:
+
+When ``use_default_hl = 1``, the per-port HL is set to
+DLB2_FIXED_CQ_HL_SIZE (32) and the command line parameter
+``alloc_hl_entries`` is ignored.
+
+When ``use_default_hl = 0``, each LDB port's HL is sized to 2 * CQ
+depth and the default per-port HL is set to 2 * DLB2_FIXED_CQ_HL_SIZE.
+
+Users should calculate the required HL entries from the CQ depths the
+application will use and pass that value as the command line parameter
+``alloc_hl_entries``, which is used to allocate HL entries.
+Hence, alloc_hl_entries = (sum of all LDB ports' CQ depths) * 2.
+
+If ``alloc_hl_entries`` is not specified, the total HL entries for the
+vdev = num_ldb_ports * 64.
+
+Example command to use the dynamic history list entries feature:
+
+    .. code-block:: console
+
+       --allow ea:00.0,use_default_hl=0,alloc_hl_entries=1024
+
 Running Eventdev Applications with DLB Device
 ---------------------------------------------
 
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 858db48547..4f587dd47c 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -166,6 +166,10 @@ New Features
     when the number of pending completions fall below a configured
     threshold.
 
+  * Introduced dynamic HL (History List) feature for the DLB device.
+    History list entries can be configured dynamically by passing the
+    parameters ``use_default_hl`` and ``alloc_hl_entries``.
+
 
 Removed Items
 -------------
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 70e4289097..837c0639a3 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -180,10 +180,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	 * The capabilities (CAPs) were set at compile time.
 	 */
 
-	if (dlb2->max_cq_depth != DLB2_DEFAULT_CQ_DEPTH)
-		num_ldb_ports = DLB2_MAX_HL_ENTRIES / dlb2->max_cq_depth;
-	else
-		num_ldb_ports = dlb2->hw_rsrc_query_results.num_ldb_ports;
+	num_ldb_ports = dlb2->hw_rsrc_query_results.num_ldb_ports;
 
 	evdev_dlb2_default_info.max_event_queues =
 		dlb2->hw_rsrc_query_results.num_ldb_queues;
@@ -631,6 +628,52 @@ set_enable_cq_weight(const char *key __rte_unused,
 	return 0;
 }
 
+static int set_hl_override(const char *key __rte_unused,
+			   const char *value,
+			   void *opaque)
+{
+	bool *default_hl = opaque;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	if ((*value == 'n') || (*value == 'N') || (*value == '0'))
+		*default_hl = false;
+	else
+		*default_hl = true;
+
+	return 0;
+}
+
+static int set_hl_entries(const char *key __rte_unused,
+			  const char *value,
+			  void *opaque)
+{
+	int hl_entries = 0;
+	int ret;
+
+	if (value == NULL || opaque == NULL) {
+		DLB2_LOG_ERR("NULL pointer\n");
+		return -EINVAL;
+	}
+
+	ret = dlb2_string_to_int(&hl_entries, value);
+	if (ret < 0)
+		return ret;
+
+	if ((uint32_t)hl_entries > DLB2_MAX_HL_ENTRIES) {
+		DLB2_LOG_ERR(
+			"alloc_hl_entries %u out of range, must be in [1 - %d]\n",
+			hl_entries, DLB2_MAX_HL_ENTRIES);
+		return -EINVAL;
+	}
+	*(uint32_t *)opaque = hl_entries;
+
+	return 0;
+}
+
 static int
 set_qid_depth_thresh(const char *key __rte_unused,
 		     const char *value,
@@ -828,8 +871,19 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
 		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
 		cfg->num_ldb_queues;
 
-	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
-		evdev_dlb2_default_info.max_event_port_dequeue_depth;
+	/* If hl_entries is non-zero then user specified command line option.
+	 * Else compute using default_port_hl that has been set earlier based
+	 * on use_default_hl option
+	 */
+	if (dlb2->hl_entries) {
+		cfg->num_hist_list_entries = dlb2->hl_entries;
+		if (resources_asked->num_ldb_ports)
+			dlb2->default_port_hl = cfg->num_hist_list_entries /
+				resources_asked->num_ldb_ports;
+	} else {
+		cfg->num_hist_list_entries =
+			resources_asked->num_ldb_ports * dlb2->default_port_hl;
+	}
 
 	if (device_version == DLB2_HW_V2_5) {
 		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
@@ -1041,7 +1095,7 @@ dlb2_eventdev_port_default_conf_get(struct rte_eventdev *dev,
 	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
 
 	port_conf->new_event_threshold = dlb2->new_event_limit;
-	port_conf->dequeue_depth = 32;
+	port_conf->dequeue_depth = dlb2->default_port_hl / 2;
 	port_conf->enqueue_depth = DLB2_MAX_ENQUEUE_DEPTH;
 	port_conf->event_port_cfg = 0;
 }
@@ -1560,9 +1614,18 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	if (dlb2->version == DLB2_HW_V2_5 && qm_port->enable_inflight_ctrl) {
 		cfg.enable_inflight_ctrl = 1;
 		cfg.inflight_threshold = qm_port->inflight_threshold;
+		if (!qm_port->hist_list)
+			qm_port->hist_list = cfg.cq_depth;
 	}
 
-	cfg.cq_history_list_size = cfg.cq_depth;
+	if (qm_port->hist_list)
+		cfg.cq_history_list_size = qm_port->hist_list;
+	else if (cfg.enable_inflight_ctrl)
+		cfg.cq_history_list_size = RTE_MIN(cfg.cq_depth, dlb2->default_port_hl);
+	else if (dlb2->default_port_hl == DLB2_FIXED_CQ_HL_SIZE)
+		cfg.cq_history_list_size = DLB2_FIXED_CQ_HL_SIZE;
+	else
+		cfg.cq_history_list_size = cfg.cq_depth * 2;
 
 	cfg.cos_id = ev_port->cos_id;
 	cfg.cos_strict = 0;/* best effots */
@@ -4365,6 +4428,13 @@ dlb2_set_port_params(struct dlb2_eventdev *dlb2,
 			return -EINVAL;
 		}
 		break;
+	case RTE_PMD_DLB2_SET_PORT_HL:
+		if (dlb2->ev_ports[port_id].setup_done) {
+			DLB2_LOG_ERR("DLB2_SET_PORT_HL must be called before setting up port\n");
+			return -EINVAL;
+		}
+		port->hist_list = params->port_hl;
+		break;
 	default:
 		DLB2_LOG_ERR("dlb2: Unsupported flag\n");
 		return -EINVAL;
@@ -4683,6 +4753,28 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 		return err;
 	}
 
+	if (dlb2_args->use_default_hl) {
+		dlb2->default_port_hl = DLB2_FIXED_CQ_HL_SIZE;
+		if (dlb2_args->alloc_hl_entries)
+			DLB2_LOG_ERR(": Ignoring 'alloc_hl_entries' and using "
+				     "default history list sizes for eventdev:"
+				     " %s\n", dev->data->name);
+		dlb2->hl_entries = 0;
+	} else {
+		dlb2->default_port_hl = 2 * DLB2_FIXED_CQ_HL_SIZE;
+
+		if (dlb2_args->alloc_hl_entries >
+		    dlb2->hw_rsrc_query_results.num_hist_list_entries) {
+			DLB2_LOG_ERR(": Insufficient HL entries asked=%d "
+				     "available=%d for eventdev: %s\n",
+				     dlb2_args->alloc_hl_entries,
+				     dlb2->hw_rsrc_query_results.num_hist_list_entries,
+				     dev->data->name);
+			return -EINVAL;
+		}
+		dlb2->hl_entries = dlb2_args->alloc_hl_entries;
+	}
+
 	dlb2_iface_hardware_init(&dlb2->qm_instance);
 
 	/* configure class of service */
@@ -4790,6 +4882,8 @@ dlb2_parse_params(const char *params,
 					     DLB2_PRODUCER_COREMASK,
 					     DLB2_DEFAULT_LDB_PORT_ALLOCATION_ARG,
 					     DLB2_ENABLE_CQ_WEIGHT_ARG,
+					     DLB2_USE_DEFAULT_HL,
+					     DLB2_ALLOC_HL_ENTRIES,
 					     NULL };
 
 	if (params != NULL && params[0] != '\0') {
@@ -4993,6 +5087,26 @@ dlb2_parse_params(const char *params,
 			return ret;
 		}
 
+		ret = rte_kvargs_process(kvlist, DLB2_USE_DEFAULT_HL,
+					 set_hl_override,
+					 &dlb2_args->use_default_hl);
+		if (ret != 0) {
+			DLB2_LOG_ERR("%s: Error parsing use_default_hl arg",
+				     name);
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+
+		ret = rte_kvargs_process(kvlist,
+					 DLB2_ALLOC_HL_ENTRIES,
+					 set_hl_entries,
+					 &dlb2_args->alloc_hl_entries);
+		if (ret != 0) {
+			DLB2_LOG_ERR("%s: Error parsing alloc_hl_entries arg",
+				     name);
+			rte_kvargs_free(kvlist);
+			return ret;
+		}
+
 		rte_kvargs_free(kvlist);
 	}
 }
diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h
index bd11c0facf..e7ed27251e 100644
--- a/drivers/event/dlb2/dlb2_priv.h
+++ b/drivers/event/dlb2/dlb2_priv.h
@@ -52,6 +52,8 @@
 #define DLB2_PRODUCER_COREMASK "producer_coremask"
 #define DLB2_DEFAULT_LDB_PORT_ALLOCATION_ARG "default_port_allocation"
 #define DLB2_ENABLE_CQ_WEIGHT_ARG "enable_cq_weight"
+#define DLB2_USE_DEFAULT_HL "use_default_hl"
+#define DLB2_ALLOC_HL_ENTRIES "alloc_hl_entries"
 
 /* Begin HW related defines and structs */
 
@@ -101,7 +103,8 @@
  */
 #define DLB2_MAX_HL_ENTRIES 2048
 #define DLB2_MIN_CQ_DEPTH 1
-#define DLB2_DEFAULT_CQ_DEPTH 32
+#define DLB2_DEFAULT_CQ_DEPTH 128 /* Can be overridden using max_cq_depth command line parameter */
+#define DLB2_FIXED_CQ_HL_SIZE 32 /* Used when use_default_hl is set */
 #define DLB2_MIN_HARDWARE_CQ_DEPTH 8
 #define DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT \
 	DLB2_DEFAULT_CQ_DEPTH
@@ -123,7 +126,7 @@
 
 #define DLB2_NUM_QES_PER_CACHE_LINE 4
 
-#define DLB2_MAX_ENQUEUE_DEPTH 32
+#define DLB2_MAX_ENQUEUE_DEPTH 128
 #define DLB2_MIN_ENQUEUE_DEPTH 4
 
 #define DLB2_NAME_SIZE 64
@@ -391,6 +394,7 @@ struct dlb2_port {
 	bool is_producer; /* True if port is of type producer */
 	uint16_t inflight_threshold; /* DLB2.5 HW inflight threshold */
 	bool enable_inflight_ctrl; /*DLB2.5 enable HW inflight control */
+	uint16_t hist_list; /* Port history list */
 };
 
 /* Per-process per-port mmio and memory pointers */
@@ -637,6 +641,8 @@ struct dlb2_eventdev {
 	uint32_t cos_bw[DLB2_COS_NUM_VALS]; /* bandwidth per cos domain */
 	uint8_t max_cos_port; /* Max LDB port from any cos */
 	bool enable_cq_weight;
+	uint16_t hl_entries; /* Num HL entries to allocate for the domain */
+	int default_port_hl; /* Fixed or dynamic (2 * CQ depth) HL assignment */
 };
 
 /* used for collecting and passing around the dev args */
@@ -675,6 +681,8 @@ struct dlb2_devargs {
 	const char *producer_coremask;
 	bool default_ldb_port_allocation;
 	bool enable_cq_weight;
+	bool use_default_hl;
+	uint32_t alloc_hl_entries;
 };
 
 /* End Eventdev related defines and structs */
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 249ed7ede9..137bdfd656 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -422,6 +422,8 @@ dlb2_pf_dir_port_create(struct dlb2_hw_dev *handle,
 				    cfg,
 				    cq_base,
 				    &response);
+
+	cfg->response = response;
 	if (ret)
 		goto create_port_err;
 
@@ -437,7 +439,6 @@ dlb2_pf_dir_port_create(struct dlb2_hw_dev *handle,
 
 	dlb2_list_init_head(&port_memory.list);
 
-	cfg->response = response;
 
 	return 0;
 
@@ -731,7 +732,9 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 			.hw_credit_quanta = DLB2_SW_CREDIT_BATCH_SZ,
 			.default_depth_thresh = DLB2_DEPTH_THRESH_DEFAULT,
 			.max_cq_depth = DLB2_DEFAULT_CQ_DEPTH,
-			.max_enq_depth = DLB2_MAX_ENQUEUE_DEPTH
+			.max_enq_depth = DLB2_MAX_ENQUEUE_DEPTH,
+			.use_default_hl = true,
+			.alloc_hl_entries = 0
 		};
 	struct dlb2_eventdev *dlb2;
 	int q;
diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb2/rte_pmd_dlb2.h
index 027ac7413c..53556cb7ad 100644
--- a/drivers/event/dlb2/rte_pmd_dlb2.h
+++ b/drivers/event/dlb2/rte_pmd_dlb2.h
@@ -82,6 +82,7 @@ rte_pmd_dlb2_set_token_pop_mode(uint8_t dev_id,
  */
 struct rte_pmd_dlb2_port_params {
 	uint16_t inflight_threshold : 12;
+	uint16_t port_hl;
 };
 
 /*!
-- 
2.25.1
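
A minimal usage sketch of the new per-port history list override
follows. It assumes the per-port parameter setter introduced earlier
in this series is exposed as rte_pmd_dlb2_set_port_param(dev_id,
port_id, flags, params); that name and signature are an assumption,
not part of this patch. Only RTE_PMD_DLB2_SET_PORT_HL,
struct rte_pmd_dlb2_port_params, and its port_hl field come from this
patch.

#include <rte_eventdev.h>
#include <rte_pmd_dlb2.h>

static int
setup_ldb_port_with_hl(uint8_t dev_id, uint8_t port_id, uint16_t cq_depth)
{
	struct rte_event_port_conf conf;
	/* Size this port's history list to 2 * CQ depth, matching the
	 * sizing rule documented above.
	 */
	struct rte_pmd_dlb2_port_params params = {
		.port_hl = (uint16_t)(2 * cq_depth),
	};
	int ret;

	/* Assumed setter from an earlier patch in this series; must be
	 * called before rte_event_port_setup(), since the PMD rejects
	 * RTE_PMD_DLB2_SET_PORT_HL once the port is set up.
	 */
	ret = rte_pmd_dlb2_set_port_param(dev_id, port_id,
					  RTE_PMD_DLB2_SET_PORT_HL, &params);
	if (ret)
		return ret;

	ret = rte_event_port_default_conf_get(dev_id, port_id, &conf);
	if (ret)
		return ret;

	conf.dequeue_depth = cq_depth;

	return rte_event_port_setup(dev_id, port_id, &conf);
}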