* [PATCH v5] event/dlb2: fix max enqueue and dequeue cli override
       [not found] <20220927150537.1464936-1-abdullah.sevincer@intel.com>
@ 2022-09-28 18:44 ` Abdullah Sevincer
  2022-09-30  9:02   ` Jerin Jacob
  0 siblings, 1 reply; 2+ messages in thread
From: Abdullah Sevincer @ 2022-09-28 18:44 UTC (permalink / raw)
  To: dev; +Cc: jerinj, Abdullah Sevincer, stable

This patch addresses an issue where more than max_enq_depth events
could be enqueued, and fewer than max_cq_depth events could be
dequeued, in a single call of rte_event_enqueue_burst() and
rte_event_dequeue_burst().

Apply a fix restricting enqueue to max_enq_depth, so that a single
rte_event_enqueue_burst() call enqueues at most max_enq_depth events.

Also set the per-port and per-domain history list sizes based on
cq_depth. This results in dequeuing the correct number of events, as
set by max_cq_depth.

Fixes: f3cad285bb88 ("event/dlb2: add infos get and configure")
Cc: stable@dpdk.org

Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>
---
 drivers/event/dlb2/dlb2.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 759578378f..dbb8284135 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -813,7 +813,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
 			cfg->num_ldb_queues;
 
 	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
-		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
+		evdev_dlb2_default_info.max_event_port_dequeue_depth;
 
 	if (device_version == DLB2_HW_V2_5) {
 		DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
@@ -1538,7 +1538,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 	cfg.cq_depth = rte_align32pow2(dequeue_depth);
 	cfg.cq_depth_threshold = 1;
 
-	cfg.cq_history_list_size = DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
+	cfg.cq_history_list_size = cfg.cq_depth;
 
 	cfg.cos_id = ev_port->cos_id;
 	cfg.cos_strict = 0;/* best effots */
@@ -2966,6 +2966,7 @@ __dlb2_event_enqueue_burst(void *event_port,
 	struct dlb2_port *qm_port = &ev_port->qm_port;
 	struct process_local_port_data *port_data;
 	int retries = ev_port->enq_retries;
+	int num_tx;
 	int i;
 
 	RTE_ASSERT(ev_port->enq_configured);
@@ -2974,8 +2975,8 @@ __dlb2_event_enqueue_burst(void *event_port,
 	i = 0;
 
 	port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)];
-
-	while (i < num) {
+	num_tx = RTE_MIN(num, ev_port->conf.enqueue_depth);
+	while (i < num_tx) {
 		uint8_t sched_types[DLB2_NUM_QES_PER_CACHE_LINE];
 		uint8_t queue_ids[DLB2_NUM_QES_PER_CACHE_LINE];
 		int pop_offs = 0;
-- 
2.25.1

^ permalink raw reply	[flat|nested] 2+ messages in thread
* Re: [PATCH v5] event/dlb2: fix max enqueue and dequeue cli override
  2022-09-28 18:44 ` [PATCH v5] event/dlb2: fix max enqueue and dequeue cli override Abdullah Sevincer
@ 2022-09-30  9:02   ` Jerin Jacob
  0 siblings, 0 replies; 2+ messages in thread
From: Jerin Jacob @ 2022-09-30 9:02 UTC (permalink / raw)
  To: Abdullah Sevincer; +Cc: dev, jerinj, stable

On Thu, Sep 29, 2022 at 12:14 AM Abdullah Sevincer
<abdullah.sevincer@intel.com> wrote:
>
> This patch addresses an issue where more than max_enq_depth events
> could be enqueued, and fewer than max_cq_depth events could be
> dequeued, in a single call of rte_event_enqueue_burst() and
> rte_event_dequeue_burst().
>
> Apply a fix restricting enqueue to max_enq_depth, so that a single
> rte_event_enqueue_burst() call enqueues at most max_enq_depth events.
>
> Also set the per-port and per-domain history list sizes based on
> cq_depth. This results in dequeuing the correct number of events, as
> set by max_cq_depth.
>
> Fixes: f3cad285bb88 ("event/dlb2: add infos get and configure")
> Cc: stable@dpdk.org
>
> Signed-off-by: Abdullah Sevincer <abdullah.sevincer@intel.com>

Not sure what "cli" means in the subject.
Changed the subject to "event/dlb2: handle enqueuing more than max enq depth".

Applied to dpdk-next-net-eventdev/for-main. Thanks

> ---
> drivers/event/dlb2/dlb2.c | 9 +++++----
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
> index 759578378f..dbb8284135 100644
> --- a/drivers/event/dlb2/dlb2.c
> +++ b/drivers/event/dlb2/dlb2.c
> @@ -813,7 +813,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
>                         cfg->num_ldb_queues;
>
>         cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
> -               DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
> +               evdev_dlb2_default_info.max_event_port_dequeue_depth;
>
>         if (device_version == DLB2_HW_V2_5) {
>                 DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, credits=%d\n",
> @@ -1538,7 +1538,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
>         cfg.cq_depth = rte_align32pow2(dequeue_depth);
>         cfg.cq_depth_threshold = 1;
>
> -       cfg.cq_history_list_size = DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;
> +       cfg.cq_history_list_size = cfg.cq_depth;
>
>         cfg.cos_id = ev_port->cos_id;
>         cfg.cos_strict = 0;/* best effots */
> @@ -2966,6 +2966,7 @@ __dlb2_event_enqueue_burst(void *event_port,
>         struct dlb2_port *qm_port = &ev_port->qm_port;
>         struct process_local_port_data *port_data;
>         int retries = ev_port->enq_retries;
> +       int num_tx;
>         int i;
>
>         RTE_ASSERT(ev_port->enq_configured);
> @@ -2974,8 +2975,8 @@ __dlb2_event_enqueue_burst(void *event_port,
>         i = 0;
>
>         port_data = &dlb2_port[qm_port->id][PORT_TYPE(qm_port)];
> -
> -       while (i < num) {
> +       num_tx = RTE_MIN(num, ev_port->conf.enqueue_depth);
> +       while (i < num_tx) {
>                 uint8_t sched_types[DLB2_NUM_QES_PER_CACHE_LINE];
>                 uint8_t queue_ids[DLB2_NUM_QES_PER_CACHE_LINE];
>                 int pop_offs = 0;
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 2+ messages in thread
end of thread, other threads:[~2022-09-30  9:03 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20220927150537.1464936-1-abdullah.sevincer@intel.com>
2022-09-28 18:44 ` [PATCH v5] event/dlb2: fix max enqueue and dequeue cli override Abdullah Sevincer
2022-09-30  9:02   ` Jerin Jacob