From: Tim McDaniel <timothy.mcdaniel@intel.com>
To: jerinj@marvell.com
Cc: mattias.ronnblom@ericsson.com, dev@dpdk.org, gage.eads@intel.com, harry.van.haaren@intel.com, "McDaniel, Timothy" <timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH 18/27] event/dlb: add queue setup
Date: Fri, 26 Jun 2020 23:37:42 -0500
Message-ID: <1593232671-5690-19-git-send-email-timothy.mcdaniel@intel.com> (raw)
In-Reply-To: <1593232671-5690-1-git-send-email-timothy.mcdaniel@intel.com>

From: "McDaniel, Timothy" <timothy.mcdaniel@intel.com>

Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb/dlb.c | 295 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 295 insertions(+)

diff --git a/drivers/event/dlb/dlb.c b/drivers/event/dlb/dlb.c
index b527f2c..ded3b18 100644
--- a/drivers/event/dlb/dlb.c
+++ b/drivers/event/dlb/dlb.c
@@ -221,6 +221,65 @@ int dlb_string_to_int(int *result, const char *str)
 	return 0;
 }
 
+static int32_t
+dlb_hw_create_ldb_queue(struct dlb_eventdev *dlb,
+			struct dlb_queue *queue,
+			const struct rte_event_queue_conf *evq_conf)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_create_ldb_queue_args cfg;
+	struct dlb_cmd_response response;
+	int32_t ret;
+	uint32_t qm_qid;
+	int sched_type = -1;
+
+	if (evq_conf == NULL)
+		return -EINVAL;
+
+	if (evq_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_ALL_TYPES) {
+		if (evq_conf->nb_atomic_order_sequences != 0)
+			sched_type = RTE_SCHED_TYPE_ORDERED;
+		else
+			sched_type = RTE_SCHED_TYPE_PARALLEL;
+	} else {
+		sched_type = evq_conf->schedule_type;
+	}
+
+	cfg.response = (uintptr_t)&response;
+	cfg.num_atomic_inflights = dlb->num_atm_inflights_per_queue;
+	cfg.num_sequence_numbers = evq_conf->nb_atomic_order_sequences;
+	cfg.num_qid_inflights = evq_conf->nb_atomic_order_sequences;
+
+	if (sched_type != RTE_SCHED_TYPE_ORDERED) {
+		cfg.num_sequence_numbers = 0;
+		cfg.num_qid_inflights = DLB_DEF_UNORDERED_QID_INFLIGHTS;
+	}
+
+	ret = dlb_iface_ldb_queue_create(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: create LB event queue error, ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return -EINVAL;
+	}
+
+	qm_qid = response.id;
+
+	/* Save off queue config for debug, resource lookups, and reconfig */
+	queue->num_qid_inflights = cfg.num_qid_inflights;
+	queue->num_atm_inflights = cfg.num_atomic_inflights;
+
+	queue->sched_type = sched_type;
+	queue->config_state = DLB_CONFIGURED;
+
+	DLB_LOG_DBG("Created LB event queue %d, nb_inflights=%d, nb_seq=%d, qid inflights=%d\n",
+		    qm_qid,
+		    cfg.num_atomic_inflights,
+		    cfg.num_sequence_numbers,
+		    cfg.num_qid_inflights);
+
+	return qm_qid;
+}
+
 /* VDEV-only notes:
  * This function first unmaps all memory mappings and closes the
  * domain's file descriptor, which causes the driver to reset the
@@ -442,6 +501,7 @@ int dlb_string_to_int(int *result, const char *str)
 }
 
 /* End HW specific */
+
 static void
 dlb_eventdev_info_get(struct rte_eventdev *dev,
 		      struct rte_event_dev_info *dev_info)
@@ -640,6 +700,240 @@ int dlb_string_to_int(int *result, const char *str)
 	queue_conf->priority = 0;
 }
 
+static int32_t
+dlb_get_sn_allocation(struct dlb_eventdev *dlb, int group)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_get_sn_allocation_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_get_sn_allocation(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: get_sn_allocation ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return response.id;
+}
+
+static int
+dlb_set_sn_allocation(struct dlb_eventdev *dlb, int group, int num)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_set_sn_allocation_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.num = num;
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_set_sn_allocation(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: set_sn_allocation ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int32_t
+dlb_get_sn_occupancy(struct dlb_eventdev *dlb, int group)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_get_sn_occupancy_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_get_sn_occupancy(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: get_sn_occupancy ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return response.id;
+}
+
+/* Query the current sequence number allocations and, if they conflict with the
+ * requested LDB queue configuration, attempt to re-allocate sequence numbers.
+ * This is best-effort; if it fails, the PMD will attempt to configure the
+ * load-balanced queue and return an error.
+ */
+static void
+dlb_program_sn_allocation(struct dlb_eventdev *dlb,
+			  const struct rte_event_queue_conf *queue_conf)
+{
+	int grp_occupancy[DLB_NUM_SN_GROUPS];
+	int grp_alloc[DLB_NUM_SN_GROUPS];
+	int i, sequence_numbers;
+
+	sequence_numbers = (int)queue_conf->nb_atomic_order_sequences;
+
+	for (i = 0; i < DLB_NUM_SN_GROUPS; i++) {
+		int total_slots;
+
+		grp_alloc[i] = dlb_get_sn_allocation(dlb, i);
+		if (grp_alloc[i] < 0)
+			return;
+
+		total_slots = DLB_MAX_LDB_SN_ALLOC / grp_alloc[i];
+
+		grp_occupancy[i] = dlb_get_sn_occupancy(dlb, i);
+		if (grp_occupancy[i] < 0)
+			return;
+
+		/* DLB has at least one available slot for the requested
+		 * sequence numbers, so no further configuration required.
+		 */
+		if (grp_alloc[i] == sequence_numbers &&
+		    grp_occupancy[i] < total_slots)
+			return;
+	}
+
+	/* None of the sequence number groups are configured for the requested
+	 * sequence numbers, so we have to reconfigure one of them. This is
+	 * only possible if a group is not in use.
+	 */
+	for (i = 0; i < DLB_NUM_SN_GROUPS; i++) {
+		if (grp_occupancy[i] == 0)
+			break;
+	}
+
+	if (i == DLB_NUM_SN_GROUPS) {
+		printf("[%s()] No groups with %d sequence_numbers are available or have free slots\n",
+		       __func__, sequence_numbers);
+		return;
+	}
+
+	/* Attempt to configure slot i with the requested number of sequence
+	 * numbers. Ignore the return value -- if this fails, the error will be
+	 * caught during subsequent queue configuration.
+	 */
+	dlb_set_sn_allocation(dlb, i, sequence_numbers);
+}
+
+static int
+dlb_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
+			     struct dlb_eventdev_queue *ev_queue,
+			     const struct rte_event_queue_conf *queue_conf)
+{
+	struct dlb_eventdev *dlb = dlb_pmd_priv(dev);
+	int32_t qm_qid;
+
+	if (queue_conf->nb_atomic_order_sequences)
+		dlb_program_sn_allocation(dlb, queue_conf);
+
+	qm_qid = dlb_hw_create_ldb_queue(dlb,
+					 &ev_queue->qm_queue,
+					 queue_conf);
+	if (qm_qid < 0) {
+		DLB_LOG_ERR("Failed to create the load-balanced queue\n");
+
+		return qm_qid;
+	}
+
+	dlb->qm_ldb_to_ev_queue_id[qm_qid] = ev_queue->id;
+
+	ev_queue->qm_queue.id = qm_qid;
+
+	return 0;
+}
+
+static int dlb_num_dir_queues_setup(struct dlb_eventdev *dlb)
+{
+	int i, num = 0;
+
+	for (i = 0; i < dlb->num_queues; i++) {
+		if (dlb->ev_queues[i].setup_done &&
+		    dlb->ev_queues[i].qm_queue.is_directed)
+			num++;
+	}
+
+	return num;
+}
+
+static void
+dlb_queue_link_teardown(struct dlb_eventdev *dlb,
+			struct dlb_eventdev_queue *ev_queue)
+{
+	struct dlb_eventdev_port *ev_port;
+	int i, j;
+
+	for (i = 0; i < dlb->num_ports; i++) {
+		ev_port = &dlb->ev_ports[i];
+
+		for (j = 0; j < DLB_MAX_NUM_QIDS_PER_LDB_CQ; j++) {
+			if (!ev_port->link[j].valid ||
+			    ev_port->link[j].queue_id != ev_queue->id)
+				continue;
+
+			ev_port->link[j].valid = false;
+			ev_port->num_links--;
+		}
+	}
+
+	ev_queue->num_links = 0;
+}
+
+static int
+dlb_eventdev_queue_setup(struct rte_eventdev *dev,
+			 uint8_t ev_qid,
+			 const struct rte_event_queue_conf *queue_conf)
+{
+	struct dlb_eventdev *dlb = dlb_pmd_priv(dev);
+	struct dlb_eventdev_queue *ev_queue;
+	int ret;
+
+	if (!queue_conf)
+		return -EINVAL;
+
+	if (ev_qid >= dlb->num_queues)
+		return -EINVAL;
+
+	ev_queue = &dlb->ev_queues[ev_qid];
+
+	ev_queue->qm_queue.is_directed = queue_conf->event_queue_cfg &
+		RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	ev_queue->id = ev_qid;
+	ev_queue->conf = *queue_conf;
+
+	if (!ev_queue->qm_queue.is_directed) {
+		ret = dlb_eventdev_ldb_queue_setup(dev, ev_queue, queue_conf);
+	} else {
+		/* The directed queue isn't setup until link time, at which
+		 * point we know its directed port ID. Directed queue setup
+		 * will only fail if this queue is already setup or there are
+		 * no directed queues left to configure.
+		 */
+		ret = 0;
+
+		ev_queue->qm_queue.config_state = DLB_NOT_CONFIGURED;
+
+		if (ev_queue->setup_done ||
+		    dlb_num_dir_queues_setup(dlb) == dlb->num_dir_queues)
+			ret = -EINVAL;
+	}
+
+	/* Tear down pre-existing port->queue links */
+	if (!ret && dlb->run_state == DLB_RUN_STATE_STOPPED)
+		dlb_queue_link_teardown(dlb, ev_queue);
+
+	if (!ret)
+		ev_queue->setup_done = true;
+
+	return ret;
+}
+
 static int
 set_dev_id(const char *key __rte_unused,
 	   const char *value,
@@ -717,6 +1011,7 @@ int dlb_string_to_int(int *result, const char *str)
 	.dev_infos_get = dlb_eventdev_info_get,
 	.dev_configure = dlb_eventdev_configure,
 	.queue_def_conf = dlb_eventdev_queue_default_conf_get,
+	.queue_setup = dlb_eventdev_queue_setup,
 	.port_def_conf = dlb_eventdev_port_default_conf_get,
 };

-- 
1.7.10
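For reference, a minimal application-side sketch (not taken from this patch) of how the new queue_setup callback is reached through the public eventdev API. The device ID, the queue IDs, and the 64-entry ordered-sequence count are illustrative assumptions; only rte_event_queue_default_conf_get(), rte_event_queue_setup(), and the standard rte_event_queue_conf fields are used.

#include <rte_eventdev.h>

/* Configure one ordered load-balanced queue and one single-link queue on an
 * already-configured event device.
 */
static int
setup_example_queues(uint8_t dev_id)
{
	struct rte_event_queue_conf conf;
	int ret;

	/* Start from the PMD defaults (dlb_eventdev_queue_default_conf_get). */
	ret = rte_event_queue_default_conf_get(dev_id, 0, &conf);
	if (ret < 0)
		return ret;

	/* Ordered queue: a non-zero nb_atomic_order_sequences is what triggers
	 * the sequence-number group programming in dlb_program_sn_allocation().
	 */
	conf.schedule_type = RTE_SCHED_TYPE_ORDERED;
	conf.nb_atomic_order_sequences = 64;
	ret = rte_event_queue_setup(dev_id, 0, &conf);
	if (ret < 0)
		return ret;

	/* Single-link queue: maps to a directed DLB queue, whose hardware setup
	 * is deferred until the port is linked.
	 */
	ret = rte_event_queue_default_conf_get(dev_id, 1, &conf);
	if (ret < 0)
		return ret;

	conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
	return rte_event_queue_setup(dev_id, 1, &conf);
}

Per the patch, the single-link case returns success here but is not created in hardware until rte_event_port_link() supplies the directed port ID.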