From: Timothy McDaniel
To:
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, gage.eads@intel.com,
 harry.van.haaren@intel.com, jerinj@marvell.com
Date: Fri, 11 Sep 2020 15:26:15 -0500
Message-Id: <1599855987-25976-11-git-send-email-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1599855987-25976-1-git-send-email-timothy.mcdaniel@intel.com>
References: <1599855987-25976-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH 10/22] event/dlb2: add queue setup

Load-balanced (LDB) queues are set up here. Directed queues are not set
up until link time, at which point we know the directed port ID. Directed
queue setup will only fail if the queue is already set up or there are no
directed queues left to configure.
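For context, a minimal application-side sketch (not part of this patch) of
how the two queue types are exercised through the standard eventdev API;
the device, queue, and port IDs below are illustrative:

	#include <rte_eventdev.h>

	/* Illustrative IDs; a real application derives these from its setup. */
	#define EVDEV_ID    0
	#define LDB_QID     0
	#define DIR_QID     1
	#define WORKER_PORT 0

	static int
	setup_example_queues(void)
	{
		struct rte_event_queue_conf conf = {
			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
			.nb_atomic_flows = 1024,
			.nb_atomic_order_sequences = 1024,
			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		};
		uint8_t qid = DIR_QID;
		int ret;

		/* Load-balanced queue: configured in hardware at setup time. */
		ret = rte_event_queue_setup(EVDEV_ID, LDB_QID, &conf);
		if (ret < 0)
			return ret;

		/* Single-link (directed) queue: hardware setup is deferred
		 * until the queue is linked and the directed port is known.
		 */
		conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
		ret = rte_event_queue_setup(EVDEV_ID, DIR_QID, &conf);
		if (ret < 0)
			return ret;

		ret = rte_event_port_link(EVDEV_ID, WORKER_PORT, &qid, NULL, 1);
		return (ret == 1) ? 0 : -1;
	}

The queue setup path added by this patch handles the first call immediately
and defers the second until rte_event_port_link() supplies the directed
port ID.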
Signed-off-by: Timothy McDaniel --- drivers/event/dlb2/dlb2.c | 316 ++++++++++++++++++++ drivers/event/dlb2/dlb2_iface.c | 12 + drivers/event/dlb2/dlb2_iface.h | 12 + drivers/event/dlb2/pf/base/dlb2_resource.c | 464 +++++++++++++++++++++++++++++ drivers/event/dlb2/pf/dlb2_main.c | 10 + drivers/event/dlb2/pf/dlb2_pf.c | 82 +++++ 6 files changed, 896 insertions(+) diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index 9ff371a..366e194 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -728,6 +728,321 @@ dlb2_eventdev_queue_default_conf_get(struct rte_eventdev *dev, queue_conf->priority = 0; } +static int32_t +dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group) +{ + struct dlb2_hw_dev *handle = &dlb2->qm_instance; + struct dlb2_get_sn_allocation_args cfg; + int ret; + + cfg.group = group; + + ret = dlb2_iface_get_sn_allocation(handle, &cfg); + if (ret < 0) { + DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)\n", + ret, dlb2_error_strings[cfg.response.status]); + return ret; + } + + return cfg.response.id; +} + +static int +dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num) +{ + struct dlb2_hw_dev *handle = &dlb2->qm_instance; + struct dlb2_set_sn_allocation_args cfg; + int ret; + + cfg.num = num; + cfg.group = group; + + ret = dlb2_iface_set_sn_allocation(handle, &cfg); + if (ret < 0) { + DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)\n", + ret, dlb2_error_strings[cfg.response.status]); + return ret; + } + + return ret; +} + +static int32_t +dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group) +{ + struct dlb2_hw_dev *handle = &dlb2->qm_instance; + struct dlb2_get_sn_occupancy_args cfg; + int ret; + + cfg.group = group; + + ret = dlb2_iface_get_sn_occupancy(handle, &cfg); + if (ret < 0) { + DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)\n", + ret, dlb2_error_strings[cfg.response.status]); + return ret; + } + + return cfg.response.id; +} + +/* Query the current sequence number allocations and, if they conflict with the + * requested LDB queue configuration, attempt to re-allocate sequence numbers. + * This is best-effort; if it fails, the PMD will attempt to configure the + * load-balanced queue and return an error. + */ +static void +dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2, + const struct rte_event_queue_conf *queue_conf) +{ + int grp_occupancy[DLB2_NUM_SN_GROUPS]; + int grp_alloc[DLB2_NUM_SN_GROUPS]; + int i, sequence_numbers; + + sequence_numbers = (int)queue_conf->nb_atomic_order_sequences; + + for (i = 0; i < DLB2_NUM_SN_GROUPS; i++) { + int total_slots; + + grp_alloc[i] = dlb2_get_sn_allocation(dlb2, i); + if (grp_alloc[i] < 0) + return; + + total_slots = DLB2_MAX_LDB_SN_ALLOC / grp_alloc[i]; + + grp_occupancy[i] = dlb2_get_sn_occupancy(dlb2, i); + if (grp_occupancy[i] < 0) + return; + + /* DLB has at least one available slot for the requested + * sequence numbers, so no further configuration required. + */ + if (grp_alloc[i] == sequence_numbers && + grp_occupancy[i] < total_slots) + return; + } + + /* None of the sequence number groups are configured for the requested + * sequence numbers, so we have to reconfigure one of them. This is + * only possible if a group is not in use. 
+ */ + for (i = 0; i < DLB2_NUM_SN_GROUPS; i++) { + if (grp_occupancy[i] == 0) + break; + } + + if (i == DLB2_NUM_SN_GROUPS) { + printf("[%s()] No groups with %d sequence_numbers are available or have free slots\n", + __func__, sequence_numbers); + return; + } + + /* Attempt to configure slot i with the requested number of sequence + * numbers. Ignore the return value -- if this fails, the error will be + * caught during subsequent queue configuration. + */ + dlb2_set_sn_allocation(dlb2, i, sequence_numbers); +} + +static int32_t +dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2, + struct dlb2_eventdev_queue *ev_queue, + const struct rte_event_queue_conf *evq_conf) +{ + struct dlb2_hw_dev *handle = &dlb2->qm_instance; + struct dlb2_queue *queue = &ev_queue->qm_queue; + struct dlb2_create_ldb_queue_args cfg; + int32_t ret; + uint32_t qm_qid; + int sched_type = -1; + + if (evq_conf == NULL) + return -EINVAL; + + if (evq_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_ALL_TYPES) { + if (evq_conf->nb_atomic_order_sequences != 0) + sched_type = RTE_SCHED_TYPE_ORDERED; + else + sched_type = RTE_SCHED_TYPE_PARALLEL; + } else { + sched_type = evq_conf->schedule_type; + } + + cfg.num_atomic_inflights = DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE; + cfg.num_sequence_numbers = evq_conf->nb_atomic_order_sequences; + cfg.num_qid_inflights = evq_conf->nb_atomic_order_sequences; + + if (sched_type != RTE_SCHED_TYPE_ORDERED) { + cfg.num_sequence_numbers = 0; + cfg.num_qid_inflights = 2048; + } + + /* App should set this to the number of hardware flows they want, not + * the overall number of flows they're going to use. E.g. if app is + * using 64 flows and sets compression to 64, best-case they'll get + * 64 unique hashed flows in hardware. + */ + switch (evq_conf->nb_atomic_flows) { + /* Valid DLB2 compression levels */ + case 64: + case 128: + case 256: + case 512: + case (1 * 1024): /* 1K */ + case (2 * 1024): /* 2K */ + case (4 * 1024): /* 4K */ + case (64 * 1024): /* 64K */ + cfg.lock_id_comp_level = evq_conf->nb_atomic_flows; + break; + default: + /* Invalid compression level */ + cfg.lock_id_comp_level = 0; /* no compression */ + } + + if (ev_queue->depth_threshold == 0) { + cfg.depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH; + ev_queue->depth_threshold = RTE_PMD_DLB2_DEFAULT_DEPTH_THRESH; + } else { + cfg.depth_threshold = ev_queue->depth_threshold; + } + + ret = dlb2_iface_ldb_queue_create(handle, &cfg); + if (ret < 0) { + DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)\n", + ret, dlb2_error_strings[cfg.response.status]); + return -EINVAL; + } + + qm_qid = cfg.response.id; + + /* Save off queue config for debug, resource lookups, and reconfig */ + queue->num_qid_inflights = cfg.num_qid_inflights; + queue->num_atm_inflights = cfg.num_atomic_inflights; + + queue->sched_type = sched_type; + queue->config_state = DLB2_CONFIGURED; + + DLB2_LOG_DBG("Created LB event queue %d, nb_inflights=%d, nb_seq=%d, qid inflights=%d\n", + qm_qid, + cfg.num_atomic_inflights, + cfg.num_sequence_numbers, + cfg.num_qid_inflights); + + return qm_qid; +} + +static int +dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev, + struct dlb2_eventdev_queue *ev_queue, + const struct rte_event_queue_conf *queue_conf) +{ + struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev); + int32_t qm_qid; + + if (queue_conf->nb_atomic_order_sequences) + dlb2_program_sn_allocation(dlb2, queue_conf); + + qm_qid = dlb2_hw_create_ldb_queue(dlb2, + ev_queue, + queue_conf); + if (qm_qid < 0) { + DLB2_LOG_ERR("Failed to create 
the load-balanced queue\n"); + + return qm_qid; + } + + dlb2->qm_ldb_to_ev_queue_id[qm_qid] = ev_queue->id; + + ev_queue->qm_queue.id = qm_qid; + + return 0; +} + +static int dlb2_num_dir_queues_setup(struct dlb2_eventdev *dlb2) +{ + int i, num = 0; + + for (i = 0; i < dlb2->num_queues; i++) { + if (dlb2->ev_queues[i].setup_done && + dlb2->ev_queues[i].qm_queue.is_directed) + num++; + } + + return num; +} + +static void +dlb2_queue_link_teardown(struct dlb2_eventdev *dlb2, + struct dlb2_eventdev_queue *ev_queue) +{ + struct dlb2_eventdev_port *ev_port; + int i, j; + + for (i = 0; i < dlb2->num_ports; i++) { + ev_port = &dlb2->ev_ports[i]; + + for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++) { + if (!ev_port->link[j].valid || + ev_port->link[j].queue_id != ev_queue->id) + continue; + + ev_port->link[j].valid = false; + ev_port->num_links--; + } + } + + ev_queue->num_links = 0; +} + +static int +dlb2_eventdev_queue_setup(struct rte_eventdev *dev, + uint8_t ev_qid, + const struct rte_event_queue_conf *queue_conf) +{ + struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev); + struct dlb2_eventdev_queue *ev_queue; + int ret; + + if (!queue_conf) + return -EINVAL; + + if (ev_qid >= dlb2->num_queues) + return -EINVAL; + + ev_queue = &dlb2->ev_queues[ev_qid]; + + ev_queue->qm_queue.is_directed = queue_conf->event_queue_cfg & + RTE_EVENT_QUEUE_CFG_SINGLE_LINK; + ev_queue->id = ev_qid; + ev_queue->conf = *queue_conf; + + if (!ev_queue->qm_queue.is_directed) { + ret = dlb2_eventdev_ldb_queue_setup(dev, ev_queue, queue_conf); + } else { + /* The directed queue isn't setup until link time, at which + * point we know its directed port ID. Directed queue setup + * will only fail if this queue is already setup or there are + * no directed queues left to configure. + */ + ret = 0; + + ev_queue->qm_queue.config_state = DLB2_NOT_CONFIGURED; + + if (ev_queue->setup_done || + dlb2_num_dir_queues_setup(dlb2) == dlb2->num_dir_queues) + ret = -EINVAL; + } + + /* Tear down pre-existing port->queue links */ + if (!ret && dlb2->run_state == DLB2_RUN_STATE_STOPPED) + dlb2_queue_link_teardown(dlb2, ev_queue); + + if (!ret) + ev_queue->setup_done = true; + + return ret; +} + static void dlb2_entry_points_init(struct rte_eventdev *dev) { @@ -736,6 +1051,7 @@ dlb2_entry_points_init(struct rte_eventdev *dev) .dev_infos_get = dlb2_eventdev_info_get, .dev_configure = dlb2_eventdev_configure, .queue_def_conf = dlb2_eventdev_queue_default_conf_get, + .queue_setup = dlb2_eventdev_queue_setup, .port_def_conf = dlb2_eventdev_port_default_conf_get, .dump = dlb2_eventdev_dump, .xstats_get = dlb2_eventdev_xstats_get, diff --git a/drivers/event/dlb2/dlb2_iface.c b/drivers/event/dlb2/dlb2_iface.c index 5c11736..f50a918 100644 --- a/drivers/event/dlb2/dlb2_iface.c +++ b/drivers/event/dlb2/dlb2_iface.c @@ -45,3 +45,15 @@ int (*dlb2_iface_sched_domain_create)(struct dlb2_hw_dev *handle, struct dlb2_create_sched_domain_args *args); void (*dlb2_iface_domain_reset)(struct dlb2_eventdev *dlb2); + +int (*dlb2_iface_ldb_queue_create)(struct dlb2_hw_dev *handle, + struct dlb2_create_ldb_queue_args *cfg); + +int (*dlb2_iface_get_sn_allocation)(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_allocation_args *args); + +int (*dlb2_iface_set_sn_allocation)(struct dlb2_hw_dev *handle, + struct dlb2_set_sn_allocation_args *args); + +int (*dlb2_iface_get_sn_occupancy)(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_occupancy_args *args); diff --git a/drivers/event/dlb2/dlb2_iface.h b/drivers/event/dlb2/dlb2_iface.h index 576c1c3..c1ef7c2 100644 --- 
a/drivers/event/dlb2/dlb2_iface.h +++ b/drivers/event/dlb2/dlb2_iface.h @@ -31,4 +31,16 @@ extern int (*dlb2_iface_sched_domain_create)(struct dlb2_hw_dev *handle, extern void (*dlb2_iface_domain_reset)(struct dlb2_eventdev *dlb2); +extern int (*dlb2_iface_ldb_queue_create)(struct dlb2_hw_dev *handle, + struct dlb2_create_ldb_queue_args *cfg); + +extern int (*dlb2_iface_get_sn_allocation)(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_allocation_args *args); + +extern int (*dlb2_iface_set_sn_allocation)(struct dlb2_hw_dev *handle, + struct dlb2_set_sn_allocation_args *args); + +extern int (*dlb2_iface_get_sn_occupancy)(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_occupancy_args *args); + #endif /* _DLB2_IFACE_H_ */ diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c index f83f8a1..dc9d19a 100644 --- a/drivers/event/dlb2/pf/base/dlb2_resource.c +++ b/drivers/event/dlb2/pf/base/dlb2_resource.c @@ -3506,3 +3506,467 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw) return num; } + + +static void dlb2_configure_ldb_queue(struct dlb2_hw *hw, + struct dlb2_hw_domain *domain, + struct dlb2_ldb_queue *queue, + struct dlb2_create_ldb_queue_args *args, + bool vdev_req, + unsigned int vdev_id) +{ + union dlb2_sys_vf_ldb_vqid_v r0 = { {0} }; + union dlb2_sys_vf_ldb_vqid2qid r1 = { {0} }; + union dlb2_sys_ldb_qid2vqid r2 = { {0} }; + union dlb2_sys_ldb_vasqid_v r3 = { {0} }; + union dlb2_lsp_qid_ldb_infl_lim r4 = { {0} }; + union dlb2_lsp_qid_aqed_active_lim r5 = { {0} }; + union dlb2_aqed_pipe_qid_hid_width r6 = { {0} }; + union dlb2_sys_ldb_qid_its r7 = { {0} }; + union dlb2_lsp_qid_atm_depth_thrsh r8 = { {0} }; + union dlb2_lsp_qid_naldb_depth_thrsh r9 = { {0} }; + union dlb2_aqed_pipe_qid_fid_lim r10 = { {0} }; + union dlb2_chp_ord_qid_sn_map r11 = { {0} }; + union dlb2_sys_ldb_qid_cfg_v r12 = { {0} }; + union dlb2_sys_ldb_qid_v r13 = { {0} }; + + struct dlb2_sn_group *sn_group; + unsigned int offs; + + /* QID write permissions are turned on when the domain is started */ + r3.field.vasqid_v = 0; + + offs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + + queue->id.phys_id; + + DLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), r3.val); + + /* + * Unordered QIDs get 4K inflights, ordered get as many as the number + * of sequence numbers. 
+ */ + r4.field.limit = args->num_qid_inflights; + + DLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(queue->id.phys_id), r4.val); + + r5.field.limit = queue->aqed_limit; + + if (r5.field.limit > DLB2_MAX_NUM_AQED_ENTRIES) + r5.field.limit = DLB2_MAX_NUM_AQED_ENTRIES; + + DLB2_CSR_WR(hw, + DLB2_LSP_QID_AQED_ACTIVE_LIM(queue->id.phys_id), + r5.val); + + switch (args->lock_id_comp_level) { + case 64: + r6.field.compress_code = 1; + break; + case 128: + r6.field.compress_code = 2; + break; + case 256: + r6.field.compress_code = 3; + break; + case 512: + r6.field.compress_code = 4; + break; + case 1024: + r6.field.compress_code = 5; + break; + case 2048: + r6.field.compress_code = 6; + break; + case 4096: + r6.field.compress_code = 7; + break; + case 0: + case 65536: + r6.field.compress_code = 0; + } + + DLB2_CSR_WR(hw, + DLB2_AQED_PIPE_QID_HID_WIDTH(queue->id.phys_id), + r6.val); + + /* Don't timestamp QEs that pass through this queue */ + r7.field.qid_its = 0; + + DLB2_CSR_WR(hw, + DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), + r7.val); + + r8.field.thresh = args->depth_threshold; + + DLB2_CSR_WR(hw, + DLB2_LSP_QID_ATM_DEPTH_THRSH(queue->id.phys_id), + r8.val); + + r9.field.thresh = args->depth_threshold; + + DLB2_CSR_WR(hw, + DLB2_LSP_QID_NALDB_DEPTH_THRSH(queue->id.phys_id), + r9.val); + + /* + * This register limits the number of inflight flows a queue can have + * at one time. It has an upper bound of 2048, but can be + * over-subscribed. 512 is chosen so that a single queue doesn't use + * the entire atomic storage, but can use a substantial portion if + * needed. + */ + r10.field.qid_fid_limit = 512; + + DLB2_CSR_WR(hw, + DLB2_AQED_PIPE_QID_FID_LIM(queue->id.phys_id), + r10.val); + + /* Configure SNs */ + sn_group = &hw->rsrcs.sn_groups[queue->sn_group]; + r11.field.mode = sn_group->mode; + r11.field.slot = queue->sn_slot; + r11.field.grp = sn_group->id; + + DLB2_CSR_WR(hw, DLB2_CHP_ORD_QID_SN_MAP(queue->id.phys_id), r11.val); + + r12.field.sn_cfg_v = (args->num_sequence_numbers != 0); + r12.field.fid_cfg_v = (args->num_atomic_inflights != 0); + + DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), r12.val); + + if (vdev_req) { + offs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id; + + r0.field.vqid_v = 1; + + DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), r0.val); + + r1.field.qid = queue->id.phys_id; + + DLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), r1.val); + + r2.field.vqid = queue->id.virt_id; + + DLB2_CSR_WR(hw, + DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), + r2.val); + } + + r13.field.qid_v = 1; + + DLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), r13.val); +} + +static int +dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw, + struct dlb2_ldb_queue *queue, + struct dlb2_create_ldb_queue_args *args) +{ + int slot = -1; + int i; + + queue->sn_cfg_valid = false; + + if (args->num_sequence_numbers == 0) + return 0; + + for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) { + struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i]; + + if (group->sequence_numbers_per_queue == + args->num_sequence_numbers && + !dlb2_sn_group_full(group)) { + slot = dlb2_sn_group_alloc_slot(group); + if (slot >= 0) + break; + } + } + + if (slot == -1) { + DLB2_HW_ERR(hw, + "[%s():%d] Internal error: no sequence number slots available\n", + __func__, __LINE__); + return -EFAULT; + } + + queue->sn_cfg_valid = true; + queue->sn_group = i; + queue->sn_slot = slot; + return 0; +} + +static int +dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw, + struct dlb2_hw_domain *domain, + struct 
dlb2_ldb_queue *queue, + struct dlb2_create_ldb_queue_args *args) +{ + int ret; + + ret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args); + if (ret) + return ret; + + /* Attach QID inflights */ + queue->num_qid_inflights = args->num_qid_inflights; + + /* Attach atomic inflights */ + queue->aqed_limit = args->num_atomic_inflights; + + domain->num_avail_aqed_entries -= args->num_atomic_inflights; + domain->num_used_aqed_entries += args->num_atomic_inflights; + + return 0; +} + +static int +dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw, + u32 domain_id, + struct dlb2_create_ldb_queue_args *args, + struct dlb2_cmd_response *resp, + bool vdev_req, + unsigned int vdev_id) +{ + struct dlb2_hw_domain *domain; + int i; + + domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id); + + if (!domain) { + resp->status = DLB2_ST_INVALID_DOMAIN_ID; + return -EINVAL; + } + + if (!domain->configured) { + resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED; + return -EINVAL; + } + + if (domain->started) { + resp->status = DLB2_ST_DOMAIN_STARTED; + return -EINVAL; + } + + if (dlb2_list_empty(&domain->avail_ldb_queues)) { + resp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE; + return -EINVAL; + } + + if (args->num_sequence_numbers) { + for (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) { + struct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i]; + + if (group->sequence_numbers_per_queue == + args->num_sequence_numbers && + !dlb2_sn_group_full(group)) + break; + } + + if (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) { + resp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE; + return -EINVAL; + } + } + + if (args->num_qid_inflights > 4096) { + resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION; + return -EINVAL; + } + + /* Inflights must be <= number of sequence numbers if ordered */ + if (args->num_sequence_numbers != 0 && + args->num_qid_inflights > args->num_sequence_numbers) { + resp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION; + return -EINVAL; + } + + if (domain->num_avail_aqed_entries < args->num_atomic_inflights) { + resp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE; + return -EINVAL; + } + + if (args->num_atomic_inflights && + args->lock_id_comp_level != 0 && + args->lock_id_comp_level != 64 && + args->lock_id_comp_level != 128 && + args->lock_id_comp_level != 256 && + args->lock_id_comp_level != 512 && + args->lock_id_comp_level != 1024 && + args->lock_id_comp_level != 2048 && + args->lock_id_comp_level != 4096 && + args->lock_id_comp_level != 65536) { + resp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL; + return -EINVAL; + } + + return 0; +} + +static void +dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw, + u32 domain_id, + struct dlb2_create_ldb_queue_args *args, + bool vdev_req, + unsigned int vdev_id) +{ + DLB2_HW_DBG(hw, "DLB2 create load-balanced queue arguments:\n"); + if (vdev_req) + DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id); + DLB2_HW_DBG(hw, "\tDomain ID: %d\n", + domain_id); + DLB2_HW_DBG(hw, "\tNumber of sequence numbers: %d\n", + args->num_sequence_numbers); + DLB2_HW_DBG(hw, "\tNumber of QID inflights: %d\n", + args->num_qid_inflights); + DLB2_HW_DBG(hw, "\tNumber of ATM inflights: %d\n", + args->num_atomic_inflights); +} + +/** + * dlb2_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue. + * @hw: Contains the current state of the DLB2 hardware. + * @domain_id: Domain ID + * @args: User-provided arguments. + * @resp: Response to user. + * @vdev_req: Request came from a virtual device. 
+ * @vdev_id: If vdev_req is true, this contains the virtual device's ID. + * + * Return: returns < 0 on error, 0 otherwise. If the driver is unable to + * satisfy a request, resp->status will be set accordingly. + */ +int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw, + u32 domain_id, + struct dlb2_create_ldb_queue_args *args, + struct dlb2_cmd_response *resp, + bool vdev_req, + unsigned int vdev_id) +{ + struct dlb2_hw_domain *domain; + struct dlb2_ldb_queue *queue; + int ret; + + dlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id); + + /* + * Verify that hardware resources are available before attempting to + * satisfy the request. This simplifies the error unwinding code. + */ + ret = dlb2_verify_create_ldb_queue_args(hw, + domain_id, + args, + resp, + vdev_req, + vdev_id); + if (ret) + return ret; + + domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id); + if (!domain) { + DLB2_HW_ERR(hw, + "[%s():%d] Internal error: domain not found\n", + __func__, __LINE__); + return -EFAULT; + } + + queue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue)); + if (!queue) { + DLB2_HW_ERR(hw, + "[%s():%d] Internal error: no available ldb queues\n", + __func__, __LINE__); + return -EFAULT; + } + + ret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args); + if (ret < 0) { + DLB2_HW_ERR(hw, + "[%s():%d] Internal error: failed to attach the ldb queue resources\n", + __func__, __LINE__); + return ret; + } + + dlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id); + + queue->num_mappings = 0; + + queue->configured = true; + + /* + * Configuration succeeded, so move the resource from the 'avail' to + * the 'used' list. + */ + dlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list); + + dlb2_list_add(&domain->used_ldb_queues, &queue->domain_list); + + resp->status = 0; + resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id; + + return 0; +} + +int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id) +{ + if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) + return -EINVAL; + + return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue; +} + +int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, + unsigned int group_id) +{ + if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) + return -EINVAL; + + return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]); +} + +static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw, + unsigned int group_id, + unsigned long val) +{ + DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n"); + DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id); + DLB2_HW_DBG(hw, "\tValue: %lu\n", val); +} + +int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw, + unsigned int group_id, + unsigned long val) +{ + u32 valid_allocations[] = {64, 128, 256, 512, 1024}; + union dlb2_ro_pipe_grp_sn_mode r0 = { {0} }; + struct dlb2_sn_group *group; + int mode; + + if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) + return -EINVAL; + + group = &hw->rsrcs.sn_groups[group_id]; + + /* + * Once the first load-balanced queue using an SN group is configured, + * the group cannot be changed. 
+ */ + if (group->slot_use_bitmap != 0) + return -EPERM; + + for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++) + if (val == valid_allocations[mode]) + break; + + if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES) + return -EINVAL; + + group->mode = mode; + group->sequence_numbers_per_queue = val; + + r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode; + r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode; + + DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val); + + dlb2_log_set_group_sequence_numbers(hw, group_id, val); + + return 0; +} diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c index ca1ad69..b8e32ab 100644 --- a/drivers/event/dlb2/pf/dlb2_main.c +++ b/drivers/event/dlb2/pf/dlb2_main.c @@ -632,3 +632,13 @@ dlb2_pf_reset_domain(struct dlb2_hw *hw, u32 id) { return dlb2_reset_domain(hw, id, NOT_VF_REQ, PF_ID_ZERO); } + +int +dlb2_pf_create_ldb_queue(struct dlb2_hw *hw, + u32 id, + struct dlb2_create_ldb_queue_args *args, + struct dlb2_cmd_response *resp) +{ + return dlb2_hw_create_ldb_queue(hw, id, args, resp, NOT_VF_REQ, + PF_ID_ZERO); +} diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c index 21f28a4..dea70e6 100644 --- a/drivers/event/dlb2/pf/dlb2_pf.c +++ b/drivers/event/dlb2/pf/dlb2_pf.c @@ -156,6 +156,84 @@ dlb2_pf_domain_reset(struct dlb2_eventdev *dlb2) DLB2_LOG_ERR("dlb2_pf_reset_domain err %d", ret); } +static int +dlb2_pf_ldb_queue_create(struct dlb2_hw_dev *handle, + struct dlb2_create_ldb_queue_args *cfg) +{ + struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev; + struct dlb2_cmd_response response = {0}; + int ret; + + DLB2_INFO(dev->dlb2_device, "Entering %s()\n", __func__); + + ret = dlb2_pf_create_ldb_queue(&dlb2_dev->hw, + handle->domain_id, + cfg, + &response); + + cfg->response = response; + + DLB2_INFO(dev->dlb2_device, "Exiting %s() with ret=%d\n", + __func__, ret); + + return ret; +} + +static int +dlb2_pf_get_sn_occupancy(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_occupancy_args *args) +{ + struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev; + struct dlb2_cmd_response response = {0}; + int ret; + + ret = dlb2_get_group_sequence_number_occupancy(&dlb2_dev->hw, + args->group); + + response.id = ret; + response.status = 0; + + args->response = response; + + return ret; +} + +static int +dlb2_pf_get_sn_allocation(struct dlb2_hw_dev *handle, + struct dlb2_get_sn_allocation_args *args) +{ + struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev; + struct dlb2_cmd_response response = {0}; + int ret; + + ret = dlb2_get_group_sequence_numbers(&dlb2_dev->hw, args->group); + + response.id = ret; + response.status = 0; + + args->response = response; + + return ret; +} + +static int +dlb2_pf_set_sn_allocation(struct dlb2_hw_dev *handle, + struct dlb2_set_sn_allocation_args *args) +{ + struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev; + struct dlb2_cmd_response response = {0}; + int ret; + + ret = dlb2_set_group_sequence_numbers(&dlb2_dev->hw, args->group, + args->num); + + response.status = 0; + + args->response = response; + + return ret; +} + static void dlb2_pf_iface_fn_ptrs_init(void) { @@ -168,6 +246,10 @@ dlb2_pf_iface_fn_ptrs_init(void) dlb2_iface_get_num_resources = dlb2_pf_get_num_resources; dlb2_iface_get_cq_poll_mode = dlb2_pf_get_cq_poll_mode; dlb2_iface_sched_domain_create = dlb2_pf_sched_domain_create; + dlb2_iface_ldb_queue_create = dlb2_pf_ldb_queue_create; + dlb2_iface_get_sn_allocation = dlb2_pf_get_sn_allocation; + 
dlb2_iface_set_sn_allocation = dlb2_pf_set_sn_allocation; + dlb2_iface_get_sn_occupancy = dlb2_pf_get_sn_occupancy; } /* PCI DEV HOOKS */ -- 2.6.4
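The best-effort sequence-number (SN) handling in dlb2_program_sn_allocation()
can be summarized by the self-contained sketch below. The two-group count
mirrors the sn_mode_0/sn_mode_1 fields programmed in
dlb2_set_group_sequence_numbers(); MAX_SN_PER_GROUP and the sample group
state are assumptions chosen for illustration, not the driver's values.

	#include <stdio.h>

	#define NUM_SN_GROUPS    2
	#define MAX_SN_PER_GROUP 1024 /* assumed sequence numbers per group */

	struct sn_group {
		int sn_per_queue; /* current allocation: 64, 128, ..., 1024 */
		int used_slots;   /* ordered queues occupying a slot */
	};

	/* Return the group an ordered queue needing 'req' sequence numbers
	 * can use, reconfiguring an idle group if necessary; -1 if none.
	 */
	static int
	pick_sn_group(struct sn_group g[], int req)
	{
		int i;

		/* Prefer a group already set to 'req' with a free slot. */
		for (i = 0; i < NUM_SN_GROUPS; i++) {
			int slots = MAX_SN_PER_GROUP / g[i].sn_per_queue;

			if (g[i].sn_per_queue == req && g[i].used_slots < slots)
				return i;
		}

		/* Otherwise reconfigure a group with no queues attached. */
		for (i = 0; i < NUM_SN_GROUPS; i++) {
			if (g[i].used_slots == 0) {
				g[i].sn_per_queue = req;
				return i;
			}
		}

		return -1;
	}

	int main(void)
	{
		/* Group 0: 1024 SNs/queue, its single slot in use (full).
		 * Group 1: 64 SNs/queue, idle, so it can be reallocated.
		 */
		struct sn_group g[NUM_SN_GROUPS] = { {1024, 1}, {64, 0} };

		printf("256-SN queue -> group %d\n", pick_sn_group(g, 256));
		return 0;
	}

As in the driver, a request that matches an existing allocation simply
reuses that group, and reallocation is only attempted on a group with no
configured queues; if neither is possible, queue creation later reports the
failure.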