From mboxrd@z Thu Jan 1 00:00:00 1970
From: Timothy McDaniel
To: 
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, gage.eads@intel.com,
 harry.van.haaren@intel.com, jerinj@marvell.com, thomas@monjalon.net
Date: Fri, 30 Oct 2020 18:41:24 -0500
Message-Id: <1604101295-15970-13-git-send-email-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1604101295-15970-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20200612212434.6852-2-timothy.mcdaniel@intel.com>
 <1604101295-15970-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v11 12/23] event/dlb: add queue setup

Load-balanced (ldb) queues are set up here. Directed queues are not set
up until link time, at which point the directed port ID is known.
Directed queue setup will fail only if the queue is already set up or
there are no directed queues left to configure.

Signed-off-by: Timothy McDaniel
Reviewed-by: Gage Eads
---
 doc/guides/eventdevs/dlb.rst             |  35 +++
 drivers/event/dlb/dlb.c                  | 293 +++++++++++++++++++++++
 drivers/event/dlb/dlb_iface.c            |  12 +
 drivers/event/dlb/dlb_iface.h            |  12 +
 drivers/event/dlb/pf/base/dlb_resource.c | 386 +++++++++++++++++++++++++++++++
 drivers/event/dlb/pf/dlb_pf.c            |  81 +++++++
 6 files changed, 819 insertions(+)

diff --git a/doc/guides/eventdevs/dlb.rst b/doc/guides/eventdevs/dlb.rst
index 2d7999b..d8e936a 100644
--- a/doc/guides/eventdevs/dlb.rst
+++ b/doc/guides/eventdevs/dlb.rst
@@ -82,3 +82,38 @@ The PMD does not support the following configuration sequences:
 This sequence is not supported because the event device must be reconfigured
 before its ports or queues can be.
 
+Load-Balanced Queues
+~~~~~~~~~~~~~~~~~~~~
+
+A load-balanced queue can support atomic and ordered scheduling, or atomic and
+unordered scheduling, but not all three at once. A queue's supported
+scheduling types are controlled by the event queue configuration.
+
+If the user sets the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag, the
+``nb_atomic_order_sequences`` field determines the supported scheduling types.
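+
+For example, a queue intended to accept both atomic and ordered events could
+be configured as shown below. This is an illustrative sketch: ``dev_id`` and
+``queue_id`` are assumed to identify a configured DLB eventdev and one of its
+queues, and error handling is reduced to a panic.
+
+.. code-block:: c
+
+   #include <rte_eventdev.h>
+
+   struct rte_event_queue_conf conf = {
+           .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES,
+           /* Non-zero, so the queue supports atomic and ordered scheduling */
+           .nb_atomic_order_sequences = 1024,
+           /* Ignored by the DLB PMD, which does not limit flows per queue */
+           .nb_atomic_flows = 1024,
+           .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+   };
+
+   if (rte_event_queue_setup(dev_id, queue_id, &conf) < 0)
+           rte_panic("Failed to set up the event queue\n");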
+
+With non-zero ``nb_atomic_order_sequences``, the queue is configured for
+atomic and ordered scheduling. In this case, ``RTE_SCHED_TYPE_PARALLEL``
+scheduling is supported by scheduling those events as ordered events. Note
+that when such an event is dequeued, its sched_type will be
+``RTE_SCHED_TYPE_ORDERED``. If ``nb_atomic_order_sequences`` is zero, the
+queue is instead configured for atomic and unordered scheduling, and
+``RTE_SCHED_TYPE_ORDERED`` is unsupported.
+
+If the ``RTE_EVENT_QUEUE_CFG_ALL_TYPES`` flag is not set, the
+``schedule_type`` field dictates the queue's scheduling type.
+
+The ``nb_atomic_order_sequences`` queue configuration field sets the ordered
+queue's reorder buffer size. DLB has 4 groups of ordered queues; each group
+can be configured to contain 1 queue with 1024 reorder entries, 2 queues with
+512 reorder entries each, and so on, down to 32 queues with 32 entries each.
+
+When a load-balanced queue is created, the PMD will configure a new sequence
+number group on demand if ``nb_atomic_order_sequences`` does not match a
+pre-existing group with available reorder buffer entries. If all sequence
+number groups are in use, no new group will be created and queue configuration
+will fail. (Note that when the PMD is used with a virtual DLB device, it
+cannot change the sequence number configuration.)
+
+The queue's ``nb_atomic_flows`` parameter is ignored by the DLB PMD, because
+the DLB does not limit the number of flows a queue can track. In the DLB, all
+load-balanced queues can use the full 16-bit flow ID range.
+
diff --git a/drivers/event/dlb/dlb.c b/drivers/event/dlb/dlb.c
index e98a438..edcc6d1 100644
--- a/drivers/event/dlb/dlb.c
+++ b/drivers/event/dlb/dlb.c
@@ -657,6 +657,298 @@ dlb_eventdev_queue_default_conf_get(struct rte_eventdev *dev,
 	queue_conf->priority = 0;
 }
 
+static int32_t
+dlb_hw_create_ldb_queue(struct dlb_eventdev *dlb,
+			struct dlb_queue *queue,
+			const struct rte_event_queue_conf *evq_conf)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_create_ldb_queue_args cfg;
+	struct dlb_cmd_response response;
+	int32_t ret;
+	uint32_t qm_qid;
+	int sched_type = -1;
+
+	if (evq_conf == NULL)
+		return -EINVAL;
+
+	if (evq_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_ALL_TYPES) {
+		if (evq_conf->nb_atomic_order_sequences != 0)
+			sched_type = RTE_SCHED_TYPE_ORDERED;
+		else
+			sched_type = RTE_SCHED_TYPE_PARALLEL;
+	} else
+		sched_type = evq_conf->schedule_type;
+
+	cfg.response = (uintptr_t)&response;
+	cfg.num_atomic_inflights = dlb->num_atm_inflights_per_queue;
+	cfg.num_sequence_numbers = evq_conf->nb_atomic_order_sequences;
+	cfg.num_qid_inflights = evq_conf->nb_atomic_order_sequences;
+
+	if (sched_type != RTE_SCHED_TYPE_ORDERED) {
+		cfg.num_sequence_numbers = 0;
+		cfg.num_qid_inflights = DLB_DEF_UNORDERED_QID_INFLIGHTS;
+	}
+
+	ret = dlb_iface_ldb_queue_create(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: create LB event queue error, ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return -EINVAL;
+	}
+
+	qm_qid = response.id;
+
+	/* Save off queue config for debug, resource lookups, and reconfig */
+	queue->num_qid_inflights = cfg.num_qid_inflights;
+	queue->num_atm_inflights = cfg.num_atomic_inflights;
+
+	queue->sched_type = sched_type;
+	queue->config_state = DLB_CONFIGURED;
+
+	DLB_LOG_DBG("Created LB event queue %d, nb_inflights=%d, nb_seq=%d, qid inflights=%d\n",
+		    qm_qid,
+		    cfg.num_atomic_inflights,
+		    cfg.num_sequence_numbers,
+		    cfg.num_qid_inflights);
+
+	return qm_qid;
+}
+
+static int32_t
+dlb_get_sn_allocation(struct dlb_eventdev *dlb, int group)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_get_sn_allocation_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_get_sn_allocation(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: get_sn_allocation ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return response.id;
+}
+
+static int
+dlb_set_sn_allocation(struct dlb_eventdev *dlb, int group, int num)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_set_sn_allocation_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.num = num;
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_set_sn_allocation(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: set_sn_allocation ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int32_t
+dlb_get_sn_occupancy(struct dlb_eventdev *dlb, int group)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_get_sn_occupancy_args cfg;
+	struct dlb_cmd_response response;
+	int ret;
+
+	cfg.group = group;
+	cfg.response = (uintptr_t)&response;
+
+	ret = dlb_iface_get_sn_occupancy(handle, &cfg);
+	if (ret < 0) {
+		DLB_LOG_ERR("dlb: get_sn_occupancy ret=%d (driver status: %s)\n",
+			    ret, dlb_error_strings[response.status]);
+		return ret;
+	}
+
+	return response.id;
+}
+
+/* Query the current sequence number allocations and, if they conflict with
+ * the requested LDB queue configuration, attempt to re-allocate sequence
+ * numbers. This is best-effort; if it fails, the PMD will still attempt to
+ * configure the load-balanced queue, and that configuration will return an
+ * error.
+ */
+static void
+dlb_program_sn_allocation(struct dlb_eventdev *dlb,
+			  const struct rte_event_queue_conf *queue_conf)
+{
+	int grp_occupancy[DLB_NUM_SN_GROUPS];
+	int grp_alloc[DLB_NUM_SN_GROUPS];
+	int i, sequence_numbers;
+
+	sequence_numbers = (int)queue_conf->nb_atomic_order_sequences;
+
+	for (i = 0; i < DLB_NUM_SN_GROUPS; i++) {
+		int total_slots;
+
+		grp_alloc[i] = dlb_get_sn_allocation(dlb, i);
+		if (grp_alloc[i] < 0)
+			return;
+
+		total_slots = DLB_MAX_LDB_SN_ALLOC / grp_alloc[i];
+
+		grp_occupancy[i] = dlb_get_sn_occupancy(dlb, i);
+		if (grp_occupancy[i] < 0)
+			return;
+
+		/* DLB has at least one available slot for the requested
+		 * sequence numbers, so no further configuration is required.
+		 */
+		if (grp_alloc[i] == sequence_numbers &&
+		    grp_occupancy[i] < total_slots)
+			return;
+	}
+
+	/* None of the sequence number groups are configured for the requested
+	 * sequence numbers, so we have to reconfigure one of them. This is
+	 * only possible if a group is not in use.
+	 */
+	for (i = 0; i < DLB_NUM_SN_GROUPS; i++) {
+		if (grp_occupancy[i] == 0)
+			break;
+	}
+
+	if (i == DLB_NUM_SN_GROUPS) {
+		DLB_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots\n",
+			    __func__, sequence_numbers);
+		return;
+	}
+
+	/* Attempt to configure slot i with the requested number of sequence
+	 * numbers. Ignore the return value -- if this fails, the error will
+	 * be caught during subsequent queue configuration.
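+	 * (One expected failure mode is a virtual DLB device, which cannot
+	 * change the sequence number configuration.)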
+	 */
+	dlb_set_sn_allocation(dlb, i, sequence_numbers);
+}
+
+static int
+dlb_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
+			     struct dlb_eventdev_queue *ev_queue,
+			     const struct rte_event_queue_conf *queue_conf)
+{
+	struct dlb_eventdev *dlb = dlb_pmd_priv(dev);
+	int32_t qm_qid;
+
+	if (queue_conf->nb_atomic_order_sequences)
+		dlb_program_sn_allocation(dlb, queue_conf);
+
+	qm_qid = dlb_hw_create_ldb_queue(dlb,
+					 &ev_queue->qm_queue,
+					 queue_conf);
+	if (qm_qid < 0) {
+		DLB_LOG_ERR("Failed to create the load-balanced queue\n");
+
+		return qm_qid;
+	}
+
+	dlb->qm_ldb_to_ev_queue_id[qm_qid] = ev_queue->id;
+
+	ev_queue->qm_queue.id = qm_qid;
+
+	return 0;
+}
+
+static int dlb_num_dir_queues_setup(struct dlb_eventdev *dlb)
+{
+	int i, num = 0;
+
+	for (i = 0; i < dlb->num_queues; i++) {
+		if (dlb->ev_queues[i].setup_done &&
+		    dlb->ev_queues[i].qm_queue.is_directed)
+			num++;
+	}
+
+	return num;
+}
+
+static void
+dlb_queue_link_teardown(struct dlb_eventdev *dlb,
+			struct dlb_eventdev_queue *ev_queue)
+{
+	struct dlb_eventdev_port *ev_port;
+	int i, j;
+
+	for (i = 0; i < dlb->num_ports; i++) {
+		ev_port = &dlb->ev_ports[i];
+
+		for (j = 0; j < DLB_MAX_NUM_QIDS_PER_LDB_CQ; j++) {
+			if (!ev_port->link[j].valid ||
+			    ev_port->link[j].queue_id != ev_queue->id)
+				continue;
+
+			ev_port->link[j].valid = false;
+			ev_port->num_links--;
+		}
+	}
+
+	ev_queue->num_links = 0;
+}
+
+static int
+dlb_eventdev_queue_setup(struct rte_eventdev *dev,
+			 uint8_t ev_qid,
+			 const struct rte_event_queue_conf *queue_conf)
+{
+	struct dlb_eventdev *dlb = dlb_pmd_priv(dev);
+	struct dlb_eventdev_queue *ev_queue;
+	int ret;
+
+	if (!queue_conf)
+		return -EINVAL;
+
+	if (ev_qid >= dlb->num_queues)
+		return -EINVAL;
+
+	ev_queue = &dlb->ev_queues[ev_qid];
+
+	ev_queue->qm_queue.is_directed = queue_conf->event_queue_cfg &
+		RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	ev_queue->id = ev_qid;
+	ev_queue->conf = *queue_conf;
+
+	if (!ev_queue->qm_queue.is_directed) {
+		ret = dlb_eventdev_ldb_queue_setup(dev, ev_queue, queue_conf);
+	} else {
+		/* The directed queue isn't set up until link time, at which
+		 * point we know its directed port ID. Directed queue setup
+		 * will only fail if this queue is already set up or there are
+		 * no directed queues left to configure.
+		 */
+		ret = 0;
+
+		ev_queue->qm_queue.config_state = DLB_NOT_CONFIGURED;
+
+		if (ev_queue->setup_done ||
+		    dlb_num_dir_queues_setup(dlb) == dlb->num_dir_queues)
+			ret = -EINVAL;
+	}
+
+	/* Tear down pre-existing port->queue links */
+	if (!ret && dlb->run_state == DLB_RUN_STATE_STOPPED)
+		dlb_queue_link_teardown(dlb, ev_queue);
+
+	if (!ret)
+		ev_queue->setup_done = true;
+
+	return ret;
+}
+
 static int
 set_dev_id(const char *key __rte_unused,
 	   const char *value,
@@ -735,6 +1027,7 @@ dlb_entry_points_init(struct rte_eventdev *dev)
 	.dev_configure = dlb_eventdev_configure,
 	.queue_def_conf = dlb_eventdev_queue_default_conf_get,
 	.port_def_conf = dlb_eventdev_port_default_conf_get,
+	.queue_setup = dlb_eventdev_queue_setup,
 	.dump = dlb_eventdev_dump,
 	.xstats_get = dlb_eventdev_xstats_get,
 	.xstats_get_names = dlb_eventdev_xstats_get_names,
diff --git a/drivers/event/dlb/dlb_iface.c b/drivers/event/dlb/dlb_iface.c
index f3e82f2..219f79e 100644
--- a/drivers/event/dlb/dlb_iface.c
+++ b/drivers/event/dlb/dlb_iface.c
@@ -33,6 +33,18 @@ int (*dlb_iface_ldb_credit_pool_create)(struct dlb_hw_dev *handle,
 int (*dlb_iface_dir_credit_pool_create)(struct dlb_hw_dev *handle,
 					struct dlb_create_dir_pool_args *cfg);
 
+int (*dlb_iface_ldb_queue_create)(struct dlb_hw_dev *handle,
+				  struct dlb_create_ldb_queue_args *cfg);
+
 int (*dlb_iface_get_cq_poll_mode)(struct dlb_hw_dev *handle,
 				  enum dlb_cq_poll_modes *mode);
 
+int (*dlb_iface_get_sn_allocation)(struct dlb_hw_dev *handle,
+				   struct dlb_get_sn_allocation_args *args);
+
+int (*dlb_iface_set_sn_allocation)(struct dlb_hw_dev *handle,
+				   struct dlb_set_sn_allocation_args *args);
+
+int (*dlb_iface_get_sn_occupancy)(struct dlb_hw_dev *handle,
+				  struct dlb_get_sn_occupancy_args *args);
+
diff --git a/drivers/event/dlb/dlb_iface.h b/drivers/event/dlb/dlb_iface.h
index d576232..af1416d 100644
--- a/drivers/event/dlb/dlb_iface.h
+++ b/drivers/event/dlb/dlb_iface.h
@@ -32,7 +32,19 @@ extern int (*dlb_iface_ldb_credit_pool_create)(struct dlb_hw_dev *handle,
 extern int (*dlb_iface_dir_credit_pool_create)(struct dlb_hw_dev *handle,
 					struct dlb_create_dir_pool_args *cfg);
 
+extern int (*dlb_iface_ldb_queue_create)(struct dlb_hw_dev *handle,
+					 struct dlb_create_ldb_queue_args *cfg);
+
 extern int (*dlb_iface_get_cq_poll_mode)(struct dlb_hw_dev *handle,
 					 enum dlb_cq_poll_modes *mode);
 
+extern int (*dlb_iface_get_sn_allocation)(struct dlb_hw_dev *handle,
+					  struct dlb_get_sn_allocation_args *args);
+
+extern int (*dlb_iface_set_sn_allocation)(struct dlb_hw_dev *handle,
+					  struct dlb_set_sn_allocation_args *args);
+
+extern int (*dlb_iface_get_sn_occupancy)(struct dlb_hw_dev *handle,
+					 struct dlb_get_sn_occupancy_args *args);
+
 #endif /* _DLB_IFACE_H */
diff --git a/drivers/event/dlb/pf/base/dlb_resource.c b/drivers/event/dlb/pf/base/dlb_resource.c
index 2f8ffec..35b66e2 100644
--- a/drivers/event/dlb/pf/base/dlb_resource.c
+++ b/drivers/event/dlb/pf/base/dlb_resource.c
@@ -4214,3 +4214,389 @@ void dlb_hw_disable_vf_to_pf_isr_pend_err(struct dlb_hw *hw)
 
 	DLB_CSR_WR(hw, DLB_SYS_SYS_ALARM_INT_ENABLE, r0.val);
 }
+
+static void dlb_configure_ldb_queue(struct dlb_hw *hw,
+				    struct dlb_domain *domain,
+				    struct dlb_ldb_queue *queue,
+				    struct dlb_create_ldb_queue_args *args)
+{
+	union dlb_sys_ldb_vasqid_v r0 = { {0} };
+	union dlb_lsp_qid_ldb_infl_lim r1 = { {0} };
+	union dlb_lsp_qid_aqed_active_lim r2 = { {0} };
+	union dlb_aqed_pipe_fl_lim r3 = { {0} };
+	union dlb_aqed_pipe_fl_base r4 = { {0} };
+	union dlb_chp_ord_qid_sn_map r7 = { {0} };
+	union dlb_sys_ldb_qid_cfg_v r10 = { {0} };
+	union dlb_sys_ldb_qid_v r11 = { {0} };
+	union dlb_aqed_pipe_fl_push_ptr r5 = { {0} };
+	union dlb_aqed_pipe_fl_pop_ptr r6 = { {0} };
+	union dlb_aqed_pipe_qid_fid_lim r8 = { {0} };
+	union dlb_ro_pipe_qid2grpslt r9 = { {0} };
+	struct dlb_sn_group *sn_group;
+	unsigned int offs;
+
+	/* QID write permissions are turned on when the domain is started */
+	r0.field.vasqid_v = 0;
+
+	offs = domain->id * DLB_MAX_NUM_LDB_QUEUES + queue->id;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_VASQID_V(offs), r0.val);
+
+	/*
+	 * Unordered QIDs get 4K inflights, ordered get as many as the number
+	 * of sequence numbers.
+	 */
+	r1.field.limit = args->num_qid_inflights;
+
+	DLB_CSR_WR(hw, DLB_LSP_QID_LDB_INFL_LIM(queue->id), r1.val);
+
+	r2.field.limit = queue->aqed_freelist.bound -
+			 queue->aqed_freelist.base;
+
+	if (r2.field.limit > DLB_MAX_NUM_AQOS_ENTRIES)
+		r2.field.limit = DLB_MAX_NUM_AQOS_ENTRIES;
+
+	/* AQOS */
+	DLB_CSR_WR(hw, DLB_LSP_QID_AQED_ACTIVE_LIM(queue->id), r2.val);
+
+	r3.field.freelist_disable = 0;
+	r3.field.limit = queue->aqed_freelist.bound - 1;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_LIM(queue->id), r3.val);
+
+	r4.field.base = queue->aqed_freelist.base;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_BASE(queue->id), r4.val);
+
+	r5.field.push_ptr = r4.field.base;
+	r5.field.generation = 1;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_PUSH_PTR(queue->id), r5.val);
+
+	r6.field.pop_ptr = r4.field.base;
+	r6.field.generation = 0;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_FL_POP_PTR(queue->id), r6.val);
+
+	/* Configure SNs */
+	sn_group = &hw->rsrcs.sn_groups[queue->sn_group];
+	r7.field.mode = sn_group->mode;
+	r7.field.slot = queue->sn_slot;
+	r7.field.grp = sn_group->id;
+
+	DLB_CSR_WR(hw, DLB_CHP_ORD_QID_SN_MAP(queue->id), r7.val);
+
+	/*
+	 * This register limits the number of inflight flows a queue can have
+	 * at one time. It has an upper bound of 2048, but can be
+	 * over-subscribed. 512 is chosen so that a single queue doesn't use
+	 * the entire atomic storage, but can use a substantial portion if
+	 * needed.
+	 */
+	r8.field.qid_fid_limit = 512;
+
+	DLB_CSR_WR(hw, DLB_AQED_PIPE_QID_FID_LIM(queue->id), r8.val);
+
+	r9.field.group = sn_group->id;
+	r9.field.slot = queue->sn_slot;
+
+	DLB_CSR_WR(hw, DLB_RO_PIPE_QID2GRPSLT(queue->id), r9.val);
+
+	r10.field.sn_cfg_v = (args->num_sequence_numbers != 0);
+	r10.field.fid_cfg_v = (args->num_atomic_inflights != 0);
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_QID_CFG_V(queue->id), r10.val);
+
+	r11.field.qid_v = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_LDB_QID_V(queue->id), r11.val);
+}
+
+int dlb_get_group_sequence_numbers(struct dlb_hw *hw, unsigned int group_id)
+{
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
+}
+
+int dlb_get_group_sequence_number_occupancy(struct dlb_hw *hw,
+					    unsigned int group_id)
+{
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	return dlb_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
+}
+
+static void dlb_log_set_group_sequence_numbers(struct dlb_hw *hw,
+					       unsigned int group_id,
+					       unsigned long val)
+{
+	DLB_HW_INFO(hw, "DLB set group sequence numbers:\n");
+	DLB_HW_INFO(hw, "\tGroup ID: %u\n", group_id);
+	DLB_HW_INFO(hw, "\tValue: %lu\n", val);
+}
+
+int dlb_set_group_sequence_numbers(struct dlb_hw *hw,
+				   unsigned int group_id,
+				   unsigned long val)
+{
+	u32 valid_allocations[6] = {32, 64, 128, 256, 512, 1024};
+	union dlb_ro_pipe_grp_sn_mode r0 = { {0} };
+	struct dlb_sn_group *group;
+	int mode;
+
+	if (group_id >= DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
+		return -EINVAL;
+
+	group = &hw->rsrcs.sn_groups[group_id];
+
+	/* Once the first load-balanced queue using an SN group is configured,
+	 * the group cannot be changed.
+	 */
+	if (group->slot_use_bitmap != 0)
+		return -EPERM;
+
+	for (mode = 0; mode < DLB_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
+		if (val == valid_allocations[mode])
+			break;
+
+	if (mode == DLB_MAX_NUM_SEQUENCE_NUMBER_MODES)
+		return -EINVAL;
+
+	group->mode = mode;
+	group->sequence_numbers_per_queue = val;
+
+	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
+	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
+	r0.field.sn_mode_2 = hw->rsrcs.sn_groups[2].mode;
+	r0.field.sn_mode_3 = hw->rsrcs.sn_groups[3].mode;
+
+	DLB_CSR_WR(hw, DLB_RO_PIPE_GRP_SN_MODE, r0.val);
+
+	dlb_log_set_group_sequence_numbers(hw, group_id, val);
+
+	return 0;
+}
+
+static int
+dlb_ldb_queue_attach_to_sn_group(struct dlb_hw *hw,
+				 struct dlb_ldb_queue *queue,
+				 struct dlb_create_ldb_queue_args *args)
+{
+	int slot = -1;
+	int i;
+
+	queue->sn_cfg_valid = false;
+
+	if (args->num_sequence_numbers == 0)
+		return 0;
+
+	for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		struct dlb_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+		if (group->sequence_numbers_per_queue ==
+		    args->num_sequence_numbers &&
+		    !dlb_sn_group_full(group)) {
+			slot = dlb_sn_group_alloc_slot(group);
+			if (slot >= 0)
+				break;
+		}
+	}
+
+	if (slot == -1) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no sequence number slots available\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue->sn_cfg_valid = true;
+	queue->sn_group = i;
+	queue->sn_slot = slot;
+	return 0;
+}
+
+static int
+dlb_ldb_queue_attach_resources(struct dlb_hw *hw,
+			       struct dlb_domain *domain,
+			       struct dlb_ldb_queue *queue,
+			       struct dlb_create_ldb_queue_args *args)
+{
+	int ret;
+
+	ret = dlb_ldb_queue_attach_to_sn_group(hw, queue, args);
+	if (ret)
+		return ret;
+
+	/* Attach QID inflights */
+	queue->num_qid_inflights = args->num_qid_inflights;
+
+	/* Attach atomic inflights */
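+	/* The carve-out below assigns this queue a private range of the
+	 * domain's AQED freelist; the offset advances so the next queue's
+	 * range is disjoint.
+	 */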
+	queue->aqed_freelist.base = domain->aqed_freelist.base +
+				    domain->aqed_freelist.offset;
+	queue->aqed_freelist.bound = queue->aqed_freelist.base +
+				     args->num_atomic_inflights;
+	domain->aqed_freelist.offset += args->num_atomic_inflights;
+
+	return 0;
+}
+
+static int
+dlb_verify_create_ldb_queue_args(struct dlb_hw *hw,
+				 u32 domain_id,
+				 struct dlb_create_ldb_queue_args *args,
+				 struct dlb_cmd_response *resp)
+{
+	struct dlb_freelist *aqed_freelist;
+	struct dlb_domain *domain;
+	int i;
+
+	domain = dlb_get_domain_from_id(hw, domain_id);
+
+	if (!domain) {
+		resp->status = DLB_ST_INVALID_DOMAIN_ID;
+		return -1;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB_ST_DOMAIN_NOT_CONFIGURED;
+		return -1;
+	}
+
+	if (domain->started) {
+		resp->status = DLB_ST_DOMAIN_STARTED;
+		return -1;
+	}
+
+	if (dlb_list_empty(&domain->avail_ldb_queues)) {
+		resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
+		return -1;
+	}
+
+	if (args->num_sequence_numbers) {
+		for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+			struct dlb_sn_group *group = &hw->rsrcs.sn_groups[i];
+
+			if (group->sequence_numbers_per_queue ==
+			    args->num_sequence_numbers &&
+			    !dlb_sn_group_full(group))
+				break;
+		}
+
+		if (i == DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {
+			resp->status = DLB_ST_SEQUENCE_NUMBERS_UNAVAILABLE;
+			return -1;
+		}
+	}
+
+	if (args->num_qid_inflights > 4096) {
+		resp->status = DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -1;
+	}
+
+	/* Inflights must be <= number of sequence numbers if ordered */
+	if (args->num_sequence_numbers != 0 &&
+	    args->num_qid_inflights > args->num_sequence_numbers) {
+		resp->status = DLB_ST_INVALID_QID_INFLIGHT_ALLOCATION;
+		return -1;
+	}
+
+	aqed_freelist = &domain->aqed_freelist;
+
+	if (dlb_freelist_count(aqed_freelist) < args->num_atomic_inflights) {
+		resp->status = DLB_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+dlb_log_create_ldb_queue_args(struct dlb_hw *hw,
+			      u32 domain_id,
+			      struct dlb_create_ldb_queue_args *args)
+{
+	DLB_HW_INFO(hw, "DLB create load-balanced queue arguments:\n");
+	DLB_HW_INFO(hw, "\tDomain ID: %d\n",
+		    domain_id);
+	DLB_HW_INFO(hw, "\tNumber of sequence numbers: %d\n",
+		    args->num_sequence_numbers);
+	DLB_HW_INFO(hw, "\tNumber of QID inflights: %d\n",
+		    args->num_qid_inflights);
+	DLB_HW_INFO(hw, "\tNumber of ATM inflights: %d\n",
+		    args->num_atomic_inflights);
+}
+
+/**
+ * dlb_hw_create_ldb_queue() - Allocate and initialize a DLB LDB queue.
+ * @hw: Contains the current state of the DLB hardware.
+ * @domain_id: Domain ID.
+ * @args: User-provided arguments.
+ * @resp: Response to user.
+ *
+ * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
+ * satisfy a request, resp->status will be set accordingly.
+ */
+int dlb_hw_create_ldb_queue(struct dlb_hw *hw,
+			    u32 domain_id,
+			    struct dlb_create_ldb_queue_args *args,
+			    struct dlb_cmd_response *resp)
+{
+	struct dlb_ldb_queue *queue;
+	struct dlb_domain *domain;
+	int ret;
+
+	dlb_log_create_ldb_queue_args(hw, domain_id, args);
+
+	/* Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	if (dlb_verify_create_ldb_queue_args(hw, domain_id, args, resp))
+		return -EINVAL;
+
+	domain = dlb_get_domain_from_id(hw, domain_id);
+	if (!domain) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: domain not found\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	queue = DLB_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));
+
+	/* Verification guarantees at least one available queue, so this
+	 * check should never fail.
+	 */
+	if (!queue) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: no available ldb queues\n",
+			   __func__, __LINE__);
+		return -EFAULT;
+	}
+
+	ret = dlb_ldb_queue_attach_resources(hw, domain, queue, args);
+	if (ret < 0) {
+		DLB_HW_ERR(hw,
+			   "[%s():%d] Internal error: failed to attach the ldb queue resources\n",
+			   __func__, __LINE__);
+		return ret;
+	}
+
+	dlb_configure_ldb_queue(hw, domain, queue, args);
+
+	queue->num_mappings = 0;
+
+	queue->configured = true;
+
+	/* Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list.
+	 */
+	dlb_list_del(&domain->avail_ldb_queues, &queue->domain_list);
+
+	dlb_list_add(&domain->used_ldb_queues, &queue->domain_list);
+
+	resp->status = 0;
+	resp->id = queue->id;
+
+	return 0;
+}
diff --git a/drivers/event/dlb/pf/dlb_pf.c b/drivers/event/dlb/pf/dlb_pf.c
index 57a150c..fffb88b 100644
--- a/drivers/event/dlb/pf/dlb_pf.c
+++ b/drivers/event/dlb/pf/dlb_pf.c
@@ -198,6 +198,83 @@ dlb_pf_get_cq_poll_mode(struct dlb_hw_dev *handle,
 	return 0;
 }
 
+static int
+dlb_pf_ldb_queue_create(struct dlb_hw_dev *handle,
+			struct dlb_create_ldb_queue_args *cfg)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+	struct dlb_cmd_response response = {0};
+	int ret;
+
+	DLB_INFO(dev->dlb_device, "Entering %s()\n", __func__);
+
+	ret = dlb_hw_create_ldb_queue(&dlb_dev->hw,
+				      handle->domain_id,
+				      cfg,
+				      &response);
+
+	*(struct dlb_cmd_response *)cfg->response = response;
+
+	DLB_INFO(dev->dlb_device, "Exiting %s() with ret=%d\n", __func__, ret);
+
+	return ret;
+}
+
+static int
+dlb_pf_get_sn_allocation(struct dlb_hw_dev *handle,
+			 struct dlb_get_sn_allocation_args *args)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+	struct dlb_cmd_response response = {0};
+	int ret;
+
+	ret = dlb_get_group_sequence_numbers(&dlb_dev->hw, args->group);
+
+	response.id = ret;
+	response.status = 0;
+
+	*(struct dlb_cmd_response *)args->response = response;
+
+	return ret;
+}
+
+static int
+dlb_pf_set_sn_allocation(struct dlb_hw_dev *handle,
+			 struct dlb_set_sn_allocation_args *args)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+	struct dlb_cmd_response response = {0};
+	int ret;
+
+	ret = dlb_set_group_sequence_numbers(&dlb_dev->hw, args->group,
+					     args->num);
+
+	response.status = 0;
+
+	*(struct dlb_cmd_response *)args->response = response;
+
+	return ret;
+}
+
+static int
+dlb_pf_get_sn_occupancy(struct dlb_hw_dev *handle,
+			struct dlb_get_sn_occupancy_args *args)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+	struct dlb_cmd_response response = {0};
+	int ret;
+
+	ret = dlb_get_group_sequence_number_occupancy(&dlb_dev->hw,
+						      args->group);
+
+	response.id = ret;
+	response.status = 0;
+
+	*(struct dlb_cmd_response *)args->response = response;
+
+	return ret;
+}
+
 static void
 dlb_pf_iface_fn_ptrs_init(void)
 {
@@ -209,7 +286,11 @@ dlb_pf_iface_fn_ptrs_init(void)
 	dlb_iface_sched_domain_create = dlb_pf_sched_domain_create;
 	dlb_iface_ldb_credit_pool_create = dlb_pf_ldb_credit_pool_create;
 	dlb_iface_dir_credit_pool_create = dlb_pf_dir_credit_pool_create;
+	dlb_iface_ldb_queue_create = dlb_pf_ldb_queue_create;
 	dlb_iface_get_cq_poll_mode = dlb_pf_get_cq_poll_mode;
+	dlb_iface_get_sn_allocation = dlb_pf_get_sn_allocation;
+	dlb_iface_set_sn_allocation = dlb_pf_set_sn_allocation;
+	dlb_iface_get_sn_occupancy = dlb_pf_get_sn_occupancy;
 }
 
 /* PCI DEV HOOKS */
-- 
2.6.4