* [PATCH v1 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-03-29 1:48 ` Wenjun Wu
2022-03-29 1:48 ` [PATCH v1 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
` (14 subsequent siblings)
15 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock, so calling
it from this context results in a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid, which does not take the lock, to
solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v1 2/9] net/base/ice: support priority configuration of the exact node
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
This patch adds support for configuring the priority of an
exact node in the scheduler tree.
The new function acquires the scheduler lock itself, so callers
do not need to hold it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v1 3/9] net/ice/base: support queue BW allocation configuration
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
This patch adds bandwidth (BW) allocation support for queue
scheduling nodes in order to support WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v1 4/9] net/ice: support queue bandwidth limit
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Ting Xu, Wenjun Wu
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for PF only. Support adding shaper
profiles and queue nodes. Only maximum bandwidth is supported in
profiles. Profiles can be assigned to target queues. Only TC0 is
valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..65eed0acbd
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
* [PATCH v1 5/9] net/ice: support queue group bandwidth limit
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
To set up an exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue
group bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 240 ++++++++++++++++++++++++++++++++---
2 files changed, 233 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 65eed0acbd..33f3aae5be 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,177 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, buf_size);
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ }
+
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
* [PATCH v1 6/9] net/ice: support queue priority configuration
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-03-29 1:48 ` [PATCH v1 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-03-29 1:48 ` Wenjun Wu
2022-03-29 1:48 ` [PATCH v1 7/9] net/ice: support queue weight configuration Wenjun Wu
` (9 subsequent siblings)
15 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
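The rte_tm convention (0 is the highest priority) is the inverse of what the
scheduler element expects, which is why the driver programs 7 - priority. A
minimal sketch of that mapping (helper name invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* rte_tm treats 0 as the highest priority; the ice scheduler element treats
 * 7 as the highest, so the value is inverted before being programmed. */
static uint8_t tm_prio_to_hw_prio(uint8_t tm_prio)
{
	assert(tm_prio < 8);	/* mirrors the ice_node_param_check() range check */
	return (uint8_t)(7 - tm_prio);
}
```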
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 33f3aae5be..9552158b53 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -780,6 +781,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -795,6 +797,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", qid);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v1 7/9] net/ice: support queue weight configuration
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-03-29 1:48 ` [PATCH v1 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-03-29 1:48 ` Wenjun Wu
2022-03-29 1:48 ` [PATCH v1 8/9] net/ice: support queue group priority configuration Wenjun Wu
` (8 subsequent siblings)
15 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 9552158b53..36981f78dd 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -805,6 +805,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->id);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v1 8/9] net/ice: support queue group priority configuration
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-03-29 1:48 ` [PATCH v1 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-03-29 1:48 ` Wenjun Wu
2022-03-29 1:48 ` [PATCH v1 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
` (7 subsequent siblings)
15 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 36981f78dd..fdbb415eda 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -765,6 +765,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->id);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v1 9/9] net/ice: add warning log for unsupported configuration
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-03-29 1:48 ` [PATCH v1 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-03-29 1:48 ` Wenjun Wu
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 subsequent siblings)
15 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 1:48 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang; +Cc: Wenjun Wu
Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds warning logs for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index fdbb415eda..d214b6bc91 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -515,6 +515,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
+ if (tm_node->priority != 0 && (level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP))
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
/* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
--
2.25.1
* [PATCH v2 0/9] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-03-29 1:48 ` [PATCH v1 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-03-29 2:02 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (9 more replies)
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 subsequent siblings)
15 siblings, 10 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:02 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and priority
can be configured at both the queue and queue group levels, while
weight can only be configured at the queue level.
v2: fix code style issue.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/base/ice: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
drivers/net/ice/base/ice_sched.c | 89 +++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
6 files changed, 1012 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v2 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
` (8 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also requests the scheduler lock.
This causes a deadlock.
This patch replaces function ice_sched_get_node with
function ice_sched_find_node_by_teid to solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v2 2/9] net/base/ice: support priority configuration of the exact node
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds priority configuration support for the exact node
in the scheduler tree.
The new function acquires the scheduler lock internally, so the
caller does not need to hold it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v2 3/9] net/ice/base: support queue BW allocation configuration
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (6 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds BW allocation support for queue scheduling nodes
to support WFQ at the queue level.
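Under WFQ, each sibling's share of the parent's bandwidth is proportional to
its bw_alloc weight. The hardware performs this arbitration itself; the
sketch below (invented helper, percentages only) just illustrates the
proportionality:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: the share (in percent) a queue receives under WFQ,
 * given its own weight and the weights of all sibling queues. */
static uint32_t wfq_share_pct(uint32_t weight, const uint32_t *sibling_w,
			      int n)
{
	uint64_t total = 0;

	for (int i = 0; i < n; i++)
		total += sibling_w[i];
	return (uint32_t)((uint64_t)weight * 100 / total);
}
```

For siblings weighted {100, 100, 200}, the heavy queue gets half the parent's
bandwidth and each light queue gets a quarter.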
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v2 4/9] net/ice: support queue bandwidth limit
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 5/9] net/ice: support queue group " Wenjun Wu
` (5 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for PF only. Support adding profiles and queue
nodes. Only max bandwidth is supported in profiles. Profiles can be
assigned to target queues. Only TC0 is valid.
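The shaper rates handed in through rte_tm are in bytes per second, while the
scheduler firmware expects Kbps; the diff converts with
peak / 1000 * BITS_PER_BYTE. Dividing before multiplying keeps the 64-bit
intermediate from overflowing at extreme rates. A self-contained sketch of
the conversion:

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Convert an rte_tm peak rate (bytes per second) to the Kbps value the
 * scheduler expects, dividing first to avoid overflow (helper name
 * invented for illustration). */
static uint32_t bps_to_kbps(uint64_t peak_bps)
{
	return (uint32_t)(peak_bps / 1000 * BITS_PER_BYTE);
}
```

For example, 125000 B/s (1 Mbit/s) converts to 1000 Kbps.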
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..65eed0acbd
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* check all the unsupported parameters */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* shared shapers are not supported */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID already exists";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID does not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node already exists */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile does not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if it does not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* only one root node is allowed */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent does not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v2 5/9] net/ice: support queue group bandwidth limit
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 6/9] net/ice: support queue priority configuration Wenjun Wu
` (4 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To set up the exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 65eed0acbd..83dbc09505 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, buf_size);
+ if (buf == NULL)
+ return ICE_ERR_NO_MEMORY;
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
 /* convert from bytes per second to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v2 6/9] net/ice: support queue priority configuration
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 7/9] net/ice: support queue weight configuration Wenjun Wu
` (3 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
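The mapping between the rte_tm convention (0 = highest) and the hardware sibling-priority field can be sketched with a small hypothetical helper mirroring the `priority = 7 - tm_node->priority` inversion seen in the diff below (the helper name is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: rte_tm treats 0 as the highest priority, while the
 * HW sibling-priority field treats 7 as the highest, hence the inversion. */
static uint8_t tm_to_hw_priority(uint32_t tm_priority)
{
	/* valid rte_tm priorities here are 0..7 */
	return (uint8_t)(7 - tm_priority);
}
```

So an application requesting rte_tm priority 0 ends up with HW priority 7, the strongest scheduling preference.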
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 83dbc09505..5ebf9abeb9 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v2 7/9] net/ice: support queue weight configuration
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 8/9] net/ice: support queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 5ebf9abeb9..070cb7db41 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v2 8/9] net/ice: support queue group priority configuration
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 2:03 ` [PATCH v2 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 070cb7db41..8a71bfa599 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v2 9/9] net/ice: add warning log for unsupported configuration
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-03-29 2:03 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 2:03 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is supported at level 3 (queue group) and
level 4 (queue). Weight configuration is supported at level 4 only.
This patch adds warning logs for unsupported priority
and weight configurations.
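The intended check can be restated as a small self-contained sketch (the enum names are hypothetical stand-ins for the driver's `ICE_TM_NODE_TYPE_*` values). Note that the two level inequalities must be combined with `&&`; combining them with `||` would make the condition true for every level:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical level enum mirroring ICE_TM_NODE_TYPE_* */
enum tm_level { LEVEL_PORT, LEVEL_TC, LEVEL_QGROUP, LEVEL_QUEUE };

/* Priority is only honoured on queue and queue-group nodes, so a warning
 * should fire for a non-default priority on any other level. */
static bool priority_unsupported(unsigned int priority, enum tm_level level)
{
	return priority != 0 &&
	       level != LEVEL_QUEUE && level != LEVEL_QGROUP;
}
```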
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 8a71bfa599..9fa959b4a0 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -515,6 +515,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
+ if (tm_node->priority != 0 && (level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP))
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
/* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
--
2.25.1
* [PATCH v3 0/9] Enable ETS-based TX QoS on PF
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-03-29 2:03 ` [PATCH v2 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (9 more replies)
9 siblings, 10 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and
priority can be configured at both the queue and queue group
levels, while weight can be configured at the queue level only.
v2: fix code style issues.
v3: fix an uninitialized variable issue.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/base/ice: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
drivers/net/ice/base/ice_sched.c | 89 +++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
6 files changed, 1012 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v3 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
` (8 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type must be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock,
which results in a deadlock.
This patch replaces ice_sched_get_node with
ice_sched_find_node_by_teid, which does not take the lock,
to solve the problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v3 2/9] net/base/ice: support priority configuration of the exact node
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds support for configuring the priority of a
specific node in the scheduler tree.
The new function acquires the scheduler lock internally, so
the caller does not need any additional locking.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v3 3/9] net/ice/base: support queue BW allocation configuration
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (6 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds bandwidth (BW) allocation support for queue
scheduling nodes to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v3 4/9] net/ice: support queue bandwidth limit
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 5/9] net/ice: support queue group " Wenjun Wu
` (5 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for the PF only. Adding shaper profiles and
queue nodes is supported. Only maximum bandwidth is supported in
profiles, and profiles can be assigned to target queues. Only TC0 is
valid.
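Shaper rates arrive from rte_tm in bytes per second, while the scheduler firmware expects Kbps; the commit path converts with `peak = peak / 1000 * BITS_PER_BYTE`. A self-contained sketch of that conversion (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Mirrors the conversion in the commit path: rte_tm shaper rates are in
 * bytes per second, the firmware expects Kbps. Dividing by 1000 before
 * multiplying keeps the intermediate value from overflowing. */
static uint32_t bytes_per_sec_to_kbps(uint64_t rate)
{
	return (uint32_t)(rate / 1000 * BITS_PER_BYTE);
}
```

For example, a 1 Gbit/s shaper is passed in as 125000000 bytes/s and comes out as 1000000 Kbps.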
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..65eed0acbd
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+	/* check if the node already exists */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+	/* root node if it has no parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+			/* Convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
* [PATCH v3 5/9] net/ice: support queue group bandwidth limit
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 6/9] net/ice: support queue priority configuration Wenjun Wu
` (4 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To attach queues to the desired queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 65eed0acbd..83dbc09505 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+	buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, buf_size);
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
 /* Convert from bytes per second to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+		/* Convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
* [PATCH v3 6/9] net/ice: support queue priority configuration
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 7/9] net/ice: support queue weight configuration Wenjun Wu
` (3 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 83dbc09505..5ebf9abeb9 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
 /* Convert from bytes per second to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v3 7/9] net/ice: support queue weight configuration
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 8/9] net/ice: support queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 5ebf9abeb9..070cb7db41 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v3 8/9] net/ice: support queue group priority configuration
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 3:06 ` [PATCH v3 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 070cb7db41..8a71bfa599 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v3 9/9] net/ice: add warning log for unsupported configuration
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-03-29 3:06 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 3:06 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds a warning log for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 8a71bfa599..8ae0b3930c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && (level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP))
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v4 0/9] Enable ETS-based TX QoS on PF
2022-03-29 3:06 ` [PATCH v3 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-03-29 3:06 ` [PATCH v3 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (9 more replies)
9 siblings, 10 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on the PF. Bandwidth and
priority can be configured at both the queue and queue group levels,
while weight can only be configured at the queue level.
v2: fix code style issue.
v3: fix uninitialization issue
v4: fix logical issue
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/base/ice: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
drivers/net/ice/base/ice_sched.c | 89 +++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
6 files changed, 1012 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v4 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 2/9] net/base/ice: support priority configuration of the exact node Wenjun Wu
` (8 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also requests the scheduler lock,
which causes a deadlock.
This patch replaces ice_sched_get_node with
ice_sched_find_node_by_teid to solve the problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v4 2/9] net/base/ice: support priority configuration of the exact node
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds support for configuring the priority of a
specific node in the scheduler tree.
The new function acquires the scheduler lock itself, so
callers need no additional locking.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v4 3/9] net/ice/base: support queue BW allocation configuration
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds bandwidth (BW) allocation support for queue
scheduling nodes to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v4 4/9] net/ice: support queue bandwidth limit
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for the PF only. It supports adding shaper
profiles and queue nodes; only maximum bandwidth is supported in
profiles, and profiles can be assigned to target queues. Only TC0
is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..65eed0acbd
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
* [PATCH v4 5/9] net/ice: support queue group bandwidth limit
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To set up a specific queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 65eed0acbd..83dbc09505 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, buf_size);
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v4 6/9] net/ice: support queue priority configuration
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-03-29 4:56 ` [PATCH v4 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 7/9] net/ice: support queue weight configuration Wenjun Wu
` (3 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 83dbc09505..5ebf9abeb9 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v4 7/9] net/ice: support queue weight configuration
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-03-29 4:56 ` [PATCH v4 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 8/9] net/ice: support queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 5ebf9abeb9..070cb7db41 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v4 8/9] net/ice: support queue group priority configuration
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-03-29 4:56 ` [PATCH v4 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 4:56 ` [PATCH v4 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 070cb7db41..8a71bfa599 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v4 9/9] net/ice: add warning log for unsupported configuration
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-03-29 4:56 ` [PATCH v4 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-03-29 4:56 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 4:56 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled in levels 3 and 4.
Weight configuration is enabled in level 4 only.
This patch adds warning logs for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 8a71bfa599..3ae60e5594 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v5 0/9] Enable ETS-based TX QoS on PF
2022-03-29 4:56 ` [PATCH v4 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-03-29 4:56 ` [PATCH v4 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (8 more replies)
9 siblings, 9 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on the PF. Bandwidth and
priority can be configured at both the queue and queue group
levels; weight can be configured at the queue level only.
v2: fix code style issue.
v3: fix uninitialization issue
v4: fix logical issue
v5: fix CI testing issue. Add explicit cast.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
drivers/net/ice/base/ice_sched.c | 89 +++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
6 files changed, 1012 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v5 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 2/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (7 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock, which
causes a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid to solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v5 2/9] net/ice/base: support priority configuration of the exact node
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (6 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds support for configuring the priority of a
specific node in the scheduler tree.
The new function acquires the scheduler lock itself, so the
caller does not need to hold it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v5 3/9] net/ice/base: support queue BW allocation configuration
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 2/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (5 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds bandwidth (BW) allocation support for queue
scheduling nodes to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v5 4/9] net/ice: support queue bandwidth limit
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 5/9] net/ice: support queue group " Wenjun Wu
` (4 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for the PF only. Adding shaper profiles
and queue nodes is supported. Only max bandwidth is supported in
profiles. Profiles can be assigned to target queues. Only TC0 is
valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..383af88981
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v5 5/9] net/ice: support queue group bandwidth limit
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 6/9] net/ice: support queue priority configuration Wenjun Wu
` (3 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To assign queues to the exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 383af88981..d70d077286 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v5 6/9] net/ice: support queue priority configuration
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 7/9] net/ice: support queue weight configuration Wenjun Wu
` (2 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..91e420d653 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
 /* Convert from bytes per second to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->id);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v5 7/9] net/ice: support queue weight configuration
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 8/9] net/ice: support queue group priority configuration Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 91e420d653..4d7bb9102c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->id);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v5 8/9] net/ice: support queue group priority configuration
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
2022-03-29 9:38 ` [PATCH v5 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d7bb9102c..17f369994b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->id);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v5 9/9] net/ice: add warning log for unsupported configuration
2022-03-29 9:38 ` [PATCH v5 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-03-29 9:38 ` [PATCH v5 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-03-29 9:38 ` Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-03-29 9:38 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds a warning log for priority and weight
configuration at unsupported levels. Priority configuration is
supported only in level 3 and level 4, and weight configuration
only in level 4.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 17f369994b..3e98c2f01e 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v6 00/10] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (9 preceding siblings ...)
2022-03-29 2:02 ` [PATCH v2 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 01/10] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (9 more replies)
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 subsequent siblings)
15 siblings, 10 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch set enables ETS-based TX QoS on the PF. Bandwidth and
priority can be configured at both the queue and queue group levels,
while weight can be configured at the queue level only.
v2: fix code style issue.
v3: fix uninitialization issue.
v4: fix logical issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (9):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
doc: add release notes for 22.07
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/net/ice/base/ice_sched.c | 89 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1016 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v6 01/10] net/ice/base: fix dead lock issue when getting node from ID type
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 02/10] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (8 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock, which
results in a deadlock.
This patch replaces ice_sched_get_node with
ice_sched_find_node_by_teid, which searches the tree without
taking the lock, to solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v6 02/10] net/ice/base: support priority configuration of the exact node
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 01/10] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 03/10] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch adds support for configuring the priority of a specific
node in the scheduler tree.
The new function acquires the scheduler lock internally, so the
caller does not need to hold it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v6 03/10] net/ice/base: support queue BW allocation configuration
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 01/10] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 02/10] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 04/10] net/ice: support queue bandwidth limit Wenjun Wu
` (6 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch adds bandwidth allocation support for queue scheduling
nodes to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
* [PATCH v6 04/10] net/ice: support queue bandwidth limit
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 03/10] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 05/10] net/ice: support queue group " Wenjun Wu
` (5 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for the PF only. Adding shaper profiles and
queue nodes is supported. Only maximum bandwidth is supported in
profiles, and profiles can be assigned to target queues. Only TC0 is
valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
drivers/net/ice/ice_ethdev.c | 19 ++
drivers/net/ice/ice_ethdev.h | 48 +++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
4 files changed, 667 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..383af88981
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* check all the unsupported parameters */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID already exists";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID does not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v6 05/10] net/ice: support queue group bandwidth limit
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 04/10] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 06/10] net/ice: support queue priority configuration Wenjun Wu
` (4 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
To set up the exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
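The tree surgery this patch performs after the firmware queue move can be sketched with a minimal parent/children model: the queue node is dropped from its old parent's children array and appended to the destination queue group, and its parent pointer is updated (this mirrors the bookkeeping in the new ice_move_recfg_lan_txq helper; the names and the fixed-size array below are illustrative, not driver API).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_CHILDREN 8

struct sched_node {
	struct sched_node *parent;
	struct sched_node *children[MAX_CHILDREN];
	uint16_t num_children;
};

/* Move a queue node under a new queue group node. As in the driver, the
 * source parent's last child slot is cleared; a full implementation
 * would locate the queue's own slot first. */
static int move_node(struct sched_node *queue, struct sched_node *dst)
{
	struct sched_node *src = queue->parent;

	if (src == NULL || src->num_children == 0 ||
	    dst->num_children >= MAX_CHILDREN)
		return -1;
	src->num_children--;
	src->children[src->num_children] = NULL;
	dst->children[dst->num_children++] = queue;
	queue->parent = dst;
	return 0;
}
```

In the driver the firmware move (ice_aq_move_recfg_lan_txq) happens first; the software tree is only patched up once the hardware reports the queue as moved.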
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 383af88981..d70d077286 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
* [PATCH v6 06/10] net/ice: support queue priority configuration
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 05/10] net/ice: support queue group " Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 07/10] net/ice: support queue weight configuration Wenjun Wu
` (3 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
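Because rte_tm treats 0 as the highest priority while the ice scheduler element uses the opposite ordering, the commit path flips the value (`priority = 7 - tm_node->priority` in ice_hierarchy_commit). A sketch of that mapping, assuming the 8-level range enforced by ice_node_param_check:

```c
#include <stdint.h>

/* Convert an rte_tm priority (0 = highest, 7 = lowest) into the value
 * handed to the ice scheduler, which ranks 7 as highest. The mask is a
 * defensive guard for this sketch; the driver validates priority < 8
 * before reaching this point. */
static uint8_t ice_tm_prio_to_hw(uint32_t tm_priority)
{
	return (uint8_t)(7 - (tm_priority & 0x7));
}
```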
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..91e420d653 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v6 07/10] net/ice: support queue weight configuration
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 06/10] net/ice: support queue priority configuration Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 08/10] net/ice: support queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch adds queue weight configuration support.
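Taken together with the previous patch, ice_node_param_check now accepts priority values below 8 and weights in [1, 200]. A hypothetical standalone helper capturing those bounds (not driver code):

```c
#include <stdint.h>

/* Parameter bounds after this series: priority in [0, 7] and
 * weight in [1, 200], matching the checks in ice_node_param_check. */
static int tm_params_ok(uint32_t priority, uint32_t weight)
{
	return priority < 8 && weight >= 1 && weight <= 200;
}
```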
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 91e420d653..4d7bb9102c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v6 08/10] net/ice: support queue group priority configuration
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 07/10] net/ice: support queue weight configuration Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 09/10] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 10/10] doc: add release notes for 22.07 Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d7bb9102c..17f369994b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v6 09/10] net/ice: add warning log for unsupported configuration
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 08/10] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
2022-04-08 5:38 ` [PATCH v6 10/10] doc: add release notes for 22.07 Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
Priority configuration is supported at levels 3 and 4.
Weight configuration is supported at level 4 only.
This patch adds a warning log for unsupported priority
and weight configurations.
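The levels referenced here are the five-level hierarchy this series builds (see enum ice_tm_node_type): 0 port, 1 TC, 2 VSI, 3 queue group, 4 queue. A sketch of the support rule the new warnings encode, with illustrative names:

```c
/* Hierarchy levels as built by this series; numeric values match the
 * level_id the rte_tm caller passes to node add. */
enum tm_level { TM_PORT, TM_TC, TM_VSI, TM_QGROUP, TM_QUEUE };

/* Priority takes effect only on queue groups and queues. */
static int level_supports_priority(enum tm_level lvl)
{
	return lvl == TM_QGROUP || lvl == TM_QUEUE;
}

/* Weight takes effect only on queues. */
static int level_supports_weight(enum tm_level lvl)
{
	return lvl == TM_QUEUE;
}
```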
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 17f369994b..3e98c2f01e 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v6 10/10] doc: add release notes for 22.07
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 09/10] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-04-08 5:38 ` Wenjun Wu
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-08 5:38 UTC (permalink / raw)
To: dev, qi.z.zhang, qiming.yang
Add support for ETS-based TX QoS on the PF.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..2ce3c99fb8 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Intel ice driver.**
+
+ * Added Tx QoS rate limitation and priority configuration support for queue and queue group.
+ * Added TX QoS queue weight configuration support.
Removed Items
-------------
--
2.25.1
* [PATCH v7 0/9] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (10 preceding siblings ...)
2022-04-08 5:38 ` [PATCH v6 00/10] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (9 more replies)
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (3 subsequent siblings)
15 siblings, 10 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on the PF. Bandwidth and
priority can be configured at both the queue and queue group levels,
and weight at the queue level only.
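Throughout the series, the bandwidth limits come from rte_tm shaper profiles in bytes per second, which the commit path converts into the Kbps units the firmware expects (`peak / 1000 * BITS_PER_BYTE`). A sketch of that conversion:

```c
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Convert an rte_tm shaper rate (bytes/second) into Kbps as done in
 * ice_hierarchy_commit before calling ice_cfg_q_bw_lmt. Dividing first
 * mirrors the driver and avoids 64-bit overflow for large rates. */
static uint32_t ice_rate_bps_to_kbps(uint64_t bytes_per_sec)
{
	return (uint32_t)(bytes_per_sec / 1000 * BITS_PER_BYTE);
}
```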
v2: fix code style issue.
v3: fix uninitialization issue.
v4: fix logical issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
v7: merge the release note with the previous patch.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support priority configuration of the exact node
net/ice/base: support queue BW allocation configuration
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/net/ice/base/ice_sched.c | 89 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1016 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v7 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 2/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (8 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock, which
results in a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid to solve this problem.
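The bug is the classic self-deadlock on a non-recursive lock: a caller that already holds the lock invokes a helper that tries to take it again. The model below illustrates the call pattern; the lock is a plain flag and re-acquisition returns an error instead of hanging, purely so the pattern can be exercised. All names are illustrative, not the driver's.

```c
#include <assert.h>
#include <stdbool.h>

struct port_info { bool sched_lock_held; };

static int acquire_lock(struct port_info *pi)
{
	if (pi->sched_lock_held)
		return -1; /* a real non-recursive lock would block forever */
	pi->sched_lock_held = true;
	return 0;
}

static void release_lock(struct port_info *pi)
{
	pi->sched_lock_held = false;
}

/* ice_sched_get_node-style helper: takes the lock itself */
static int get_node_locking(struct port_info *pi)
{
	if (acquire_lock(pi))
		return -1;
	release_lock(pi);
	return 0;
}

/* ice_sched_find_node_by_teid-style helper: expects the caller to
 * already hold the lock */
static int find_node_lockless(struct port_info *pi)
{
	(void)pi;
	return 0;
}

/* Caller holding the lock, as ice_sched_get_node_by_id_type does */
static int get_node_by_id_type(struct port_info *pi, bool use_lockless)
{
	int ret;

	acquire_lock(pi);
	ret = use_lockless ? find_node_lockless(pi) : get_node_locking(pi);
	release_lock(pi);
	return ret;
}
```

The fix is exactly the switch from the locking helper to the lock-free one while the outer function keeps ownership of the lock.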
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v7 2/9] net/ice/base: support priority configuration of the exact node
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds support for configuring the priority of an
exact node in the scheduler tree.
The new function takes the scheduler lock internally, so the
caller does not need any additional locking calls.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 21 +++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..c0f90b762b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,27 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_cfg_node_priority - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi, struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..e1dc6e18a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_node_priority(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v7 3/9] net/ice/base: support queue BW allocation configuration
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 2/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (6 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds bandwidth allocation support for queue
scheduling nodes to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index c0f90b762b..4b7fdb2f13 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_node_priority - config priority of node
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e1dc6e18a4..454a1570bb 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_node_priority(struct ice_port_info *pi,
struct ice_sched_node *node, u8 priority);
enum ice_status
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v7 4/9] net/ice: support queue bandwidth limit
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 3/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 5/9] net/ice: support queue group " Wenjun Wu
` (5 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for PF only. It supports adding shaper
profiles and queue nodes. Only max bandwidth is supported in
profiles, and profiles can be assigned to target queues. Only
TC0 is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 48 ++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
5 files changed, 671 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..2ce3c99fb8 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Intel ice driver.**
+
+ * Added Tx QoS rate limitation and priority configuration support for queue and queue group.
+ * Added TX QoS queue weight configuration support.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..383af88981
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v7 5/9] net/ice: support queue group bandwidth limit
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 6/9] net/ice: support queue priority configuration Wenjun Wu
` (4 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To set up the exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue
group bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 383af88981..d70d077286 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v7 6/9] net/ice: support queue priority configuration
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 7/9] net/ice: support queue weight configuration Wenjun Wu
` (3 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
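The inversion used later in the patch (`priority = 7 - tm_node->priority`) bridges the two numbering conventions. A minimal sketch of that mapping, under the assumption that the scheduler firmware treats larger values as higher priority (the function name here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* rte_tm convention: 0 is the highest priority, 7 the lowest.
 * The scheduler firmware is assumed to use the opposite ordering,
 * hence the 7 - x inversion before programming the queue node. */
static uint8_t tm_prio_to_hw(uint8_t tm_priority)
{
	return (uint8_t)(7 - tm_priority);
}
```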
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..91e420d653 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v7 7/9] net/ice: support queue weight configuration
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 8/9] net/ice: support queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
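The accepted weight range appears in the `ice_node_param_check` change below; as a sketch, the validation amounts to (function name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the ice_node_param_check update: WFQ weights outside
 * [1, 200] are rejected with RTE_TM_ERROR_TYPE_NODE_WEIGHT. */
static int weight_is_valid(uint32_t weight)
{
	return weight >= 1 && weight <= 200;
}
```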
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 91e420d653..4d7bb9102c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v7 8/9] net/ice: support queue group priority configuration
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-22 0:57 ` [PATCH v7 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-04-27 1:59 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Yang, Qiming
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d7bb9102c..17f369994b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_node_priority(hw->port_info, qgroup_sched_node, priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v7 9/9] net/ice: add warning log for unsupported configuration
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-04-22 0:57 ` Wenjun Wu
2022-04-27 1:59 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Yang, Qiming
9 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-22 0:57 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds a warning log for unsupported priority
and weight configurations.
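Using the node-type enum from this series as level numbers (port 0, TC 1, VSI 2, queue group 3, queue 4), the support matrix the warnings enforce can be sketched as follows (predicate names are illustrative, not driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Levels follow enum ice_tm_node_type in this series:
 * 0 port, 1 TC, 2 VSI, 3 queue group, 4 queue. */
static bool priority_supported(int level)
{
	return level == 3 || level == 4;	/* queue group or queue */
}

static bool weight_supported(int level)
{
	return level == 4;			/* queue only */
}
```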
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 17f369994b..3e98c2f01e 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* RE: [PATCH v7 0/9] Enable ETS-based TX QoS on PF
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (8 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-04-27 1:59 ` Yang, Qiming
9 siblings, 0 replies; 109+ messages in thread
From: Yang, Qiming @ 2022-04-27 1:59 UTC (permalink / raw)
To: Wu, Wenjun1, dev, Zhang, Qi Z
Hi,
> -----Original Message-----
> From: Wu, Wenjun1 <wenjun1.wu@intel.com>
> Sent: 2022年4月22日 8:58
> To: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: [PATCH v7 0/9] Enable ETS-based TX QoS on PF
>
> This patch set enables ETS-based TX QoS on PF. It is supported to configure
> bandwidth and priority in both queue and queue group level, and weight
> only in queue level.
>
> v2: fix code style issue.
> v3: fix uninitialization issue.
> v4: fix logical issue.
> v5: fix CI testing issue. Add explicit cast.
> v6: add release note.
> v7: merge the release note with the previous patch.
>
> Ting Xu (1):
> net/ice: support queue bandwidth limit
>
> Wenjun Wu (8):
> net/ice/base: fix dead lock issue when getting node from ID type
> net/ice/base: support priority configuration of the exact node
> net/ice/base: support queue BW allocation configuration
> net/ice: support queue group bandwidth limit
> net/ice: support queue priority configuration
> net/ice: support queue weight configuration
> net/ice: support queue group priority configuration
> net/ice: add warning log for unsupported configuration
>
> doc/guides/rel_notes/release_22_07.rst | 4 +
> drivers/net/ice/base/ice_sched.c | 89 ++-
> drivers/net/ice/base/ice_sched.h | 6 +
> drivers/net/ice/ice_ethdev.c | 19 +
> drivers/net/ice/ice_ethdev.h | 55 ++
> drivers/net/ice/ice_tm.c | 844 +++++++++++++++++++++++++
> drivers/net/ice/meson.build | 1 +
> 7 files changed, 1016 insertions(+), 2 deletions(-) create mode 100644
> drivers/net/ice/ice_tm.c
>
> --
> 2.25.1
Acked-by: Qiming Yang <qiming.yang@intel.com>
* [PATCH v8 0/9] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (11 preceding siblings ...)
2022-04-22 0:57 ` [PATCH v7 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (8 more replies)
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 subsequent siblings)
15 siblings, 9 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and priority
can be configured at both the queue and queue group level; weight can
be configured only at the queue level.
v2: fix code style issue.
v3: fix uninitialization issue.
v4: fix logical issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
v7: merge the release note with the previous patch.
v8: rework shared code patch.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support queue BW allocation configuration
net/ice/base: support priority configuration of the exact node
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/net/ice/base/ice_sched.c | 90 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 845 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1018 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v8 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 2/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock,
which causes a deadlock.
This patch replaces ice_sched_get_node with
ice_sched_find_node_by_teid to solve this problem.
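The fix works because a plain tree search takes no lock and is therefore safe to call from a context that already holds the scheduler lock. A minimal sketch of such a TEID lookup, using simplified hypothetical structures rather than the driver's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for an ice scheduler node (hypothetical layout). */
struct sched_node {
	uint32_t teid;
	struct sched_node *children[4];
	int num_children;
};

/* Depth-first search by TEID. It touches no lock, so it may be
 * called while the scheduler lock is already held. */
static struct sched_node *
find_node_by_teid(struct sched_node *start, uint32_t teid)
{
	int i;

	if (start == NULL)
		return NULL;
	if (start->teid == teid)
		return start;
	for (i = 0; i < start->num_children; i++) {
		struct sched_node *n =
			find_node_by_teid(start->children[i], teid);
		if (n != NULL)
			return n;
	}
	return NULL;
}

/* Tiny demo tree: root (teid 1) with one child (teid 7). */
static uint32_t demo_search(uint32_t teid)
{
	struct sched_node child = { .teid = 7, .num_children = 0 };
	struct sched_node root = { .teid = 1, .num_children = 1 };
	struct sched_node *n;

	root.children[0] = &child;
	n = find_node_by_teid(&root, teid);
	return n != NULL ? n->teid : 0;
}
```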
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v8 2/9] net/ice/base: support queue BW allocation configuration
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 3/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (6 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds BW allocation support for queue scheduling nodes
to enable WFQ at the queue level.
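For context, WFQ distributes the parent node's bandwidth in proportion to the configured weights. A back-of-the-envelope sketch of that share computation (illustrative math, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* A queue's WFQ share is its weight over the sum of sibling weights. */
static double wfq_share(uint32_t weight, const uint32_t *weights, int n)
{
	uint64_t total = 0;
	int i;

	for (i = 0; i < n; i++)
		total += weights[i];
	return total != 0 ? (double)weight / (double)total : 0.0;
}

/* Three queues with weights 1, 1, 2: the last one gets half the BW. */
static double demo_share(void)
{
	const uint32_t w[3] = { 1, 1, 2 };

	return wfq_share(w[2], w, 3);
}
```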
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..4ca15bf8f8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..184ad09e6a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v8 3/9] net/ice/base: support priority configuration of the exact node
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 2/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (5 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds priority configuration support for a specific
node in the scheduler tree.
The new function acquires the scheduler lock itself, so callers
do not need any additional locking.
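The `_lock` suffix follows a common pattern in ice_sched.c: a wrapper takes the scheduler lock and delegates to an unlocked helper. A minimal single-threaded model of that discipline, with a flag standing in for the real lock and illustrative names throughout:

```c
#include <assert.h>

/* Single-threaded model of the scheduler lock discipline: acquiring
 * twice is exactly the self-deadlock the wrapper/helper split avoids. */
static int sched_lock_held;
static int configured_priority = -1;

static void sched_lock_acquire(void)
{
	assert(!sched_lock_held);	/* re-acquisition would deadlock */
	sched_lock_held = 1;
}

static void sched_lock_release(void)
{
	sched_lock_held = 0;
}

/* Helper: the caller must already hold the lock. */
static int cfg_node_prio(int priority)
{
	assert(sched_lock_held);
	configured_priority = priority;
	return 0;
}

/* Public entry point: takes the lock itself, so callers need none. */
static int cfg_node_prio_lock(int priority)
{
	int status;

	sched_lock_acquire();
	status = cfg_node_prio(priority);
	sched_lock_release();
	return status;
}
```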
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 22 ++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 4ca15bf8f8..1b060d3567 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,28 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_cfg_sibl_node_prio_lock - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_sched_save_q_bw_alloc - save queue node's BW allocation information
* @q_ctx: queue context structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 184ad09e6a..c9f3f79eff 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -169,6 +169,9 @@ enum ice_status
ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
u8 tc);
enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
--
2.25.1
* [PATCH v8 4/9] net/ice: support queue bandwidth limit
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (2 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 3/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 5/9] net/ice: support queue group " Wenjun Wu
` (4 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable basic TM API for PF only. Support for adding profiles and queue
nodes. Only max bandwidth is supported in profiles. Profiles can be
assigned to target queues. Only TC0 is valid.
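The commit below converts rte_tm shaper rates, given in bytes per second, to the Kbps units the scheduler firmware expects. The arithmetic, factored out as a sketch (helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* rte_tm shaper rates are bytes/second; the ice scheduler takes Kbps,
 * hence the peak / 1000 * BITS_PER_BYTE step in ice_hierarchy_commit. */
static uint32_t bytes_per_sec_to_kbps(uint64_t rate)
{
	return (uint32_t)(rate / 1000 * BITS_PER_BYTE);
}
```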
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 48 ++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
5 files changed, 671 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..2ce3c99fb8 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Intel ice driver.**
+
+ * Added Tx QoS rate limitation and priority configuration support for queue and queue group.
+ * Added TX QoS queue weight configuration support.
Removed Items
-------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13adcf90ed..37897765c8 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2312,6 +2325,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2492,6 +2508,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ed580d438..0841e1866c 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -620,6 +665,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..383af88981
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v8 5/9] net/ice: support queue group bandwidth limit
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (3 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 6/9] net/ice: support queue priority configuration Wenjun Wu
` (3 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To set up the exact queue group, we need to reconfigure the topology
by deleting and then recreating queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 0841e1866c..6ddbcc9972 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 383af88981..d70d077286 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
* [PATCH v8 6/9] net/ice: support queue priority configuration
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (4 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 7/9] net/ice: support queue weight configuration Wenjun Wu
` (2 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..91e420d653 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v8 7/9] net/ice: support queue weight configuration
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (5 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 8/9] net/ice: support queue group priority configuration Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 91e420d653..4d7bb9102c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v8 8/9] net/ice: support queue group priority configuration
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (6 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
2022-04-28 2:59 ` [PATCH v8 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d7bb9102c..f604523ead 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
+ priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v8 9/9] net/ice: add warning log for unsupported configuration
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
` (7 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-04-28 2:59 ` Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 2:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled at level 3 and level 4.
Weight configuration is enabled at level 4.
This patch adds warning logs for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f604523ead..34a0bfcff8 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v9 0/9] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (12 preceding siblings ...)
2022-04-28 2:59 ` [PATCH v8 " Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (8 more replies)
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
15 siblings, 9 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and priority
can be configured at both the queue and queue group levels, while
weight can be configured only at the queue level.
v2: fix code style issue.
v3: fix uninitialization issue.
v4: fix logical issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
v7: merge the release note with the previous patch.
v8: rework shared code patch.
v9: rebase the code.
Ting Xu (1):
net/ice: support queue bandwidth limit
Wenjun Wu (8):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support queue BW allocation configuration
net/ice/base: support priority configuration of the exact node
net/ice: support queue group bandwidth limit
net/ice: support queue priority configuration
net/ice: support queue weight configuration
net/ice: support queue group priority configuration
net/ice: add warning log for unsupported configuration
doc/guides/rel_notes/release_22_07.rst | 5 +
drivers/net/ice/base/ice_sched.c | 90 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 845 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1019 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v9 1/9] net/ice/base: fix dead lock issue when getting node from ID type
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 2/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (7 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock,
which causes a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid, which expects the caller to
hold the lock, to solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v9 2/9] net/ice/base: support queue BW allocation configuration
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 3/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (6 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds BW allocation support for queue scheduling nodes
to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..4ca15bf8f8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..184ad09e6a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v9 3/9] net/ice/base: support priority configuration of the exact node
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 1/9] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 2/9] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 4/9] net/ice: support queue bandwidth limit Wenjun Wu
` (5 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds priority configuration support for the exact
node in the scheduler tree.
The new wrapper acquires the scheduler lock itself, so the
caller does not need any additional locking.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 22 ++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 4ca15bf8f8..1b060d3567 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,28 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_cfg_sibl_node_prio_lock - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_sched_save_q_bw_alloc - save queue node's BW allocation information
* @q_ctx: queue context structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 184ad09e6a..c9f3f79eff 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -169,6 +169,9 @@ enum ice_status
ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
u8 tc);
enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
--
2.25.1
* [PATCH v9 4/9] net/ice: support queue bandwidth limit
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 3/9] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 5/9] net/ice: support queue group " Wenjun Wu
` (4 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for PF only. Add support for adding profiles
and queue nodes. Only max bandwidth is supported in profiles. Profiles
can be assigned to target queues. Only TC0 is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 5 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 48 ++
drivers/net/ice/ice_tm.c | 599 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
5 files changed, 672 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 90123bb807..4797da32da 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -60,6 +60,11 @@ New Features
* Added Tx QoS queue rate limitation support.
* Added quanta size configuration support.
+* **Updated Intel ice driver.**
+
+ * Added Tx QoS rate limitation and priority configuration support for queue and queue group.
+ * Added TX QoS queue weight configuration support.
+
Removed Items
-------------
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 00ac2bb191..35ab542e61 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2328,6 +2341,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2508,6 +2524,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3d8427225f..4359c61624 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,48 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +541,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -624,6 +669,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..383af88981
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,599 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not root or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ txq = dev->data->tx_queues[tm_node->id];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile)
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ goto fail_clear;
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v9 5/9] net/ice: support queue group bandwidth limit
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 4/9] net/ice: support queue bandwidth limit Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 6/9] net/ice: support queue priority configuration Wenjun Wu
` (3 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
To set up the exact queue group, we need to reconfigure the
topology by deleting and then recreating the queue nodes.
This patch adds queue group configuration support and queue group
bandwidth limit support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 9 +-
drivers/net/ice/ice_tm.c | 239 ++++++++++++++++++++++++++++++++---
2 files changed, 232 insertions(+), 16 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 4359c61624..f9f4a1c71b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -474,6 +474,7 @@ struct ice_tm_node {
uint32_t weight;
uint32_t reference_count;
struct ice_tm_node *parent;
+ struct ice_tm_node **children;
struct ice_tm_shaper_profile *shaper_profile;
struct rte_tm_node_params params;
};
@@ -482,6 +483,8 @@ struct ice_tm_node {
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
};
@@ -489,10 +492,14 @@ enum ice_tm_node_type {
/* Struct to store all the Traffic Manager configuration. */
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
- struct ice_tm_node *root; /* root node - vf vsi */
+ struct ice_tm_node *root; /* root node - port */
struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 383af88981..d70d077286 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -44,8 +44,12 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
}
@@ -62,6 +66,16 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
rte_free(tm_node);
@@ -79,6 +93,8 @@ ice_tm_node_search(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -94,6 +110,20 @@ ice_tm_node_search(struct rte_eth_dev *dev,
}
}
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
TAILQ_FOREACH(tm_node, queue_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QUEUE;
@@ -354,6 +384,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -415,6 +446,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -431,9 +464,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC) {
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "parent is not root or TC";
+ error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
@@ -452,6 +487,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
error->message = "too many TCs";
return -EINVAL;
}
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
} else {
/* check the queue number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
@@ -466,7 +515,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or queue node */
+ /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -478,6 +527,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->reference_count = 0;
tm_node->parent = parent_node;
tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
@@ -485,10 +538,20 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node, node);
tm_node->tc = pf->tm_conf.nb_tc_node;
pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->tc;
+ tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -543,11 +606,17 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
+ /* TC or VSI or queue group or queue node */
tm_node->parent->reference_count--;
if (node_type == ICE_TM_NODE_TYPE_TC) {
TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
} else {
TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
pf->tm_conf.nb_queue_node--;
@@ -557,36 +626,176 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
- TAILQ_FOREACH(tm_node, queue_list, node) {
- txq = dev->data->tx_queues[tm_node->id];
- vsi = txq->vsi;
- if (tm_node->shaper_profile)
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
-
- peak = peak / 1000 * BITS_PER_BYTE;
- ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
- tm_node->tc, tm_node->id, ICE_MAX_BW, (u32)peak);
- if (ret_val) {
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "configure queue %u bandwidth failed", tm_node->id);
+ PMD_DRV_LOG(ERR, "too many queues");
goto fail_clear;
}
}
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
return ret_val;
fail_clear:
--
2.25.1
* [PATCH v9 6/9] net/ice: support queue priority configuration
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 5/9] net/ice: support queue group " Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 7/9] net/ice: support queue weight configuration Wenjun Wu
` (2 subsequent siblings)
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue priority configuration support.
The highest priority is 0, and the lowest priority is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..91e420d653 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -779,6 +780,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +796,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v9 7/9] net/ice: support queue weight configuration
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 6/9] net/ice: support queue priority configuration Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 8/9] net/ice: support queue group priority configuration Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 91e420d653..4d7bb9102c 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -804,6 +804,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v9 8/9] net/ice: support queue group priority configuration
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 7/9] net/ice: support queue weight configuration Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
2022-04-28 3:30 ` [PATCH v9 9/9] net/ice: add warning log for unsupported configuration Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue group priority configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 4d7bb9102c..f604523ead 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -764,6 +764,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
+ priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
--
2.25.1
* [PATCH v9 9/9] net/ice: add warning log for unsupported configuration
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (7 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 8/9] net/ice: support queue group priority configuration Wenjun Wu
@ 2022-04-28 3:30 ` Wenjun Wu
8 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-04-28 3:30 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled at level 3 and level 4.
Weight configuration is enabled at level 4.
This patch adds a warning log for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f604523ead..34a0bfcff8 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v10 0/7] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (13 preceding siblings ...)
2022-04-28 3:30 ` [PATCH v9 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (6 more replies)
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
15 siblings, 7 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and
priority can be configured at both the queue and queue group levels,
while weight can be configured only at the queue level.
v2: fix code style issue.
v3: fix uninitialized variable issue.
v4: fix logic issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
v7: merge the release note with the previous patch.
v8: rework shared code patch.
v9: rebase the code.
v10: rebase the code and rework the release note.
Ting Xu (1):
net/ice: support queue and queue group bandwidth limit
Wenjun Wu (6):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support queue BW allocation configuration
net/ice/base: support priority configuration of the exact node
net/ice: support queue and queue group priority configuration
net/ice: support queue weight configuration
net/ice: add warning log for unsupported configuration
doc/guides/rel_notes/release_22_07.rst | 3 +
drivers/net/ice/base/ice_sched.c | 90 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 845 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1017 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v10 1/7] net/ice/base: fix dead lock issue when getting node from ID type
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 2/7] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (5 subsequent siblings)
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type must be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock,
which causes a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid, which expects the caller to
already hold the lock, to solve this problem.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v10 2/7] net/ice/base: support queue BW allocation configuration
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 3/7] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (4 subsequent siblings)
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds BW allocation support for queue scheduling nodes
to enable WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..4ca15bf8f8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..184ad09e6a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v10 3/7] net/ice/base: support priority configuration of the exact node
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 2/7] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 4/7] net/ice: support queue and queue group bandwidth limit Wenjun Wu
` (3 subsequent siblings)
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds priority configuration support for the exact
node in the scheduler tree.
The new function acquires the scheduler lock internally, so the
caller does not need to take it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 22 ++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 4ca15bf8f8..1b060d3567 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,28 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_cfg_sibl_node_prio_lock - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_sched_save_q_bw_alloc - save queue node's BW allocation information
* @q_ctx: queue context structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 184ad09e6a..c9f3f79eff 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -169,6 +169,9 @@ enum ice_status
ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
u8 tc);
enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
* [PATCH v10 4/7] net/ice: support queue and queue group bandwidth limit
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-05-17 4:59 ` [PATCH v10 3/7] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 5/7] net/ice: support queue and queue group priority configuration Wenjun Wu
` (2 subsequent siblings)
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for the PF only. It supports adding shaper
profiles and queue nodes. Only max bandwidth is supported in profiles.
Profiles can be assigned to target queues and queue groups. To set up
the exact queue group, the topology must be reconfigured by deleting
and then recreating the queue nodes. Only TC0 is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 808 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
5 files changed, 884 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a60a0d5f16..de29061809 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -74,6 +74,7 @@ New Features
* Added support for promisc configuration in DCF mode.
* Added support for MAC configuration in DCF mode.
* Added support for VLAN filter and offload configuration in DCF mode.
+ * Added Tx QoS queue / queue group rate limitation configure support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 00ac2bb191..35ab542e61 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2328,6 +2341,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2508,6 +2524,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3d8427225f..f9f4a1c71b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,55 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_node **children;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - port */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +548,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -624,6 +676,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..d70d077286
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,808 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* checked all the unsupported parameter */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFP should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node is already existed */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not valid";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or VSI or queue group or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->parent->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or VSI or queue group or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf));
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
+
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "too many queues");
+ goto fail_clear;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Transfer from Byte per seconds to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
* [PATCH v10 5/7] net/ice: support queue and queue group priority configuration
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-05-17 4:59 ` [PATCH v10 4/7] net/ice: support queue and queue group bandwidth limit Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 6/7] net/ice: support queue weight configuration Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 7/7] net/ice: add warning log for unsupported configuration Wenjun Wu
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue and queue group priority configuration
support. The highest priority is 0, and the lowest priority
is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_tm.c | 23 +++++++++++++++++++++--
2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index de29061809..c5bfc52368 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -75,6 +75,7 @@ New Features
* Added support for MAC configuration in DCF mode.
* Added support for VLAN filter and offload configuration in DCF mode.
* Added Tx QoS queue / queue group rate limitation configure support.
+ * Added Tx QoS queue / queue group priority configuration support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..105455f3cc 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -763,6 +764,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
+ priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->priority);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -779,6 +789,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Transfer from Byte per seconds to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +805,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v10 6/7] net/ice: support queue weight configuration
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-05-17 4:59 ` [PATCH v10 5/7] net/ice: support queue and queue group priority configuration Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
2022-05-17 4:59 ` [PATCH v10 7/7] net/ice: add warning log for unsupported configuration Wenjun Wu
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_tm.c | 13 +++++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index c5bfc52368..a0eb6ab61b 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -76,6 +76,7 @@ New Features
* Added support for VLAN filter and offload configuration in DCF mode.
* Added Tx QoS queue / queue group rate limitation configure support.
* Added Tx QoS queue / queue group priority configuration support.
+ * Added Tx QoS queue weight configuration support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 105455f3cc..f604523ead 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -813,6 +813,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->priority);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->weight);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
* [PATCH v10 7/7] net/ice: add warning log for unsupported configuration
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-05-17 4:59 ` [PATCH v10 6/7] net/ice: support queue weight configuration Wenjun Wu
@ 2022-05-17 4:59 ` Wenjun Wu
6 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 4:59 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled in level 3 and level 4.
Weight configuration is enabled in level 4.
This patch adds a warning log for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f604523ead..34a0bfcff8 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
* [PATCH v11 0/7] Enable ETS-based TX QoS on PF
2022-03-29 1:48 [PATCH v1 0/9] Enable ETS-based TX QoS on PF Wenjun Wu
` (14 preceding siblings ...)
2022-05-17 4:59 ` [PATCH v10 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
` (7 more replies)
15 siblings, 8 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch set enables ETS-based TX QoS on PF. Bandwidth and
priority can be configured at both the queue and queue group
level, while weight can only be configured at the queue level.
v2: fix code style issue.
v3: fix uninitialization issue.
v4: fix logical issue.
v5: fix CI testing issue. Add explicit cast.
v6: add release note.
v7: merge the release note with the previous patch.
v8: rework shared code patch.
v9: rebase the code.
v10: rebase the code and rework the release note.
v11: add fix information in commit log.
Ting Xu (1):
net/ice: support queue and queue group bandwidth limit
Wenjun Wu (6):
net/ice/base: fix dead lock issue when getting node from ID type
net/ice/base: support queue BW allocation configuration
net/ice/base: support priority configuration of the exact node
net/ice: support queue and queue group priority configuration
net/ice: support queue weight configuration
net/ice: add warning log for unsupported configuration
doc/guides/rel_notes/release_22_07.rst | 3 +
drivers/net/ice/base/ice_sched.c | 90 ++-
drivers/net/ice/base/ice_sched.h | 6 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 845 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
7 files changed, 1017 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/ice/ice_tm.c
--
2.25.1
* [PATCH v11 1/7] net/ice/base: fix dead lock issue when getting node from ID type
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 2/7] net/ice/base: support queue BW allocation configuration Wenjun Wu
` (6 subsequent siblings)
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
The function ice_sched_get_node_by_id_type needs to be called
with the scheduler lock held. However, the function
ice_sched_get_node also acquires the scheduler lock, which
causes a deadlock.
This patch replaces the call to ice_sched_get_node with
ice_sched_find_node_by_teid, which searches the tree without
taking the lock, to solve this problem.
Fixes: 93e84b1bfc92 ("net/ice/base: add basic Tx scheduler")
Cc: stable@dpdk.org
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 2620892c9e..e697c579be 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4774,12 +4774,12 @@ ice_sched_get_node_by_id_type(struct ice_port_info *pi, u32 id,
case ICE_AGG_TYPE_Q:
/* The current implementation allows single queue to modify */
- node = ice_sched_get_node(pi, id);
+ node = ice_sched_find_node_by_teid(pi->root, id);
break;
case ICE_AGG_TYPE_QG:
/* The current implementation allows single qg to modify */
- child_node = ice_sched_get_node(pi, id);
+ child_node = ice_sched_find_node_by_teid(pi->root, id);
if (!child_node)
break;
node = child_node->parent;
--
2.25.1
* [PATCH v11 2/7] net/ice/base: support queue BW allocation configuration
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 3/7] net/ice/base: support priority configuration of the exact node Wenjun Wu
` (5 subsequent siblings)
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds BW allocation support for the queue scheduling node
to support WFQ at the queue level.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 64 ++++++++++++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 ++
2 files changed, 67 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index e697c579be..4ca15bf8f8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,70 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_save_q_bw_alloc - save queue node's BW allocation information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw_alloc(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type,
+ u32 bw_alloc)
+{
+ switch (rl_type) {
+ case ICE_MIN_BW:
+ ice_set_clear_cir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ case ICE_MAX_BW:
+ ice_set_clear_eir_bw_alloc(&q_ctx->bw_t_info, bw_alloc);
+ break;
+ default:
+ return ICE_ERR_PARAM;
+ }
+ return ICE_SUCCESS;
+}
+
+/**
+ * ice_cfg_q_bw_alloc - configure queue BW weight/alloc params
+ * @pi: port information structure
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
+ * @rl_type: min, max, or shared
+ * @bw_alloc: BW weight/allocation
+ *
+ * This function configures BW allocation of queue scheduling node.
+ */
+enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc)
+{
+ enum ice_status status = ICE_ERR_PARAM;
+ struct ice_sched_node *node;
+ struct ice_q_ctx *q_ctx;
+
+ ice_acquire_lock(&pi->sched_lock);
+ q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+ if (!q_ctx)
+ goto exit_q_bw_alloc;
+
+ node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+ if (!node) {
+ ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
+ goto exit_q_bw_alloc;
+ }
+
+ status = ice_sched_cfg_node_bw_alloc(pi->hw, node, rl_type, bw_alloc);
+ if (!status)
+ status = ice_sched_save_q_bw_alloc(q_ctx, rl_type, bw_alloc);
+
+exit_q_bw_alloc:
+ ice_release_lock(&pi->sched_lock);
+ return status;
+}
+
/**
* ice_cfg_agg_vsi_priority_per_tc - config aggregator's VSI priority per TC
* @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 1441b5f191..184ad09e6a 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -172,6 +172,9 @@ enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
+ice_cfg_q_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+ u16 q_handle, enum ice_rl_type rl_type, u32 bw_alloc);
+enum ice_status
ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
enum ice_rl_type rl_type, u8 *bw_alloc);
enum ice_status
--
2.25.1
* [PATCH v11 3/7] net/ice/base: support priority configuration of the exact node
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 1/7] net/ice/base: fix dead lock issue when getting node from ID type Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 2/7] net/ice/base: support queue BW allocation configuration Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 4/7] net/ice: support queue and queue group bandwidth limit Wenjun Wu
` (4 subsequent siblings)
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds support for configuring the priority of a
specific node in the scheduler tree.
The new function acquires the scheduler lock itself, so the
caller does not need to hold it.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/base/ice_sched.c | 22 ++++++++++++++++++++++
drivers/net/ice/base/ice_sched.h | 3 +++
2 files changed, 25 insertions(+)
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 4ca15bf8f8..1b060d3567 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3613,6 +3613,28 @@ ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
return status;
}
+/**
+ * ice_sched_cfg_sibl_node_prio_lock - config priority of node
+ * @pi: port information structure
+ * @node: sched node to configure
+ * @priority: sibling priority
+ *
+ * This function configures node element's sibling priority only.
+ */
+enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node,
+ u8 priority)
+{
+ enum ice_status status;
+
+ ice_acquire_lock(&pi->sched_lock);
+ status = ice_sched_cfg_sibl_node_prio(pi, node, priority);
+ ice_release_lock(&pi->sched_lock);
+
+ return status;
+}
+
/**
* ice_sched_save_q_bw_alloc - save queue node's BW allocation information
* @q_ctx: queue context structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 184ad09e6a..c9f3f79eff 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -169,6 +169,9 @@ enum ice_status
ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
u8 tc);
enum ice_status
+ice_sched_cfg_sibl_node_prio_lock(struct ice_port_info *pi,
+ struct ice_sched_node *node, u8 priority);
+enum ice_status
ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
u8 *q_prio);
enum ice_status
--
2.25.1
* [PATCH v11 4/7] net/ice: support queue and queue group bandwidth limit
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (2 preceding siblings ...)
2022-05-17 5:09 ` [PATCH v11 3/7] net/ice/base: support priority configuration of the exact node Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 5/7] net/ice: support queue and queue group priority configuration Wenjun Wu
` (3 subsequent siblings)
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
From: Ting Xu <ting.xu@intel.com>
Enable the basic TM API for PF only. It supports adding profiles and
queue nodes. Only the maximum bandwidth is supported in profiles.
Profiles can be assigned to target queues and queue groups. To set up
a specific queue group, the topology is reconfigured by deleting and
then recreating the queue nodes. Only TC0 is valid.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_ethdev.c | 19 +
drivers/net/ice/ice_ethdev.h | 55 ++
drivers/net/ice/ice_tm.c | 808 +++++++++++++++++++++++++
drivers/net/ice/meson.build | 1 +
5 files changed, 884 insertions(+)
create mode 100644 drivers/net/ice/ice_tm.c
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index a60a0d5f16..de29061809 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -74,6 +74,7 @@ New Features
* Added support for promisc configuration in DCF mode.
* Added support for MAC configuration in DCF mode.
* Added support for VLAN filter and offload configuration in DCF mode.
+ * Added Tx QoS queue / queue group rate limitation configure support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 00ac2bb191..35ab542e61 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -205,6 +205,18 @@ static const struct rte_pci_id pci_id_ice_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static int
+ice_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ice_tm_ops;
+
+ return 0;
+}
+
static const struct eth_dev_ops ice_eth_dev_ops = {
.dev_configure = ice_dev_configure,
.dev_start = ice_dev_start,
@@ -267,6 +279,7 @@ static const struct eth_dev_ops ice_eth_dev_ops = {
.timesync_read_time = ice_timesync_read_time,
.timesync_write_time = ice_timesync_write_time,
.timesync_disable = ice_timesync_disable,
+ .tm_ops_get = ice_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
@@ -2328,6 +2341,9 @@ ice_dev_init(struct rte_eth_dev *dev)
/* Initialize RSS context for gtpu_eh */
ice_rss_ctx_init(pf);
+ /* Initialize TM configuration */
+ ice_tm_conf_init(dev);
+
if (!ad->is_safe_mode) {
ret = ice_flow_init(ad);
if (ret) {
@@ -2508,6 +2524,9 @@ ice_dev_close(struct rte_eth_dev *dev)
rte_free(pf->proto_xtr);
pf->proto_xtr = NULL;
+ /* Uninit TM configuration */
+ ice_tm_conf_uninit(dev);
+
if (ad->devargs.pps_out_ena) {
ICE_WRITE_REG(hw, GLTSYN_AUX_OUT(pin_idx, timer), 0);
ICE_WRITE_REG(hw, GLTSYN_CLKO(pin_idx, timer), 0);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3d8427225f..f9f4a1c71b 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -9,10 +9,12 @@
#include <rte_time.h>
#include <ethdev_driver.h>
+#include <rte_tm_driver.h>
#include "base/ice_common.h"
#include "base/ice_adminq_cmd.h"
#include "base/ice_flow.h"
+#include "base/ice_sched.h"
#define ICE_ADMINQ_LEN 32
#define ICE_SBIOQ_LEN 32
@@ -453,6 +455,55 @@ struct ice_acl_info {
uint64_t hw_entry_id[MAX_ACL_NORMAL_ENTRIES];
};
+TAILQ_HEAD(ice_shaper_profile_list, ice_tm_shaper_profile);
+TAILQ_HEAD(ice_tm_node_list, ice_tm_node);
+
+struct ice_tm_shaper_profile {
+ TAILQ_ENTRY(ice_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_tm_node {
+ TAILQ_ENTRY(ice_tm_node) node;
+ uint32_t id;
+ uint32_t tc;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ice_tm_node *parent;
+ struct ice_tm_node **children;
+ struct ice_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+/* node type of Traffic Manager */
+enum ice_tm_node_type {
+ ICE_TM_NODE_TYPE_PORT,
+ ICE_TM_NODE_TYPE_TC,
+ ICE_TM_NODE_TYPE_VSI,
+ ICE_TM_NODE_TYPE_QGROUP,
+ ICE_TM_NODE_TYPE_QUEUE,
+ ICE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_tm_conf {
+ struct ice_shaper_profile_list shaper_profile_list;
+ struct ice_tm_node *root; /* root node - port */
+ struct ice_tm_node_list tc_list; /* node list for all the TCs */
+ struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
+ struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
+ struct ice_tm_node_list queue_list; /* node list for all the queues */
+ uint32_t nb_tc_node;
+ uint32_t nb_vsi_node;
+ uint32_t nb_qgroup_node;
+ uint32_t nb_queue_node;
+ bool committed;
+};
+
struct ice_pf {
struct ice_adapter *adapter; /* The adapter this PF associate to */
struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -497,6 +548,7 @@ struct ice_pf {
uint64_t old_tx_bytes;
uint64_t supported_rxdid; /* bitmap for supported RXDID */
uint64_t rss_hf;
+ struct ice_tm_conf tm_conf;
};
#define ICE_MAX_QUEUE_NUM 2048
@@ -624,6 +676,9 @@ int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
struct ice_rss_hash_cfg *cfg);
+void ice_tm_conf_init(struct rte_eth_dev *dev);
+void ice_tm_conf_uninit(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops ice_tm_ops;
static inline int
ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
new file mode 100644
index 0000000000..d70d077286
--- /dev/null
+++ b/drivers/net/ice/ice_tm.c
@@ -0,0 +1,808 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "ice_ethdev.h"
+#include "ice_rxtx.h"
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ __rte_unused struct rte_tm_error *error);
+static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
+static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
+static int ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+static int ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_tm_ops = {
+ .shaper_profile_add = ice_shaper_profile_add,
+ .shaper_profile_delete = ice_shaper_profile_del,
+ .node_add = ice_tm_node_add,
+ .node_delete = ice_tm_node_delete,
+ .node_type_get = ice_node_type_get,
+ .hierarchy_commit = ice_hierarchy_commit,
+};
+
+void
+ice_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize node configuration */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.vsi_list);
+ TAILQ_INIT(&pf->tm_conf.qgroup_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_vsi_node = 0;
+ pf->tm_conf.nb_qgroup_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
+}
+
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_qgroup_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_vsi_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
+}
+
+static inline struct ice_tm_node *
+ice_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ice_tm_node_type *node_type)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, vsi_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_VSI;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QGROUP;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = ICE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ /* check all the unsupported parameters */
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* shared shapers are not supported */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for non-leaf node */
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ weight mode should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for leaf node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
+
+static inline struct ice_tm_shaper_profile *
+ice_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ice_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ice_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID already exists";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ice_tm_shaper_profile",
+ sizeof(struct ice_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
+
+static int
+ice_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ice_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID does not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
+
+static int
+ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_shaper_profile *shaper_profile = NULL;
+ struct ice_tm_node *tm_node;
+ struct ice_tm_node *parent_node;
+ uint16_t tc_nb = 1;
+ uint16_t vsi_nb = 1;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ice_node_param_check(pf, node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node already exists */
+ if (ice_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+ shaper_profile = ice_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile does not exist";
+ return -EINVAL;
+ }
+ }
+
+ /* root node if it has no parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != ICE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->parent = NULL;
+ tm_node->reference_count = 0;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ice_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent does not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
+ parent_node_type != ICE_TM_NODE_TYPE_TC &&
+ parent_node_type != ICE_TM_NODE_TYPE_VSI &&
+ parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not valid";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != (uint32_t)parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ /* check the VSI number */
+ if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many VSIs";
+ return -EINVAL;
+ }
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ /* check the queue group number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queue groups";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+ if (node_id >= pf->dev_data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or VSI or queue group or queue node */
+ tm_node = rte_zmalloc("ice_tm_node",
+ sizeof(struct ice_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ tm_node->children = (struct ice_tm_node **)
+ rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+
+ rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->tc = pf->tm_conf.nb_tc_node;
+ pf->tm_conf.nb_tc_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
+ tm_node, node);
+ tm_node->tc = parent_node->tc;
+ pf->tm_conf.nb_vsi_node++;
+ } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->tc;
+ pf->tm_conf.nb_qgroup_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ tm_node->tc = parent_node->parent->parent->tc;
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ return 0;
+}
+
+static int
+ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or VSI or queue group or queue node */
+ tm_node->parent->reference_count--;
+ if (node_type == ICE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
+ TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
+ pf->tm_conf.nb_vsi_node--;
+ } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
+ pf->tm_conf.nb_qgroup_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
+
+static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev,
+ struct ice_sched_node *queue_sched_node,
+ struct ice_sched_node *dst_node,
+ uint16_t queue_id)
+{
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_aqc_move_txqs_data *buf;
+ struct ice_sched_node *queue_parent_node;
+ uint8_t txqs_moved;
+ int ret = ICE_SUCCESS;
+ uint16_t buf_size = ice_struct_size(buf, txqs, 1);
+
+ buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, buf_size);
+
+ queue_parent_node = queue_sched_node->parent;
+ buf->src_teid = queue_parent_node->info.node_teid;
+ buf->dest_teid = dst_node->info.node_teid;
+ buf->txqs[0].q_teid = queue_sched_node->info.node_teid;
+ buf->txqs[0].txq_id = queue_id;
+
+ ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50,
+ NULL, buf, buf_size, &txqs_moved, NULL);
+ if (ret || txqs_moved == 0) {
+ PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id);
+ return ICE_ERR_PARAM;
+ }
+
+ if (queue_parent_node->num_children > 0) {
+ queue_parent_node->num_children--;
+ queue_parent_node->children[queue_parent_node->num_children] = NULL;
+ } else {
+ PMD_DRV_LOG(ERR, "invalid children number %d for queue %u",
+ queue_parent_node->num_children, queue_id);
+ return ICE_ERR_PARAM;
+ }
+ dst_node->children[dst_node->num_children++] = queue_sched_node;
+ queue_sched_node->parent = dst_node;
+ ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info);
+
+ return ret;
+}
+
+static int ice_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
+ struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct ice_tm_node *tm_node;
+ struct ice_sched_node *node;
+ struct ice_sched_node *vsi_node;
+ struct ice_sched_node *queue_node;
+ struct ice_tx_queue *txq;
+ struct ice_vsi *vsi;
+ int ret_val = ICE_SUCCESS;
+ uint64_t peak = 0;
+ uint32_t i;
+ uint32_t idx_vsi_child;
+ uint32_t idx_qg;
+ uint32_t nb_vsi_child;
+ uint32_t nb_qg;
+ uint32_t qid;
+ uint32_t q_teid;
+ uint32_t vsi_layer;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ret_val = ice_tx_queue_stop(dev, i);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "stop queue %u failed", i);
+ goto fail_clear;
+ }
+ }
+
+ node = hw->port_info->root;
+ vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ for (i = 0; i < vsi_layer; i++)
+ node = node->children[0];
+ vsi_node = node;
+ nb_vsi_child = vsi_node->num_children;
+ nb_qg = vsi_node->children[0]->num_children;
+
+ idx_vsi_child = 0;
+ idx_qg = 0;
+
+ TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ struct ice_tm_node *tm_child_node;
+ struct ice_sched_node *qgroup_sched_node =
+ vsi_node->children[idx_vsi_child]->children[idx_qg];
+
+ for (i = 0; i < tm_node->reference_count; i++) {
+ tm_child_node = tm_node->children[i];
+ qid = tm_child_node->id;
+ ret_val = ice_tx_queue_start(dev, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "start queue %u failed", qid);
+ goto fail_clear;
+ }
+ txq = dev->data->tx_queues[qid];
+ q_teid = txq->q_teid;
+ queue_node = ice_sched_get_node(hw->port_info, q_teid);
+ if (queue_node == NULL) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
+ goto fail_clear;
+ }
+ if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
+ continue;
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto fail_clear;
+ }
+ }
+ if (tm_node->reference_count != 0 && tm_node->shaper_profile) {
+ uint32_t node_teid = qgroup_sched_node->info.node_teid;
+ /* Convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_sched_set_node_bw_lmt_per_tc(hw->port_info,
+ node_teid,
+ ICE_AGG_TYPE_Q,
+ tm_node->tc,
+ ICE_MAX_BW,
+ (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ idx_qg++;
+ if (idx_qg >= nb_qg) {
+ idx_qg = 0;
+ idx_vsi_child++;
+ }
+ if (idx_vsi_child >= nb_vsi_child) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "too many queues");
+ goto fail_clear;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ qid = tm_node->id;
+ txq = dev->data->tx_queues[qid];
+ vsi = txq->vsi;
+ if (tm_node->shaper_profile) {
+ /* Convert from bytes per second to Kbps */
+ peak = tm_node->shaper_profile->profile.peak.rate;
+ peak = peak / 1000 * BITS_PER_BYTE;
+ ret_val = ice_cfg_q_bw_lmt(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)peak);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue %u bandwidth failed",
+ tm_node->id);
+ goto fail_clear;
+ }
+ }
+ }
+
+ return ret_val;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ice_tm_conf_uninit(dev);
+ ice_tm_conf_init(dev);
+ }
+ return ret_val;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index d608da7765..de307c9e71 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -12,6 +12,7 @@ sources = files(
'ice_hash.c',
'ice_rxtx.c',
'ice_switch_filter.c',
+ 'ice_tm.c',
)
deps += ['hash', 'net', 'common_iavf']
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
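[Editor's note] The hierarchy commit above converts the rte_tm shaper peak rate, which the API expresses in bytes per second, into the Kbps units the scheduler firmware expects via `peak / 1000 * BITS_PER_BYTE`. A minimal standalone sketch of that conversion (the helper name is ours, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

/* Convert an rte_tm peak rate in bytes per second to Kbps, mirroring
 * the integer arithmetic in ice_hierarchy_commit(): divide first,
 * then multiply, so very large rates stay within 64-bit range. */
static inline uint32_t tm_rate_to_kbps(uint64_t bytes_per_sec)
{
	return (uint32_t)(bytes_per_sec / 1000 * BITS_PER_BYTE);
}
```

Because the division happens first, rates below 1000 bytes per second truncate to 0 Kbps, which the hardware treats as the maximum bandwidth sentinel territory; callers should keep that truncation in mind.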
* [PATCH v11 5/7] net/ice: support queue and queue group priority configuration
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (3 preceding siblings ...)
2022-05-17 5:09 ` [PATCH v11 4/7] net/ice: support queue and queue group bandwidth limit Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 6/7] net/ice: support queue weight configuration Wenjun Wu
` (2 subsequent siblings)
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue and queue group priority configuration
support. The highest priority is 0, and the lowest priority
is 7.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_tm.c | 23 +++++++++++++++++++++--
2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index de29061809..c5bfc52368 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -75,6 +75,7 @@ New Features
* Added support for MAC configuration in DCF mode.
* Added support for VLAN filter and offload configuration in DCF mode.
* Added Tx QoS queue / queue group rate limitation configure support.
+ * Added Tx QoS queue / queue group priority configuration support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d70d077286..105455f3cc 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -147,9 +147,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (priority) {
+ if (priority >= 8) {
error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority should be 0";
+ error->message = "priority should be less than 8";
return -EINVAL;
}
@@ -684,6 +684,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
struct ice_vsi *vsi;
int ret_val = ICE_SUCCESS;
uint64_t peak = 0;
+ uint8_t priority;
uint32_t i;
uint32_t idx_vsi_child;
uint32_t idx_qg;
@@ -763,6 +764,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_sched_cfg_sibl_node_prio_lock(hw->port_info, qgroup_sched_node,
+ priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue group %u priority failed",
+ tm_node->id);
+ goto fail_clear;
+ }
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -779,6 +789,7 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
qid = tm_node->id;
txq = dev->data->tx_queues[qid];
vsi = txq->vsi;
+ q_teid = txq->q_teid;
if (tm_node->shaper_profile) {
/* Convert from bytes per second to Kbps */
peak = tm_node->shaper_profile->profile.peak.rate;
@@ -794,6 +805,14 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
goto fail_clear;
}
}
+ priority = 7 - tm_node->priority;
+ ret_val = ice_cfg_vsi_q_priority(hw->port_info, 1,
+ &q_teid, &priority);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->id);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
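[Editor's note] The patch above maps the rte_tm priority onto the scheduler's sibling priority with `priority = 7 - tm_node->priority`: rte_tm treats 0 as the highest priority, while the hardware field uses the opposite sense across the 0..7 range. A sketch of that inversion (the helper name is ours, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* rte_tm uses 0 as the highest priority; the ice scheduler node
 * priority uses the opposite sense, so the commit path flips the
 * value across the 3-bit 0..7 range before programming it. */
static inline uint8_t tm_prio_to_hw_prio(uint32_t tm_priority)
{
	return (uint8_t)(7 - tm_priority);
}
```

The earlier `ice_node_param_check()` change rejects any priority >= 8, so the subtraction can never underflow.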
* [PATCH v11 6/7] net/ice: support queue weight configuration
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (4 preceding siblings ...)
2022-05-17 5:09 ` [PATCH v11 5/7] net/ice: support queue and queue group priority configuration Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 5:09 ` [PATCH v11 7/7] net/ice: add warning log for unsupported configuration Wenjun Wu
2022-05-17 7:23 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Zhang, Qi Z
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
This patch adds queue weight configuration support.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/rel_notes/release_22_07.rst | 1 +
drivers/net/ice/ice_tm.c | 13 +++++++++++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index c5bfc52368..a0eb6ab61b 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -76,6 +76,7 @@ New Features
* Added support for VLAN filter and offload configuration in DCF mode.
* Added Tx QoS queue / queue group rate limitation configure support.
* Added Tx QoS queue / queue group priority configuration support.
+ * Added Tx QoS queue weight configuration support.
* **Updated Mellanox mlx5 driver.**
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 105455f3cc..f604523ead 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -153,9 +153,9 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return -EINVAL;
}
- if (weight != 1) {
+ if (weight > 200 || weight < 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "weight must be 1";
+ error->message = "weight must be between 1 and 200";
return -EINVAL;
}
@@ -813,6 +813,15 @@ static int ice_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "configure queue %u priority failed", tm_node->id);
goto fail_clear;
}
+
+ ret_val = ice_cfg_q_bw_alloc(hw->port_info, vsi->idx,
+ tm_node->tc, tm_node->id,
+ ICE_MAX_BW, (u32)tm_node->weight);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ PMD_DRV_LOG(ERR, "configure queue %u weight failed", tm_node->id);
+ goto fail_clear;
+ }
}
return ret_val;
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
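[Editor's note] This patch relaxes the weight check in `ice_node_param_check()` from a fixed value of 1 to the range [1, 200], the span the scheduler's bandwidth-allocation field accepts. A sketch of the accepted range (the helper name is ours, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* After this patch, ice_node_param_check() accepts queue weights in
 * [1, 200]; anything outside that range is rejected with -EINVAL. */
static inline int ice_weight_is_valid(uint32_t weight)
{
	return weight >= 1 && weight <= 200;
}
```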
* [PATCH v11 7/7] net/ice: add warning log for unsupported configuration
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (5 preceding siblings ...)
2022-05-17 5:09 ` [PATCH v11 6/7] net/ice: support queue weight configuration Wenjun Wu
@ 2022-05-17 5:09 ` Wenjun Wu
2022-05-17 7:23 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Zhang, Qi Z
7 siblings, 0 replies; 109+ messages in thread
From: Wenjun Wu @ 2022-05-17 5:09 UTC (permalink / raw)
To: dev, qiming.yang, qi.z.zhang
Priority configuration is enabled at level 3 and level 4.
Weight configuration is enabled at level 4.
This patch adds warning logs for unsupported priority
and weight configurations.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_tm.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index f604523ead..34a0bfcff8 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -531,6 +531,15 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
+ if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
+ level_id != ICE_TM_NODE_TYPE_QGROUP)
+ PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d",
+ level_id);
+
+ if (tm_node->weight != 1 && level_id != ICE_TM_NODE_TYPE_QUEUE)
+ PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d",
+ level_id);
+
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
--
2.25.1
^ permalink raw reply [flat|nested] 109+ messages in thread
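[Editor's note] The warnings added above fire only when a non-default priority or weight is requested at a level that cannot honor it: priority takes effect only at the queue group and queue levels, weight only at the queue level. A sketch of that gating logic, assuming the level numbering mirrors `enum ice_tm_node_type` in the driver (port = 0, TC = 1, VSI = 2, queue group = 3, queue = 4); the helper names are ours:

```c
#include <assert.h>

/* Hypothetical mirror of the driver's level numbering. */
enum { TM_LVL_PORT, TM_LVL_TC, TM_LVL_VSI, TM_LVL_QGROUP, TM_LVL_QUEUE };

/* Nonzero when a non-default priority at this level would draw
 * the warning added by this patch (priority only works at the
 * queue group and queue levels). */
static int priority_warns(int level, unsigned int priority)
{
	return priority != 0 &&
	       level != TM_LVL_QGROUP && level != TM_LVL_QUEUE;
}

/* Nonzero when a non-default weight at this level would draw the
 * warning (weight only works at the queue level). */
static int weight_warns(int level, unsigned int weight)
{
	return weight != 1 && level != TM_LVL_QUEUE;
}
```

Note the node is still added in these cases; the driver only logs the warning and silently ignores the unsupported field.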
* RE: [PATCH v11 0/7] Enable ETS-based TX QoS on PF
2022-05-17 5:09 ` [PATCH v11 0/7] Enable ETS-based TX QoS on PF Wenjun Wu
` (6 preceding siblings ...)
2022-05-17 5:09 ` [PATCH v11 7/7] net/ice: add warning log for unsupported configuration Wenjun Wu
@ 2022-05-17 7:23 ` Zhang, Qi Z
7 siblings, 0 replies; 109+ messages in thread
From: Zhang, Qi Z @ 2022-05-17 7:23 UTC (permalink / raw)
To: Wu, Wenjun1, dev, Yang, Qiming
> -----Original Message-----
> From: Wu, Wenjun1 <wenjun1.wu@intel.com>
> Sent: Tuesday, May 17, 2022 1:09 PM
> To: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: [PATCH v11 0/7] Enable ETS-based TX QoS on PF
>
> This patch set enables ETS-based TX QoS on the PF. Bandwidth and priority
> can be configured at both the queue and queue group levels, while weight
> can only be configured at the queue level.
>
> v2: fix code style issue.
> v3: fix uninitialization issue.
> v4: fix logical issue.
> v5: fix CI testing issue. Add explicit cast.
> v6: add release note.
> v7: merge the release note with the previous patch.
> v8: rework shared code patch.
> v9: rebase the code.
> v10: rebase the code and rework the release note.
> v11: add fix information in commit log.
>
> Ting Xu (1):
> net/ice: support queue and queue group bandwidth limit
>
> Wenjun Wu (6):
> net/ice/base: fix dead lock issue when getting node from ID type
> net/ice/base: support queue BW allocation configuration
> net/ice/base: support priority configuration of the exact node
> net/ice: support queue and queue group priority configuration
> net/ice: support queue weight configuration
> net/ice: add warning log for unsupported configuration
>
> doc/guides/rel_notes/release_22_07.rst | 3 +
> drivers/net/ice/base/ice_sched.c | 90 ++-
> drivers/net/ice/base/ice_sched.h | 6 +
> drivers/net/ice/ice_ethdev.c | 19 +
> drivers/net/ice/ice_ethdev.h | 55 ++
> drivers/net/ice/ice_tm.c | 845 +++++++++++++++++++++++++
> drivers/net/ice/meson.build | 1 +
> 7 files changed, 1017 insertions(+), 2 deletions(-) create mode 100644
> drivers/net/ice/ice_tm.c
>
> --
> 2.25.1
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
^ permalink raw reply [flat|nested] 109+ messages in thread