* RE: [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
@ 2024-01-05 8:10 ` Wu, Wenjun1
2024-01-05 14:11 ` [PATCH v2 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 21+ messages in thread
From: Wu, Wenjun1 @ 2024-01-05 8:10 UTC (permalink / raw)
To: Zhang, Qi Z, Yang, Qiming; +Cc: dev
> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Friday, January 5, 2024 10:11 PM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler
>
> Remove dummy layers, code refactor, complete document
>
> Qi Zhang (3):
> net/ice: hide port and TC layer in Tx sched tree
> net/ice: refactor tm config data structure
> doc: update ice document for qos
>
> v2:
> - fix typos.
>
> doc/guides/nics/ice.rst | 19 +++
> drivers/net/ice/ice_ethdev.h | 12 +-
> drivers/net/ice/ice_tm.c | 285 +++++++++++------------------------
> 3 files changed, 112 insertions(+), 204 deletions(-)
>
> --
> 2.31.1
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
* RE: [PATCH v2 2/3] net/ice: refactor tm config data structure
2024-01-05 14:11 ` [PATCH v2 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-05 8:37 ` Zhang, Qi Z
2024-01-09 2:50 ` Wu, Wenjun1
1 sibling, 0 replies; 21+ messages in thread
From: Zhang, Qi Z @ 2024-01-05 8:37 UTC (permalink / raw)
To: Yang, Qiming, Wu, Wenjun1; +Cc: dev
> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Friday, January 5, 2024 10:11 PM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v2 2/3] net/ice: refactor tm config data structure
>
> Simplified struct ice_tm_conf by removing per level node list.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> drivers/net/ice/ice_ethdev.h | 5 +-
> drivers/net/ice/ice_tm.c | 210 +++++++++++++++--------------------
> 2 files changed, 88 insertions(+), 127 deletions(-)
>
> diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index
> ae22c29ffc..008a7a23b9 100644
> --- a/drivers/net/ice/ice_ethdev.h
> +++ b/drivers/net/ice/ice_ethdev.h
> @@ -472,6 +472,7 @@ struct ice_tm_node {
> uint32_t id;
> uint32_t priority;
> uint32_t weight;
> + uint32_t level;
> uint32_t reference_count;
> struct ice_tm_node *parent;
> struct ice_tm_node **children;
> @@ -492,10 +493,6 @@ enum ice_tm_node_type { struct ice_tm_conf {
> struct ice_shaper_profile_list shaper_profile_list;
> struct ice_tm_node *root; /* root node - port */
> - struct ice_tm_node_list qgroup_list; /* node list for all the queue
> groups */
> - struct ice_tm_node_list queue_list; /* node list for all the queues */
> - uint32_t nb_qgroup_node;
> - uint32_t nb_queue_node;
> bool committed;
> bool clear_on_fail;
> };
> diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c index
> 7ae68c683b..7c662f8a85 100644
> --- a/drivers/net/ice/ice_tm.c
> +++ b/drivers/net/ice/ice_tm.c
> @@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
> /* initialize node configuration */
> TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
> pf->tm_conf.root = NULL;
> - TAILQ_INIT(&pf->tm_conf.qgroup_list);
> - TAILQ_INIT(&pf->tm_conf.queue_list);
> - pf->tm_conf.nb_qgroup_node = 0;
> - pf->tm_conf.nb_queue_node = 0;
> pf->tm_conf.committed = false;
> pf->tm_conf.clear_on_fail = false;
> }
>
> -void
> -ice_tm_conf_uninit(struct rte_eth_dev *dev)
> +static void free_node(struct ice_tm_node *root)
> {
> - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> >dev_private);
> - struct ice_tm_node *tm_node;
> + uint32_t i;
>
> - /* clear node configuration */
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_queue_node = 0;
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_qgroup_node = 0;
> - if (pf->tm_conf.root) {
> - rte_free(pf->tm_conf.root);
> - pf->tm_conf.root = NULL;
> - }
> + if (root == NULL)
> + return;
> +
> + for (i = 0; i < root->reference_count; i++)
> + free_node(root->children[i]);
> +
> + rte_free(root);
The memory of the pointer array for children should also be freed:
rte_free(root->children);
As the patch has been acked, I will squash the fix when merging the patch.
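For reference, a corrected free_node() along those lines might look like the
following (a sketch of the squashed fix, not necessarily the exact merged code):

static void free_node(struct ice_tm_node *root)
{
	uint32_t i;

	if (root == NULL)
		return;

	for (i = 0; i < root->reference_count; i++)
		free_node(root->children[i]);

	/* release the children pointer array before the node itself */
	rte_free(root->children);
	rte_free(root);
}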
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler.
@ 2024-01-05 13:59 Qi Zhang
2024-01-05 13:59 ` [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
` (4 more replies)
0 siblings, 5 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 13:59 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Remove dummy layers, code refactor, complete document.
Qi Zhang (3):
net/ice: hide port and TC layer in Tx sched tree
net/ice: refactor tm config data structure
doc: update ice document for qos
doc/guides/nics/ice.rst | 19 +++
drivers/net/ice/ice_ethdev.h | 12 +-
drivers/net/ice/ice_tm.c | 285 +++++++++++------------------------
3 files changed, 112 insertions(+), 204 deletions(-)
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
@ 2024-01-05 13:59 ` Qi Zhang
2024-01-05 13:59 ` [PATCH 2/3] net/ice: refactor tm config data structure Qi Zhang
` (3 subsequent siblings)
4 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 13:59 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
In the current 5-layer tree implementation, the port and TC layers
are not configurable, so it is not necessary to expose them to the application.
The patch hides the top 2 layers and represents the root of the tree at the
VSI layer. From the application's point of view, it is a 3-layer scheduler tree:
Port -> Queue Group -> Queue.
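As a rough illustration of the resulting model (not part of the patch): with the
top layers hidden, an application could build the whole hierarchy through the
generic rte_tm API, one node per exposed layer. The node ids below are arbitrary
and the leaf node id is assumed to map to a Tx queue id; a minimal sketch:

#include <rte_tm.h>

#define ROOT_ID  1000	/* hypothetical id for the port (root) node */
#define QGRP_ID  2000	/* hypothetical id for a queue group node */
#define QUEUE_ID 0	/* leaf node id, assumed equal to the Tx queue id */

static int build_3_layer_tree(uint16_t port_id, struct rte_tm_error *err)
{
	struct rte_tm_node_params np = {
		.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
	};
	int ret;

	/* level 0: port (root) node, no parent */
	ret = rte_tm_node_add(port_id, ROOT_ID, RTE_TM_NODE_ID_NULL,
			      0, 1, 0, &np, err);
	if (ret != 0)
		return ret;

	/* level 1: queue group node */
	ret = rte_tm_node_add(port_id, QGRP_ID, ROOT_ID, 0, 1, 1, &np, err);
	if (ret != 0)
		return ret;

	/* level 2: queue (leaf) node */
	ret = rte_tm_node_add(port_id, QUEUE_ID, QGRP_ID, 0, 1, 2, &np, err);
	if (ret != 0)
		return ret;

	return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, err);
}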
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 7 ----
drivers/net/ice/ice_tm.c | 79 ++++--------------------------------
2 files changed, 7 insertions(+), 79 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
struct ice_tm_node {
TAILQ_ENTRY(ice_tm_node) node;
uint32_t id;
- uint32_t tc;
uint32_t priority;
uint32_t weight;
uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
/* node type of Traffic Manager */
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
- ICE_TM_NODE_TYPE_TC,
- ICE_TM_NODE_TYPE_VSI,
ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list tc_list; /* node list for all the TCs */
- struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_tc_node;
- uint32_t nb_vsi_node;
uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index b570798f07..7ae68c683b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.tc_list);
- TAILQ_INIT(&pf->tm_conf.vsi_list);
TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_tc_node = 0;
- pf->tm_conf.nb_vsi_node = 0;
pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
@@ -72,16 +68,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_qgroup_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_vsi_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_tc_node = 0;
if (pf->tm_conf.root) {
rte_free(pf->tm_conf.root);
pf->tm_conf.root = NULL;
@@ -93,8 +79,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
uint32_t node_id, enum ice_tm_node_type *node_type)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
- struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -104,20 +88,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
return pf->tm_conf.root;
}
- TAILQ_FOREACH(tm_node, tc_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_TC;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, vsi_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_VSI;
- return tm_node;
- }
- }
-
TAILQ_FOREACH(tm_node, qgroup_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -371,6 +341,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+#define MAX_QUEUE_PER_GROUP 8
+
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -384,8 +356,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
- uint16_t tc_nb = 1;
- uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -440,6 +410,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
@@ -448,7 +419,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
/* check the parent node */
parent_node = ice_tm_node_search(dev, parent_node_id,
&parent_node_type);
@@ -458,8 +428,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC &&
- parent_node_type != ICE_TM_NODE_TYPE_VSI &&
parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
@@ -475,20 +443,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
/* check the node number */
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- /* check the TC number */
- if (pf->tm_conf.nb_tc_node >= tc_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many TCs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- /* check the VSI number */
- if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many VSIs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -497,7 +451,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -509,7 +463,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -538,24 +491,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
- tm_node, node);
- tm_node->tc = pf->tm_conf.nb_tc_node;
- pf->tm_conf.nb_tc_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
- tm_node, node);
- tm_node->tc = parent_node->tc;
- pf->tm_conf.nb_vsi_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
tm_node, node);
- tm_node->tc = parent_node->parent->tc;
pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -603,15 +544,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or VSI or queue group or queue node */
+ /* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- pf->tm_conf.nb_tc_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- pf->tm_conf.nb_vsi_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
pf->tm_conf.nb_qgroup_node--;
} else {
@@ -872,7 +807,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+ tm_node = pf->tm_conf.root;
ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
if (ret_val) {
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 2/3] net/ice: refactor tm config data structure
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 13:59 ` [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
@ 2024-01-05 13:59 ` Qi Zhang
2024-01-05 13:59 ` [PATCH 3/3] doc: update ice document for qos Qi Zhang
` (2 subsequent siblings)
4 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 13:59 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Simplified struct ice_tm_conf by removing per level node list.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 5 +-
drivers/net/ice/ice_tm.c | 210 +++++++++++++++--------------------
2 files changed, 88 insertions(+), 127 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
uint32_t id;
uint32_t priority;
uint32_t weight;
+ uint32_t level;
uint32_t reference_count;
struct ice_tm_node *parent;
struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
- struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_qgroup_node;
- uint32_t nb_queue_node;
bool committed;
bool clear_on_fail;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..7c662f8a85 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.qgroup_list);
- TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_qgroup_node = 0;
- pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
pf->tm_conf.clear_on_fail = false;
}
-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node *tm_node;
+ uint32_t i;
- /* clear node configuration */
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_queue_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_qgroup_node = 0;
- if (pf->tm_conf.root) {
- rte_free(pf->tm_conf.root);
- pf->tm_conf.root = NULL;
- }
+ if (root == NULL)
+ return;
+
+ for (i = 0; i < root->reference_count; i++)
+ free_node(root->children[i]);
+
+ rte_free(root);
}
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
- uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
-
- if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_PORT;
- return pf->tm_conf.root;
- }
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QGROUP;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, queue_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QUEUE;
- return tm_node;
- }
- }
-
- return NULL;
+ free_node(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
}
static int
@@ -195,11 +159,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return 0;
}
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+ uint32_t i;
+
+ if (root == NULL || root->id == id)
+ return root;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *node = find_node(root->children[i], id);
+
+ if (node)
+ return node;
+ }
+
+ return NULL;
+}
+
static int
ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error)
{
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node *tm_node;
if (!is_leaf || !error)
@@ -212,14 +194,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
return -EINVAL;
}
- if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
*is_leaf = true;
else
*is_leaf = false;
@@ -351,8 +333,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
- enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
@@ -367,7 +347,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return ret;
/* check if the node is already existed */
- if (ice_tm_node_search(dev, node_id, &node_type)) {
+ if (find_node(pf->tm_conf.root, node_id)) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "node id already used";
return -EINVAL;
@@ -408,6 +388,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
if (!tm_node)
return -ENOMEM;
tm_node->id = node_id;
+ tm_node->level = ICE_TM_NODE_TYPE_PORT;
tm_node->parent = NULL;
tm_node->reference_count = 0;
tm_node->shaper_profile = shaper_profile;
@@ -420,29 +401,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check the parent node */
- parent_node = ice_tm_node_search(dev, parent_node_id,
- &parent_node_type);
+ parent_node = find_node(pf->tm_conf.root, parent_node_id);
if (!parent_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent not exist";
return -EINVAL;
}
- if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+ parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
- level_id != (uint32_t)parent_node_type + 1) {
+ level_id != parent_node->level + 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
error->message = "Wrong level";
return -EINVAL;
}
/* check the node number */
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -473,6 +453,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->weight = weight;
tm_node->reference_count = 0;
tm_node->parent = parent_node;
+ tm_node->level = parent_node->level + 1;
tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
@@ -490,15 +471,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
- tm_node, node);
- pf->tm_conf.nb_qgroup_node++;
- } else {
- TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
- tm_node, node);
- pf->tm_conf.nb_queue_node++;
- }
tm_node->parent->reference_count++;
return 0;
@@ -509,7 +481,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_node *tm_node;
if (!error)
@@ -522,7 +493,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
@@ -538,7 +509,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* root node */
- if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
rte_free(tm_node);
pf->tm_conf.root = NULL;
return 0;
@@ -546,13 +517,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
/* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- pf->tm_conf.nb_qgroup_node--;
- } else {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- pf->tm_conf.nb_queue_node--;
- }
rte_free(tm_node);
return 0;
@@ -708,9 +672,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root = pf->tm_conf.root;
+ uint32_t i;
int ret;
/* reset vsi_node */
@@ -720,8 +684,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
return ret;
}
- /* reset queue group nodes */
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ return 0;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
+
if (tm_node->sched_node == NULL)
continue;
@@ -774,9 +742,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root;
struct ice_sched_node *vsi_node = NULL;
struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
@@ -807,14 +773,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = pf->tm_conf.root;
+ root = pf->tm_conf.root;
- ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+ ret_val = ice_set_node_rate(hw, root, vsi_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
PMD_DRV_LOG(ERR,
"configure vsi node %u bandwidth failed",
- tm_node->id);
+ root->id);
goto add_leaf;
}
@@ -825,13 +791,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
idx_vsi_child = 0;
idx_qg = 0;
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ goto commit;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
struct ice_tm_node *tm_child_node;
struct ice_sched_node *qgroup_sched_node =
vsi_node->children[idx_vsi_child]->children[idx_qg];
+ uint32_t j;
- for (i = 0; i < tm_node->reference_count; i++) {
- tm_child_node = tm_node->children[i];
+ ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
+ goto reset_leaf;
+ }
+
+ for (j = 0; j < tm_node->reference_count; j++) {
+ tm_child_node = tm_node->children[j];
qid = tm_child_node->id;
ret_val = ice_tx_queue_start(dev, qid);
if (ret_val) {
@@ -847,25 +827,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
goto reset_leaf;
}
- if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
- continue;
- ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+ qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto reset_leaf;
+ }
+ }
+ ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
goto reset_leaf;
}
}
- ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
-
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -878,23 +858,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
}
}
- /* config queue nodes */
- TAILQ_FOREACH(tm_node, queue_list, node) {
- qid = tm_node->id;
- txq = dev->data->tx_queues[qid];
- q_teid = txq->q_teid;
- queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
- ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
- }
-
+commit:
pf->tm_conf.committed = true;
pf->tm_conf.clear_on_fail = clear_on_fail;
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 3/3] doc: update ice document for qos
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 13:59 ` [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-05 13:59 ` [PATCH 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-05 13:59 ` Qi Zhang
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
4 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 13:59 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Add description for ice PMD's rte_tm capabilities.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/ice.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..1f737a009c 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::
Currently, raw pattern support is limited to the FDIR and Hash engines.
+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Traffic Management API (RTE_TM), allowing
+users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+ This is the root layer, supporting peak bandwidth configuration, with up to 32 children.
+
+- ``Queue Group Layer``
+
+ The middle layer, supporting peak / committed bandwidth, weight and priority configurations,
+ with up to 8 children.
+
+- ``Queue Layer``
+
+ The leaf layer, supporting peak / committed bandwidth, weight and priority configurations.
+
Additional Options
++++++++++++++++++
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
` (2 preceding siblings ...)
2024-01-05 13:59 ` [PATCH 3/3] doc: update ice document for qos Qi Zhang
@ 2024-01-05 14:11 ` Qi Zhang
2024-01-05 8:10 ` Wu, Wenjun1
` (3 more replies)
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
4 siblings, 4 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 14:11 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Remove dummy layers, code refactor, complete document
Qi Zhang (3):
net/ice: hide port and TC layer in Tx sched tree
net/ice: refactor tm config data structure
doc: update ice document for qos
v2:
- fix typos.
doc/guides/nics/ice.rst | 19 +++
drivers/net/ice/ice_ethdev.h | 12 +-
drivers/net/ice/ice_tm.c | 285 +++++++++++------------------------
3 files changed, 112 insertions(+), 204 deletions(-)
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 1/3] net/ice: hide port and TC layer in Tx sched tree
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 8:10 ` Wu, Wenjun1
@ 2024-01-05 14:11 ` Qi Zhang
2024-01-05 14:11 ` [PATCH v2 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-05 14:11 ` [PATCH v2 3/3] doc: update ice document for qos Qi Zhang
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 14:11 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
In the current 5-layer tree implementation, the port and TC layers
are not configurable, so it is not necessary to expose them to the application.
The patch hides the top 2 layers and represents the root of the tree at the
VSI layer. From the application's point of view, it is a 3-layer scheduler tree:
Port -> Queue Group -> Queue.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 7 ----
drivers/net/ice/ice_tm.c | 79 ++++--------------------------------
2 files changed, 7 insertions(+), 79 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
struct ice_tm_node {
TAILQ_ENTRY(ice_tm_node) node;
uint32_t id;
- uint32_t tc;
uint32_t priority;
uint32_t weight;
uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
/* node type of Traffic Manager */
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
- ICE_TM_NODE_TYPE_TC,
- ICE_TM_NODE_TYPE_VSI,
ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list tc_list; /* node list for all the TCs */
- struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_tc_node;
- uint32_t nb_vsi_node;
uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index b570798f07..7ae68c683b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.tc_list);
- TAILQ_INIT(&pf->tm_conf.vsi_list);
TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_tc_node = 0;
- pf->tm_conf.nb_vsi_node = 0;
pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
@@ -72,16 +68,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_qgroup_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_vsi_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_tc_node = 0;
if (pf->tm_conf.root) {
rte_free(pf->tm_conf.root);
pf->tm_conf.root = NULL;
@@ -93,8 +79,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
uint32_t node_id, enum ice_tm_node_type *node_type)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
- struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -104,20 +88,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
return pf->tm_conf.root;
}
- TAILQ_FOREACH(tm_node, tc_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_TC;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, vsi_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_VSI;
- return tm_node;
- }
- }
-
TAILQ_FOREACH(tm_node, qgroup_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -371,6 +341,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+#define MAX_QUEUE_PER_GROUP 8
+
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -384,8 +356,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
- uint16_t tc_nb = 1;
- uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -440,6 +410,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
@@ -448,7 +419,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
/* check the parent node */
parent_node = ice_tm_node_search(dev, parent_node_id,
&parent_node_type);
@@ -458,8 +428,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC &&
- parent_node_type != ICE_TM_NODE_TYPE_VSI &&
parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
@@ -475,20 +443,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
/* check the node number */
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- /* check the TC number */
- if (pf->tm_conf.nb_tc_node >= tc_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many TCs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- /* check the VSI number */
- if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many VSIs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -497,7 +451,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -509,7 +463,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -538,24 +491,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
- tm_node, node);
- tm_node->tc = pf->tm_conf.nb_tc_node;
- pf->tm_conf.nb_tc_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
- tm_node, node);
- tm_node->tc = parent_node->tc;
- pf->tm_conf.nb_vsi_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
tm_node, node);
- tm_node->tc = parent_node->parent->tc;
pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -603,15 +544,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or VSI or queue group or queue node */
+ /* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- pf->tm_conf.nb_tc_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- pf->tm_conf.nb_vsi_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
pf->tm_conf.nb_qgroup_node--;
} else {
@@ -872,7 +807,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+ tm_node = pf->tm_conf.root;
ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
if (ret_val) {
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 2/3] net/ice: refactor tm config data structure
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 8:10 ` Wu, Wenjun1
2024-01-05 14:11 ` [PATCH v2 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
@ 2024-01-05 14:11 ` Qi Zhang
2024-01-05 8:37 ` Zhang, Qi Z
2024-01-09 2:50 ` Wu, Wenjun1
2024-01-05 14:11 ` [PATCH v2 3/3] doc: update ice document for qos Qi Zhang
3 siblings, 2 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 14:11 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Simplified struct ice_tm_conf by removing per level node list.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 5 +-
drivers/net/ice/ice_tm.c | 210 +++++++++++++++--------------------
2 files changed, 88 insertions(+), 127 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
uint32_t id;
uint32_t priority;
uint32_t weight;
+ uint32_t level;
uint32_t reference_count;
struct ice_tm_node *parent;
struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
- struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_qgroup_node;
- uint32_t nb_queue_node;
bool committed;
bool clear_on_fail;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..7c662f8a85 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.qgroup_list);
- TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_qgroup_node = 0;
- pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
pf->tm_conf.clear_on_fail = false;
}
-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node *tm_node;
+ uint32_t i;
- /* clear node configuration */
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_queue_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_qgroup_node = 0;
- if (pf->tm_conf.root) {
- rte_free(pf->tm_conf.root);
- pf->tm_conf.root = NULL;
- }
+ if (root == NULL)
+ return;
+
+ for (i = 0; i < root->reference_count; i++)
+ free_node(root->children[i]);
+
+ rte_free(root);
}
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
- uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
-
- if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_PORT;
- return pf->tm_conf.root;
- }
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QGROUP;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, queue_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QUEUE;
- return tm_node;
- }
- }
-
- return NULL;
+ free_node(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
}
static int
@@ -195,11 +159,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return 0;
}
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+ uint32_t i;
+
+ if (root == NULL || root->id == id)
+ return root;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *node = find_node(root->children[i], id);
+
+ if (node)
+ return node;
+ }
+
+ return NULL;
+}
+
static int
ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error)
{
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node *tm_node;
if (!is_leaf || !error)
@@ -212,14 +194,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
return -EINVAL;
}
- if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
*is_leaf = true;
else
*is_leaf = false;
@@ -351,8 +333,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
- enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
@@ -367,7 +347,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return ret;
/* check if the node is already existed */
- if (ice_tm_node_search(dev, node_id, &node_type)) {
+ if (find_node(pf->tm_conf.root, node_id)) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "node id already used";
return -EINVAL;
@@ -408,6 +388,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
if (!tm_node)
return -ENOMEM;
tm_node->id = node_id;
+ tm_node->level = ICE_TM_NODE_TYPE_PORT;
tm_node->parent = NULL;
tm_node->reference_count = 0;
tm_node->shaper_profile = shaper_profile;
@@ -420,29 +401,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check the parent node */
- parent_node = ice_tm_node_search(dev, parent_node_id,
- &parent_node_type);
+ parent_node = find_node(pf->tm_conf.root, parent_node_id);
if (!parent_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent not exist";
return -EINVAL;
}
- if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+ parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
- level_id != (uint32_t)parent_node_type + 1) {
+ level_id != parent_node->level + 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
error->message = "Wrong level";
return -EINVAL;
}
/* check the node number */
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -473,6 +453,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->weight = weight;
tm_node->reference_count = 0;
tm_node->parent = parent_node;
+ tm_node->level = parent_node->level + 1;
tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
@@ -490,15 +471,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
- tm_node, node);
- pf->tm_conf.nb_qgroup_node++;
- } else {
- TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
- tm_node, node);
- pf->tm_conf.nb_queue_node++;
- }
tm_node->parent->reference_count++;
return 0;
@@ -509,7 +481,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_node *tm_node;
if (!error)
@@ -522,7 +493,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
@@ -538,7 +509,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* root node */
- if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
rte_free(tm_node);
pf->tm_conf.root = NULL;
return 0;
@@ -546,13 +517,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
/* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- pf->tm_conf.nb_qgroup_node--;
- } else {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- pf->tm_conf.nb_queue_node--;
- }
rte_free(tm_node);
return 0;
@@ -708,9 +672,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root = pf->tm_conf.root;
+ uint32_t i;
int ret;
/* reset vsi_node */
@@ -720,8 +684,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
return ret;
}
- /* reset queue group nodes */
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ return 0;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
+
if (tm_node->sched_node == NULL)
continue;
@@ -774,9 +742,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root;
struct ice_sched_node *vsi_node = NULL;
struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
@@ -807,14 +773,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = pf->tm_conf.root;
+ root = pf->tm_conf.root;
- ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+ ret_val = ice_set_node_rate(hw, root, vsi_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
PMD_DRV_LOG(ERR,
"configure vsi node %u bandwidth failed",
- tm_node->id);
+ root->id);
goto add_leaf;
}
@@ -825,13 +791,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
idx_vsi_child = 0;
idx_qg = 0;
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ goto commit;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
struct ice_tm_node *tm_child_node;
struct ice_sched_node *qgroup_sched_node =
vsi_node->children[idx_vsi_child]->children[idx_qg];
+ uint32_t j;
- for (i = 0; i < tm_node->reference_count; i++) {
- tm_child_node = tm_node->children[i];
+ ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
+ goto reset_leaf;
+ }
+
+ for (j = 0; j < tm_node->reference_count; j++) {
+ tm_child_node = tm_node->children[j];
qid = tm_child_node->id;
ret_val = ice_tx_queue_start(dev, qid);
if (ret_val) {
@@ -847,25 +827,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
goto reset_leaf;
}
- if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
- continue;
- ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+ qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto reset_leaf;
+ }
+ }
+ ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
goto reset_leaf;
}
}
- ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
-
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -878,23 +858,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
}
}
- /* config queue nodes */
- TAILQ_FOREACH(tm_node, queue_list, node) {
- qid = tm_node->id;
- txq = dev->data->tx_queues[qid];
- q_teid = txq->q_teid;
- queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
- ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
- }
-
+commit:
pf->tm_conf.committed = true;
pf->tm_conf.clear_on_fail = clear_on_fail;
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v2 3/3] doc: update ice document for qos
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
` (2 preceding siblings ...)
2024-01-05 14:11 ` [PATCH v2 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-05 14:11 ` Qi Zhang
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 14:11 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Add description for ice PMD's rte_tm capabilities.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
doc/guides/nics/ice.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..3d381a266b 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::
Currently, raw pattern support is limited to the FDIR and Hash engines.
+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Traffic Management API (RTE_TM), allowing
+users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+ This is the root layer, supporting peak bandwidth configuration, with up to 32 children.
+
+- ``Queue Group Layer``
+
+ The middle layer, supporting peak / committed bandwidth, weight and priority configurations,
+ with up to 8 children.
+
+- ``Queue Layer``
+
+ The leaf layer, supporting peak / committed bandwidth, weight and priority configurations.
+
Additional Options
++++++++++++++++++
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
` (3 preceding siblings ...)
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
@ 2024-01-05 21:12 ` Qi Zhang
2024-01-05 21:12 ` [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
` (3 more replies)
4 siblings, 4 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 21:12 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Remove dummy layers, code refactor, complete document
v3:
- fix tm_node memory free.
- fix corruption when sibling nodes are not deleted in reverse order (see the
  sketch below).
v2:
- fix typos.
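A minimal sketch of what the sibling-deletion fix could look like (hypothetical
helper, not the exact v3 code): when a node is removed, compact the parent's
children[] array so that any iteration bounded by reference_count stays valid
even if siblings are deleted out of order.

static void remove_child(struct ice_tm_node *parent, struct ice_tm_node *child)
{
	uint32_t i, j;

	for (i = 0; i < parent->reference_count; i++) {
		if (parent->children[i] != child)
			continue;
		/* shift the remaining siblings left to keep the array dense */
		for (j = i; j + 1 < parent->reference_count; j++)
			parent->children[j] = parent->children[j + 1];
		parent->reference_count--;
		break;
	}
}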
Qi Zhang (3):
net/ice: hide port and TC layer in Tx sched tree
net/ice: refactor tm config data structure
doc: update ice document for qos
doc/guides/nics/ice.rst | 19 +++
drivers/net/ice/ice_ethdev.h | 12 +-
drivers/net/ice/ice_tm.c | 313 +++++++++++++----------------------
3 files changed, 132 insertions(+), 212 deletions(-)
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
@ 2024-01-05 21:12 ` Qi Zhang
2024-01-05 21:12 ` [PATCH v3 2/3] net/ice: refactor tm config data structure Qi Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 21:12 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
In the current 5-layer tree implementation, the port and TC layers
are not configurable, so it is not necessary to expose them to the
application. The patch hides the top 2 layers and represents the root
of the tree at the VSI layer. From the application's point of view, it
is a 3-layer scheduler tree:
Port -> Queue Group -> Queue.
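For illustration, a minimal sketch (not part of this patch) of how an
application could build the resulting 3-layer hierarchy through the
generic rte_tm API; the port ID, the node IDs 1000/900 and the queue
count are arbitrary example values, not requirements of the driver:

#include <rte_tm.h>

static int
build_3_layer_tree(uint16_t port_id, uint16_t nb_txq)
{
	/* no shaper attached in this sketch */
	struct rte_tm_node_params nonleaf = {
		.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
	};
	struct rte_tm_node_params leaf = {
		.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
		.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE,
	};
	struct rte_tm_error err;
	uint16_t q;
	int ret;

	/* level 0: root (port) node */
	ret = rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL,
			      0, 1, 0, &nonleaf, &err);
	if (ret != 0)
		return ret;

	/* level 1: a single queue group node under the root */
	ret = rte_tm_node_add(port_id, 900, 1000, 0, 1, 1, &nonleaf, &err);
	if (ret != 0)
		return ret;

	/* level 2: leaf nodes, node ID == Tx queue ID */
	for (q = 0; q < nb_txq; q++) {
		ret = rte_tm_node_add(port_id, q, 900, 0, 1, 2, &leaf, &err);
		if (ret != 0)
			return ret;
	}

	return rte_tm_hierarchy_commit(port_id, 1, &err);
}

Additional queue group nodes follow the same pattern; as the next patch
shows, the driver rejects more than 8 queues under one queue group.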
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 7 ----
drivers/net/ice/ice_tm.c | 79 ++++--------------------------------
2 files changed, 7 insertions(+), 79 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
struct ice_tm_node {
TAILQ_ENTRY(ice_tm_node) node;
uint32_t id;
- uint32_t tc;
uint32_t priority;
uint32_t weight;
uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
/* node type of Traffic Manager */
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
- ICE_TM_NODE_TYPE_TC,
- ICE_TM_NODE_TYPE_VSI,
ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list tc_list; /* node list for all the TCs */
- struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_tc_node;
- uint32_t nb_vsi_node;
uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index b570798f07..7ae68c683b 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.tc_list);
- TAILQ_INIT(&pf->tm_conf.vsi_list);
TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_tc_node = 0;
- pf->tm_conf.nb_vsi_node = 0;
pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
@@ -72,16 +68,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_qgroup_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_vsi_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_tc_node = 0;
if (pf->tm_conf.root) {
rte_free(pf->tm_conf.root);
pf->tm_conf.root = NULL;
@@ -93,8 +79,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
uint32_t node_id, enum ice_tm_node_type *node_type)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
- struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -104,20 +88,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
return pf->tm_conf.root;
}
- TAILQ_FOREACH(tm_node, tc_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_TC;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, vsi_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_VSI;
- return tm_node;
- }
- }
-
TAILQ_FOREACH(tm_node, qgroup_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -371,6 +341,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+#define MAX_QUEUE_PER_GROUP 8
+
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -384,8 +356,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
- uint16_t tc_nb = 1;
- uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -440,6 +410,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
@@ -448,7 +419,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
/* check the parent node */
parent_node = ice_tm_node_search(dev, parent_node_id,
&parent_node_type);
@@ -458,8 +428,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC &&
- parent_node_type != ICE_TM_NODE_TYPE_VSI &&
parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
@@ -475,20 +443,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
/* check the node number */
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- /* check the TC number */
- if (pf->tm_conf.nb_tc_node >= tc_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many TCs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- /* check the VSI number */
- if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many VSIs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -497,7 +451,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -509,7 +463,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -538,24 +491,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
- tm_node, node);
- tm_node->tc = pf->tm_conf.nb_tc_node;
- pf->tm_conf.nb_tc_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
- tm_node, node);
- tm_node->tc = parent_node->tc;
- pf->tm_conf.nb_vsi_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
tm_node, node);
- tm_node->tc = parent_node->parent->tc;
pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -603,15 +544,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or VSI or queue group or queue node */
+ /* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- pf->tm_conf.nb_tc_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- pf->tm_conf.nb_vsi_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
pf->tm_conf.nb_qgroup_node--;
} else {
@@ -872,7 +807,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+ tm_node = pf->tm_conf.root;
ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
if (ret_val) {
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 2/3] net/ice: refactor tm config data structure
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 21:12 ` [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
@ 2024-01-05 21:12 ` Qi Zhang
2024-01-05 21:12 ` [PATCH v3 3/3] doc: update ice document for qos Qi Zhang
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 21:12 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Simplify struct ice_tm_conf by removing the per-level node lists.
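For reference, a minimal, generic sketch of the single-allocation layout
used in the diff below: each node and its children pointer array come
from one rte_zmalloc(), so a single rte_free() releases both. The struct
name here is a stand-in, not the driver's:

#include <stdint.h>
#include <rte_malloc.h>

struct tm_node_sketch {			/* stand-in for struct ice_tm_node */
	uint32_t id;
	uint32_t reference_count;	/* number of children in use */
	struct tm_node_sketch **children;
};

static struct tm_node_sketch *
node_alloc_sketch(uint32_t max_children)
{
	struct tm_node_sketch *n;

	/* one allocation: node followed by its children pointer array */
	n = rte_zmalloc(NULL, sizeof(*n) + sizeof(n->children[0]) * max_children, 0);
	if (n == NULL)
		return NULL;
	n->children = (void *)((uint8_t *)n + sizeof(*n));
	return n;
}

Lookup and teardown then become simple recursive walks from
pf->tm_conf.root, as find_node() and free_node() below show.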
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 5 +-
drivers/net/ice/ice_tm.c | 244 ++++++++++++++++-------------------
2 files changed, 111 insertions(+), 138 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
uint32_t id;
uint32_t priority;
uint32_t weight;
+ uint32_t level;
uint32_t reference_count;
struct ice_tm_node *parent;
struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
- struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_qgroup_node;
- uint32_t nb_queue_node;
bool committed;
bool clear_on_fail;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 7ae68c683b..c579662843 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -6,6 +6,9 @@
#include "ice_ethdev.h"
#include "ice_rxtx.h"
+#define MAX_CHILDREN_PER_SCHED_NODE 8
+#define MAX_CHILDREN_PER_TM_NODE 256
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error);
@@ -43,66 +46,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.qgroup_list);
- TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_qgroup_node = 0;
- pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
pf->tm_conf.clear_on_fail = false;
}
-void
-ice_tm_conf_uninit(struct rte_eth_dev *dev)
+static void free_node(struct ice_tm_node *root)
{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node *tm_node;
+ uint32_t i;
- /* clear node configuration */
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_queue_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_qgroup_node = 0;
- if (pf->tm_conf.root) {
- rte_free(pf->tm_conf.root);
- pf->tm_conf.root = NULL;
- }
+ if (root == NULL)
+ return;
+
+ for (i = 0; i < root->reference_count; i++)
+ free_node(root->children[i]);
+
+ rte_free(root);
}
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
- uint32_t node_id, enum ice_tm_node_type *node_type)
+void
+ice_tm_conf_uninit(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
-
- if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_PORT;
- return pf->tm_conf.root;
- }
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QGROUP;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, queue_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QUEUE;
- return tm_node;
- }
- }
-
- return NULL;
+ free_node(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
}
static int
@@ -195,11 +162,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return 0;
}
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+ uint32_t i;
+
+ if (root == NULL || root->id == id)
+ return root;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *node = find_node(root->children[i], id);
+
+ if (node)
+ return node;
+ }
+
+ return NULL;
+}
+
static int
ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error)
{
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node *tm_node;
if (!is_leaf || !error)
@@ -212,14 +197,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
return -EINVAL;
}
- if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
*is_leaf = true;
else
*is_leaf = false;
@@ -341,8 +326,6 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
-#define MAX_QUEUE_PER_GROUP 8
-
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -351,8 +334,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
- enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
@@ -367,7 +348,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return ret;
/* check if the node is already existed */
- if (ice_tm_node_search(dev, node_id, &node_type)) {
+ if (find_node(pf->tm_conf.root, node_id)) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "node id already used";
return -EINVAL;
@@ -402,17 +383,19 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* add the root node */
- tm_node = rte_zmalloc("ice_tm_node",
- sizeof(struct ice_tm_node),
+ tm_node = rte_zmalloc(NULL,
+ sizeof(struct ice_tm_node) +
+ sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
0);
if (!tm_node)
return -ENOMEM;
tm_node->id = node_id;
+ tm_node->level = ICE_TM_NODE_TYPE_PORT;
tm_node->parent = NULL;
tm_node->reference_count = 0;
tm_node->shaper_profile = shaper_profile;
- tm_node->children = (struct ice_tm_node **)
- rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->children =
+ (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -420,29 +403,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check the parent node */
- parent_node = ice_tm_node_search(dev, parent_node_id,
- &parent_node_type);
+ parent_node = find_node(pf->tm_conf.root, parent_node_id);
if (!parent_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent not exist";
return -EINVAL;
}
- if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+ parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
- level_id != (uint32_t)parent_node_type + 1) {
+ level_id != parent_node->level + 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
error->message = "Wrong level";
return -EINVAL;
}
/* check the node number */
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -451,7 +433,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
+ if (parent_node->reference_count >=
+ MAX_CHILDREN_PER_SCHED_NODE) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -463,8 +446,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- tm_node = rte_zmalloc("ice_tm_node",
- sizeof(struct ice_tm_node),
+ tm_node = rte_zmalloc(NULL,
+ sizeof(struct ice_tm_node) +
+ sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
0);
if (!tm_node)
return -ENOMEM;
@@ -473,9 +457,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->weight = weight;
tm_node->reference_count = 0;
tm_node->parent = parent_node;
+ tm_node->level = parent_node->level + 1;
tm_node->shaper_profile = shaper_profile;
- tm_node->children = (struct ice_tm_node **)
- rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->children =
+ (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
@@ -490,15 +475,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
- tm_node, node);
- pf->tm_conf.nb_qgroup_node++;
- } else {
- TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
- tm_node, node);
- pf->tm_conf.nb_queue_node++;
- }
tm_node->parent->reference_count++;
return 0;
@@ -509,8 +485,8 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_node *tm_node;
+ uint32_t i, j;
if (!error)
return -EINVAL;
@@ -522,7 +498,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
@@ -538,21 +514,21 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* root node */
- if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
rte_free(tm_node);
pf->tm_conf.root = NULL;
return 0;
}
/* queue group or queue node */
+ for (i = 0; i < tm_node->parent->reference_count; i++)
+ if (tm_node->parent->children[i] == tm_node)
+ break;
+
+ for (j = i ; j < tm_node->parent->reference_count - 1; j++)
+ tm_node->parent->children[j] = tm_node->parent->children[j + 1];
+
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- pf->tm_conf.nb_qgroup_node--;
- } else {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- pf->tm_conf.nb_queue_node--;
- }
rte_free(tm_node);
return 0;
@@ -708,9 +684,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root = pf->tm_conf.root;
+ uint32_t i;
int ret;
/* reset vsi_node */
@@ -720,8 +696,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
return ret;
}
- /* reset queue group nodes */
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ return 0;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
+
if (tm_node->sched_node == NULL)
continue;
@@ -774,9 +754,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root;
struct ice_sched_node *vsi_node = NULL;
struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
@@ -807,14 +785,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = pf->tm_conf.root;
+ root = pf->tm_conf.root;
- ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+ ret_val = ice_set_node_rate(hw, root, vsi_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
PMD_DRV_LOG(ERR,
"configure vsi node %u bandwidth failed",
- tm_node->id);
+ root->id);
goto add_leaf;
}
@@ -825,13 +803,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
idx_vsi_child = 0;
idx_qg = 0;
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ goto commit;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
struct ice_tm_node *tm_child_node;
struct ice_sched_node *qgroup_sched_node =
vsi_node->children[idx_vsi_child]->children[idx_qg];
+ uint32_t j;
+
+ ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
+ goto reset_leaf;
+ }
- for (i = 0; i < tm_node->reference_count; i++) {
- tm_child_node = tm_node->children[i];
+ for (j = 0; j < tm_node->reference_count; j++) {
+ tm_child_node = tm_node->children[j];
qid = tm_child_node->id;
ret_val = ice_tx_queue_start(dev, qid);
if (ret_val) {
@@ -847,25 +839,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
goto reset_leaf;
}
- if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
- continue;
- ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+ qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto reset_leaf;
+ }
+ }
+ ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
goto reset_leaf;
}
}
- ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
-
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -878,23 +870,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
}
}
- /* config queue nodes */
- TAILQ_FOREACH(tm_node, queue_list, node) {
- qid = tm_node->id;
- txq = dev->data->tx_queues[qid];
- q_teid = txq->q_teid;
- queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
- ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
- }
-
+commit:
pf->tm_conf.committed = true;
pf->tm_conf.clear_on_fail = clear_on_fail;
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 3/3] doc: update ice document for qos
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 21:12 ` [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-05 21:12 ` [PATCH v3 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-05 21:12 ` Qi Zhang
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-05 21:12 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Add description for ice PMD's rte_tm capabilities.
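For illustration, a hypothetical snippet showing how the documented
capabilities map onto the generic API: create a shaper profile with
committed and peak rates and reference it when adding a queue group
node. Profile ID 10, node IDs 900/1000 and the rates and bucket sizes
are made-up example values, not requirements of the driver:

#include <rte_tm.h>

static int
add_shaped_queue_group(uint16_t port_id)
{
	/* rates are in bytes per second: ~500 Mbit/s committed, ~1 Gbit/s peak */
	struct rte_tm_shaper_params sp = {
		.committed = { .rate = 500000000 / 8, .size = 4096 },
		.peak = { .rate = 1000000000 / 8, .size = 4096 },
	};
	struct rte_tm_node_params np = { .shaper_profile_id = 10 };
	struct rte_tm_error err;
	int ret;

	ret = rte_tm_shaper_profile_add(port_id, 10, &sp, &err);
	if (ret != 0)
		return ret;

	/* queue group node 900 under root node 1000, priority 0, weight 1, level 1 */
	return rte_tm_node_add(port_id, 900, 1000, 0, 1, 1, &np, &err);
}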
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/nics/ice.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..3d381a266b 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::
Currently, raw pattern support is limited to the FDIR and Hash engines.
+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Traffic Management API (RTE_TM), allowing
+users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+ This is the root layer. It supports peak bandwidth configuration and up to 32 children.
+
+- ``Queue Group Layer``
+
+ The middle layer. It supports peak / committed bandwidth, weight and priority configurations,
+ and up to 8 children.
+
+- ``Queue Layer``
+
+ The leaf layer. It supports peak / committed bandwidth, weight and priority configurations.
+
Additional Options
++++++++++++++++++
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v4 0/3] simplified to 3 layer Tx scheduler
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
` (2 preceding siblings ...)
2024-01-05 21:12 ` [PATCH v3 3/3] doc: update ice document for qos Qi Zhang
@ 2024-01-08 20:21 ` Qi Zhang
2024-01-08 20:21 ` [PATCH v4 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
` (3 more replies)
3 siblings, 4 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-08 20:21 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Remove dummy layers, refactor the code and complete the documentation.
v4:
- rebase.
v3:
- fix tm_node memory free.
- fix corruption when sibling node deletion is not done in reverse order.
v2:
- fix typos.
Qi Zhang (3):
net/ice: hide port and TC layer in Tx sched tree
net/ice: refactor tm config data structure
doc: update ice document for qos
doc/guides/nics/ice.rst | 19 +++
drivers/net/ice/ice_ethdev.h | 12 +-
drivers/net/ice/ice_tm.c | 317 +++++++++++++----------------------
3 files changed, 134 insertions(+), 214 deletions(-)
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v4 1/3] net/ice: hide port and TC layer in Tx sched tree
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
@ 2024-01-08 20:21 ` Qi Zhang
2024-01-08 20:21 ` [PATCH v4 2/3] net/ice: refactor tm config data structure Qi Zhang
` (2 subsequent siblings)
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-08 20:21 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
In the current 5-layer tree implementation, the port and TC layers
are not configurable, so it is not necessary to expose them to the
application. The patch hides the top 2 layers and represents the root
of the tree at the VSI layer. From the application's point of view, it
is a 3-layer scheduler tree:
Port -> Queue Group -> Queue.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
---
drivers/net/ice/ice_ethdev.h | 7 ----
drivers/net/ice/ice_tm.c | 79 ++++--------------------------------
2 files changed, 7 insertions(+), 79 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index fa4981ed14..ae22c29ffc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -470,7 +470,6 @@ struct ice_tm_shaper_profile {
struct ice_tm_node {
TAILQ_ENTRY(ice_tm_node) node;
uint32_t id;
- uint32_t tc;
uint32_t priority;
uint32_t weight;
uint32_t reference_count;
@@ -484,8 +483,6 @@ struct ice_tm_node {
/* node type of Traffic Manager */
enum ice_tm_node_type {
ICE_TM_NODE_TYPE_PORT,
- ICE_TM_NODE_TYPE_TC,
- ICE_TM_NODE_TYPE_VSI,
ICE_TM_NODE_TYPE_QGROUP,
ICE_TM_NODE_TYPE_QUEUE,
ICE_TM_NODE_TYPE_MAX,
@@ -495,12 +492,8 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list tc_list; /* node list for all the TCs */
- struct ice_tm_node_list vsi_list; /* node list for all the VSIs */
struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_tc_node;
- uint32_t nb_vsi_node;
uint32_t nb_qgroup_node;
uint32_t nb_queue_node;
bool committed;
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index c00ecb6a97..d67783c77e 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -43,12 +43,8 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.tc_list);
- TAILQ_INIT(&pf->tm_conf.vsi_list);
TAILQ_INIT(&pf->tm_conf.qgroup_list);
TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_tc_node = 0;
- pf->tm_conf.nb_vsi_node = 0;
pf->tm_conf.nb_qgroup_node = 0;
pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
@@ -79,16 +75,6 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(tm_node);
}
pf->tm_conf.nb_qgroup_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list))) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_vsi_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_tc_node = 0;
if (pf->tm_conf.root) {
rte_free(pf->tm_conf.root);
pf->tm_conf.root = NULL;
@@ -100,8 +86,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
uint32_t node_id, enum ice_tm_node_type *node_type)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *tc_list = &pf->tm_conf.tc_list;
- struct ice_tm_node_list *vsi_list = &pf->tm_conf.vsi_list;
struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
struct ice_tm_node *tm_node;
@@ -111,20 +95,6 @@ ice_tm_node_search(struct rte_eth_dev *dev,
return pf->tm_conf.root;
}
- TAILQ_FOREACH(tm_node, tc_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_TC;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, vsi_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_VSI;
- return tm_node;
- }
- }
-
TAILQ_FOREACH(tm_node, qgroup_list, node) {
if (tm_node->id == node_id) {
*node_type = ICE_TM_NODE_TYPE_QGROUP;
@@ -378,6 +348,8 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+#define MAX_QUEUE_PER_GROUP 8
+
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -391,8 +363,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
- uint16_t tc_nb = 1;
- uint16_t vsi_nb = 1;
int ret;
if (!params || !error)
@@ -447,6 +417,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->id = node_id;
tm_node->parent = NULL;
tm_node->reference_count = 0;
+ tm_node->shaper_profile = shaper_profile;
tm_node->children = (struct ice_tm_node **)
rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
rte_memcpy(&tm_node->params, params,
@@ -455,7 +426,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or queue node */
/* check the parent node */
parent_node = ice_tm_node_search(dev, parent_node_id,
&parent_node_type);
@@ -465,8 +435,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return -EINVAL;
}
if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_TC &&
- parent_node_type != ICE_TM_NODE_TYPE_VSI &&
parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
@@ -482,20 +450,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
/* check the node number */
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- /* check the TC number */
- if (pf->tm_conf.nb_tc_node >= tc_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many TCs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- /* check the VSI number */
- if (pf->tm_conf.nb_vsi_node >= vsi_nb) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "too many VSIs";
- return -EINVAL;
- }
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -504,7 +458,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
+ if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -516,7 +470,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- /* add the TC or VSI or queue group or queue node */
tm_node = rte_zmalloc("ice_tm_node",
sizeof(struct ice_tm_node),
0);
@@ -545,24 +498,12 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
- tm_node, node);
- tm_node->tc = pf->tm_conf.nb_tc_node;
- pf->tm_conf.nb_tc_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.vsi_list,
- tm_node, node);
- tm_node->tc = parent_node->tc;
- pf->tm_conf.nb_vsi_node++;
- } else if (parent_node_type == ICE_TM_NODE_TYPE_VSI) {
TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
tm_node, node);
- tm_node->tc = parent_node->parent->tc;
pf->tm_conf.nb_qgroup_node++;
} else {
TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
tm_node, node);
- tm_node->tc = parent_node->parent->parent->tc;
pf->tm_conf.nb_queue_node++;
}
tm_node->parent->reference_count++;
@@ -610,15 +551,9 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
- /* TC or VSI or queue group or queue node */
+ /* queue group or queue node */
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_TC) {
- TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
- pf->tm_conf.nb_tc_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_VSI) {
- TAILQ_REMOVE(&pf->tm_conf.vsi_list, tm_node, node);
- pf->tm_conf.nb_vsi_node--;
- } else if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
+ if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
pf->tm_conf.nb_qgroup_node--;
} else {
@@ -884,7 +819,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = TAILQ_FIRST(&pf->tm_conf.vsi_list);
+ tm_node = pf->tm_conf.root;
ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
if (ret_val) {
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v4 2/3] net/ice: refactor tm config data structure
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
2024-01-08 20:21 ` [PATCH v4 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
@ 2024-01-08 20:21 ` Qi Zhang
2024-01-09 4:51 ` Wu, Wenjun1
2024-01-08 20:21 ` [PATCH v4 3/3] doc: update ice document for qos Qi Zhang
2024-01-09 5:30 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Zhang, Qi Z
3 siblings, 1 reply; 21+ messages in thread
From: Qi Zhang @ 2024-01-08 20:21 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Simplify struct ice_tm_conf by removing the per-level node lists.
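In outline, the deletion path now compacts the parent's children array
(see ice_tm_node_delete() in the diff below), which is what lets
siblings be removed in any order; here parent and node stand for
tm_node->parent and tm_node:

/* find the node's slot in its parent, shift the remaining pointers left */
uint32_t i, j;

for (i = 0; i < parent->reference_count; i++)
	if (parent->children[i] == node)
		break;

for (j = i; j + 1 < parent->reference_count; j++)
	parent->children[j] = parent->children[j + 1];

parent->reference_count--;
rte_free(node);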
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/ice/ice_ethdev.h | 5 +-
drivers/net/ice/ice_tm.c | 248 ++++++++++++++++-------------------
2 files changed, 113 insertions(+), 140 deletions(-)
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ae22c29ffc..008a7a23b9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -472,6 +472,7 @@ struct ice_tm_node {
uint32_t id;
uint32_t priority;
uint32_t weight;
+ uint32_t level;
uint32_t reference_count;
struct ice_tm_node *parent;
struct ice_tm_node **children;
@@ -492,10 +493,6 @@ enum ice_tm_node_type {
struct ice_tm_conf {
struct ice_shaper_profile_list shaper_profile_list;
struct ice_tm_node *root; /* root node - port */
- struct ice_tm_node_list qgroup_list; /* node list for all the queue groups */
- struct ice_tm_node_list queue_list; /* node list for all the queues */
- uint32_t nb_qgroup_node;
- uint32_t nb_queue_node;
bool committed;
bool clear_on_fail;
};
diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index d67783c77e..fbab0b8808 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -6,6 +6,9 @@
#include "ice_ethdev.h"
#include "ice_rxtx.h"
+#define MAX_CHILDREN_PER_SCHED_NODE 8
+#define MAX_CHILDREN_PER_TM_NODE 256
+
static int ice_hierarchy_commit(struct rte_eth_dev *dev,
int clear_on_fail,
__rte_unused struct rte_tm_error *error);
@@ -43,20 +46,28 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
/* initialize node configuration */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
pf->tm_conf.root = NULL;
- TAILQ_INIT(&pf->tm_conf.qgroup_list);
- TAILQ_INIT(&pf->tm_conf.queue_list);
- pf->tm_conf.nb_qgroup_node = 0;
- pf->tm_conf.nb_queue_node = 0;
pf->tm_conf.committed = false;
pf->tm_conf.clear_on_fail = false;
}
+static void free_node(struct ice_tm_node *root)
+{
+ uint32_t i;
+
+ if (root == NULL)
+ return;
+
+ for (i = 0; i < root->reference_count; i++)
+ free_node(root->children[i]);
+
+ rte_free(root);
+}
+
void
ice_tm_conf_uninit(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_shaper_profile *shaper_profile;
- struct ice_tm_node *tm_node;
/* clear profile */
while ((shaper_profile = TAILQ_FIRST(&pf->tm_conf.shaper_profile_list))) {
@@ -64,52 +75,8 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev)
rte_free(shaper_profile);
}
- /* clear node configuration */
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_queue_node = 0;
- while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- rte_free(tm_node);
- }
- pf->tm_conf.nb_qgroup_node = 0;
- if (pf->tm_conf.root) {
- rte_free(pf->tm_conf.root);
- pf->tm_conf.root = NULL;
- }
-}
-
-static inline struct ice_tm_node *
-ice_tm_node_search(struct rte_eth_dev *dev,
- uint32_t node_id, enum ice_tm_node_type *node_type)
-{
- struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
-
- if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_PORT;
- return pf->tm_conf.root;
- }
-
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QGROUP;
- return tm_node;
- }
- }
-
- TAILQ_FOREACH(tm_node, queue_list, node) {
- if (tm_node->id == node_id) {
- *node_type = ICE_TM_NODE_TYPE_QUEUE;
- return tm_node;
- }
- }
-
- return NULL;
+ free_node(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
}
static int
@@ -202,11 +169,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id,
return 0;
}
+static struct ice_tm_node *
+find_node(struct ice_tm_node *root, uint32_t id)
+{
+ uint32_t i;
+
+ if (root == NULL || root->id == id)
+ return root;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *node = find_node(root->children[i], id);
+
+ if (node)
+ return node;
+ }
+
+ return NULL;
+}
+
static int
ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error)
{
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
+ struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_tm_node *tm_node;
if (!is_leaf || !error)
@@ -219,14 +204,14 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
return -EINVAL;
}
- if (node_type == ICE_TM_NODE_TYPE_QUEUE)
+ if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
*is_leaf = true;
else
*is_leaf = false;
@@ -348,8 +333,6 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
-#define MAX_QUEUE_PER_GROUP 8
-
static int
ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
@@ -358,8 +341,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
- enum ice_tm_node_type parent_node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_shaper_profile *shaper_profile = NULL;
struct ice_tm_node *tm_node;
struct ice_tm_node *parent_node;
@@ -374,7 +355,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return ret;
/* check if the node is already existed */
- if (ice_tm_node_search(dev, node_id, &node_type)) {
+ if (find_node(pf->tm_conf.root, node_id)) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "node id already used";
return -EINVAL;
@@ -409,17 +390,19 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* add the root node */
- tm_node = rte_zmalloc("ice_tm_node",
- sizeof(struct ice_tm_node),
+ tm_node = rte_zmalloc(NULL,
+ sizeof(struct ice_tm_node) +
+ sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
0);
if (!tm_node)
return -ENOMEM;
tm_node->id = node_id;
+ tm_node->level = ICE_TM_NODE_TYPE_PORT;
tm_node->parent = NULL;
tm_node->reference_count = 0;
tm_node->shaper_profile = shaper_profile;
- tm_node->children = (struct ice_tm_node **)
- rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->children =
+ (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
pf->tm_conf.root = tm_node;
@@ -427,29 +410,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check the parent node */
- parent_node = ice_tm_node_search(dev, parent_node_id,
- &parent_node_type);
+ parent_node = find_node(pf->tm_conf.root, parent_node_id);
if (!parent_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent not exist";
return -EINVAL;
}
- if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
- parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
+ if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
+ parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
error->message = "parent is not valid";
return -EINVAL;
}
/* check level */
if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
- level_id != (uint32_t)parent_node_type + 1) {
+ level_id != parent_node->level + 1) {
error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
error->message = "Wrong level";
return -EINVAL;
}
/* check the node number */
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
/* check the queue group number */
if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
@@ -458,7 +440,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
} else {
/* check the queue number */
- if (parent_node->reference_count >= MAX_QUEUE_PER_GROUP) {
+ if (parent_node->reference_count >=
+ MAX_CHILDREN_PER_SCHED_NODE) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "too many queues";
return -EINVAL;
@@ -470,8 +453,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
}
}
- tm_node = rte_zmalloc("ice_tm_node",
- sizeof(struct ice_tm_node),
+ tm_node = rte_zmalloc(NULL,
+ sizeof(struct ice_tm_node) +
+ sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE,
0);
if (!tm_node)
return -ENOMEM;
@@ -480,9 +464,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
tm_node->weight = weight;
tm_node->reference_count = 0;
tm_node->parent = parent_node;
+ tm_node->level = parent_node->level + 1;
tm_node->shaper_profile = shaper_profile;
- tm_node->children = (struct ice_tm_node **)
- rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)), 0);
+ tm_node->children =
+ (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
tm_node->parent->children[tm_node->parent->reference_count] = tm_node;
if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE &&
@@ -497,15 +482,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
rte_memcpy(&tm_node->params, params,
sizeof(struct rte_tm_node_params));
- if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
- TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
- tm_node, node);
- pf->tm_conf.nb_qgroup_node++;
- } else {
- TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
- tm_node, node);
- pf->tm_conf.nb_queue_node++;
- }
tm_node->parent->reference_count++;
return 0;
@@ -516,8 +492,8 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
- enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
struct ice_tm_node *tm_node;
+ uint32_t i, j;
if (!error)
return -EINVAL;
@@ -529,7 +505,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* check if the node id exists */
- tm_node = ice_tm_node_search(dev, node_id, &node_type);
+ tm_node = find_node(pf->tm_conf.root, node_id);
if (!tm_node) {
error->type = RTE_TM_ERROR_TYPE_NODE_ID;
error->message = "no such node";
@@ -545,21 +521,21 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
}
/* root node */
- if (node_type == ICE_TM_NODE_TYPE_PORT) {
+ if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
rte_free(tm_node);
pf->tm_conf.root = NULL;
return 0;
}
/* queue group or queue node */
+ for (i = 0; i < tm_node->parent->reference_count; i++)
+ if (tm_node->parent->children[i] == tm_node)
+ break;
+
+ for (j = i ; j < tm_node->parent->reference_count - 1; j++)
+ tm_node->parent->children[j] = tm_node->parent->children[j + 1];
+
tm_node->parent->reference_count--;
- if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
- TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
- pf->tm_conf.nb_qgroup_node--;
- } else {
- TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
- pf->tm_conf.nb_queue_node--;
- }
rte_free(tm_node);
return 0;
@@ -720,9 +696,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root = pf->tm_conf.root;
+ uint32_t i;
int ret;
/* reset vsi_node */
@@ -732,8 +708,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev)
return ret;
}
- /* reset queue group nodes */
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ return 0;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
+
if (tm_node->sched_node == NULL)
continue;
@@ -786,9 +766,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
- struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
- struct ice_tm_node *tm_node;
+ struct ice_tm_node *root;
struct ice_sched_node *vsi_node = NULL;
struct ice_sched_node *queue_node;
struct ice_tx_queue *txq;
@@ -819,14 +797,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
/* config vsi node */
vsi_node = ice_get_vsi_node(hw);
- tm_node = pf->tm_conf.root;
+ root = pf->tm_conf.root;
- ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
+ ret_val = ice_set_node_rate(hw, root, vsi_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
PMD_DRV_LOG(ERR,
"configure vsi node %u bandwidth failed",
- tm_node->id);
+ root->id);
goto add_leaf;
}
@@ -837,13 +815,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
idx_vsi_child = 0;
idx_qg = 0;
- TAILQ_FOREACH(tm_node, qgroup_list, node) {
+ if (root == NULL)
+ goto commit;
+
+ for (i = 0; i < root->reference_count; i++) {
+ struct ice_tm_node *tm_node = root->children[i];
struct ice_tm_node *tm_child_node;
struct ice_sched_node *qgroup_sched_node =
vsi_node->children[idx_vsi_child]->children[idx_qg];
+ uint32_t j;
- for (i = 0; i < tm_node->reference_count; i++) {
- tm_child_node = tm_node->children[i];
+ ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
+ goto reset_leaf;
+ }
+
+ for (j = 0; j < tm_node->reference_count; j++) {
+ tm_child_node = tm_node->children[j];
qid = tm_child_node->id;
ret_val = ice_tx_queue_start(dev, qid);
if (ret_val) {
@@ -859,25 +851,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR, "get queue %u node failed", qid);
goto reset_leaf;
}
- if (queue_node->info.parent_teid == qgroup_sched_node->info.node_teid)
- continue;
- ret_val = ice_move_recfg_lan_txq(dev, queue_node, qgroup_sched_node, qid);
+ if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) {
+ ret_val = ice_move_recfg_lan_txq(dev, queue_node,
+ qgroup_sched_node, qid);
+ if (ret_val) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ goto reset_leaf;
+ }
+ }
+ ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node);
if (ret_val) {
error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR, "move queue %u failed", qid);
+ PMD_DRV_LOG(ERR,
+ "configure queue group node %u failed",
+ tm_node->id);
goto reset_leaf;
}
}
- ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
-
idx_qg++;
if (idx_qg >= nb_qg) {
idx_qg = 0;
@@ -890,23 +882,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev *dev,
}
}
- /* config queue nodes */
- TAILQ_FOREACH(tm_node, queue_list, node) {
- qid = tm_node->id;
- txq = dev->data->tx_queues[qid];
- q_teid = txq->q_teid;
- queue_node = ice_sched_get_node(hw->port_info, q_teid);
-
- ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
- if (ret_val) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- PMD_DRV_LOG(ERR,
- "configure queue group node %u failed",
- tm_node->id);
- goto reset_leaf;
- }
- }
-
+commit:
pf->tm_conf.committed = true;
pf->tm_conf.clear_on_fail = clear_on_fail;
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v4 3/3] doc: update ice document for qos
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
2024-01-08 20:21 ` [PATCH v4 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-08 20:21 ` [PATCH v4 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-08 20:21 ` Qi Zhang
2024-01-09 5:30 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Zhang, Qi Z
3 siblings, 0 replies; 21+ messages in thread
From: Qi Zhang @ 2024-01-08 20:21 UTC (permalink / raw)
To: qiming.yang, wenjun1.wu; +Cc: dev, Qi Zhang
Add description for ice PMD's rte_tm capabilities.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
---
doc/guides/nics/ice.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index bafb3ba022..163d6b8bb6 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -352,6 +352,25 @@ queue 3 using a raw pattern::
Currently, raw pattern support is limited to the FDIR and Hash engines.
+Traffic Management Support
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice PMD provides support for the Traffic Management API (RTE_TM), allowing
+users to offload a 3-layer Tx scheduler on the E810 NIC:
+
+- ``Port Layer``
+
+ This is the root layer. It supports peak bandwidth configuration and up to 32 children.
+
+- ``Queue Group Layer``
+
+ The middle layer. It supports peak / committed bandwidth, weight and priority configurations,
+ and up to 8 children.
+
+- ``Queue Layer``
+
+ The leaf layer. It supports peak / committed bandwidth, weight and priority configurations.
+
Additional Options
++++++++++++++++++
--
2.31.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* RE: [PATCH v2 2/3] net/ice: refactor tm config data structure
2024-01-05 14:11 ` [PATCH v2 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-05 8:37 ` Zhang, Qi Z
@ 2024-01-09 2:50 ` Wu, Wenjun1
1 sibling, 0 replies; 21+ messages in thread
From: Wu, Wenjun1 @ 2024-01-09 2:50 UTC (permalink / raw)
To: Zhang, Qi Z, Yang, Qiming; +Cc: dev
> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Friday, January 5, 2024 10:11 PM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v2 2/3] net/ice: refactor tm config data structure
>
> Simplified struct ice_tm_conf by removing per level node list.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> drivers/net/ice/ice_ethdev.h | 5 +-
> drivers/net/ice/ice_tm.c | 210 +++++++++++++++--------------------
> 2 files changed, 88 insertions(+), 127 deletions(-)
>
> diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index
> ae22c29ffc..008a7a23b9 100644
> --- a/drivers/net/ice/ice_ethdev.h
> +++ b/drivers/net/ice/ice_ethdev.h
> @@ -472,6 +472,7 @@ struct ice_tm_node {
> uint32_t id;
> uint32_t priority;
> uint32_t weight;
> + uint32_t level;
> uint32_t reference_count;
> struct ice_tm_node *parent;
> struct ice_tm_node **children;
> @@ -492,10 +493,6 @@ enum ice_tm_node_type { struct ice_tm_conf {
> struct ice_shaper_profile_list shaper_profile_list;
> struct ice_tm_node *root; /* root node - port */
> - struct ice_tm_node_list qgroup_list; /* node list for all the queue
> groups */
> - struct ice_tm_node_list queue_list; /* node list for all the queues */
> - uint32_t nb_qgroup_node;
> - uint32_t nb_queue_node;
> bool committed;
> bool clear_on_fail;
> };
> diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c index
> 7ae68c683b..7c662f8a85 100644
> --- a/drivers/net/ice/ice_tm.c
> +++ b/drivers/net/ice/ice_tm.c
> @@ -43,66 +43,30 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
> /* initialize node configuration */
> TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
> pf->tm_conf.root = NULL;
> - TAILQ_INIT(&pf->tm_conf.qgroup_list);
> - TAILQ_INIT(&pf->tm_conf.queue_list);
> - pf->tm_conf.nb_qgroup_node = 0;
> - pf->tm_conf.nb_queue_node = 0;
> pf->tm_conf.committed = false;
> pf->tm_conf.clear_on_fail = false;
> }
>
> -void
> -ice_tm_conf_uninit(struct rte_eth_dev *dev)
> +static void free_node(struct ice_tm_node *root)
> {
> - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - struct ice_tm_node *tm_node;
> + uint32_t i;
>
> - /* clear node configuration */
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_queue_node = 0;
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_qgroup_node = 0;
> - if (pf->tm_conf.root) {
> - rte_free(pf->tm_conf.root);
> - pf->tm_conf.root = NULL;
> - }
> + if (root == NULL)
> + return;
> +
> + for (i = 0; i < root->reference_count; i++)
> + free_node(root->children[i]);
> +
> + rte_free(root);
> }
>
> -static inline struct ice_tm_node *
> -ice_tm_node_search(struct rte_eth_dev *dev,
> - uint32_t node_id, enum ice_tm_node_type *node_type)
> +void
> +ice_tm_conf_uninit(struct rte_eth_dev *dev)
> {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> - struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
> - struct ice_tm_node *tm_node;
> -
> - if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_PORT;
> - return pf->tm_conf.root;
> - }
>
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> - if (tm_node->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_QGROUP;
> - return tm_node;
> - }
> - }
> -
> - TAILQ_FOREACH(tm_node, queue_list, node) {
> - if (tm_node->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_QUEUE;
> - return tm_node;
> - }
> - }
> -
> - return NULL;
> + free_node(pf->tm_conf.root);
> + pf->tm_conf.root = NULL;
> }
>
> static int
> @@ -195,11 +159,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t
> node_id,
> return 0;
> }
>
> +static struct ice_tm_node *
> +find_node(struct ice_tm_node *root, uint32_t id) {
> + uint32_t i;
> +
> + if (root == NULL || root->id == id)
> + return root;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *node = find_node(root->children[i], id);
> +
> + if (node)
> + return node;
> + }
> +
> + return NULL;
> +}
> +
> static int
> ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
> int *is_leaf, struct rte_tm_error *error) {
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_tm_node *tm_node;
>
> if (!is_leaf || !error)
> @@ -212,14 +194,14 @@ ice_node_type_get(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* check if the node id exists */
> - tm_node = ice_tm_node_search(dev, node_id, &node_type);
> + tm_node = find_node(pf->tm_conf.root, node_id);
> if (!tm_node) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "no such node";
> return -EINVAL;
> }
>
> - if (node_type == ICE_TM_NODE_TYPE_QUEUE)
> + if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
> *is_leaf = true;
> else
> *is_leaf = false;
> @@ -351,8 +333,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> struct rte_tm_error *error)
> {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> - enum ice_tm_node_type parent_node_type =
> ICE_TM_NODE_TYPE_MAX;
> struct ice_tm_shaper_profile *shaper_profile = NULL;
> struct ice_tm_node *tm_node;
> struct ice_tm_node *parent_node;
> @@ -367,7 +347,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> return ret;
>
> /* check if the node is already existed */
> - if (ice_tm_node_search(dev, node_id, &node_type)) {
> + if (find_node(pf->tm_conf.root, node_id)) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "node id already used";
> return -EINVAL;
> @@ -408,6 +388,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> if (!tm_node)
> return -ENOMEM;
> tm_node->id = node_id;
> + tm_node->level = ICE_TM_NODE_TYPE_PORT;
> tm_node->parent = NULL;
> tm_node->reference_count = 0;
> tm_node->shaper_profile = shaper_profile; @@ -420,29
> +401,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
> }
>
> /* check the parent node */
> - parent_node = ice_tm_node_search(dev, parent_node_id,
> - &parent_node_type);
> + parent_node = find_node(pf->tm_conf.root, parent_node_id);
> if (!parent_node) {
> error->type =
> RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
> error->message = "parent not exist";
> return -EINVAL;
> }
> - if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
> - parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
> + if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
> + parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
> error->type =
> RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
> error->message = "parent is not valid";
> return -EINVAL;
> }
> /* check level */
> if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
> - level_id != (uint32_t)parent_node_type + 1) {
> + level_id != parent_node->level + 1) {
> error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
> error->message = "Wrong level";
> return -EINVAL;
> }
>
> /* check the node number */
> - if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
> + if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
> /* check the queue group number */
> if (parent_node->reference_count >= pf->dev_data-
> >nb_tx_queues) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID; @@ -
> 473,6 +453,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> tm_node->weight = weight;
> tm_node->reference_count = 0;
> tm_node->parent = parent_node;
> + tm_node->level = parent_node->level + 1;
> tm_node->shaper_profile = shaper_profile;
> tm_node->children = (struct ice_tm_node **)
> rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)),
> 0); @@ -490,15 +471,6 @@ ice_tm_node_add(struct rte_eth_dev *dev,
> uint32_t node_id,
>
> rte_memcpy(&tm_node->params, params,
> sizeof(struct rte_tm_node_params));
> - if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
> - TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
> - tm_node, node);
> - pf->tm_conf.nb_qgroup_node++;
> - } else {
> - TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
> - tm_node, node);
> - pf->tm_conf.nb_queue_node++;
> - }
> tm_node->parent->reference_count++;
>
> return 0;
> @@ -509,7 +481,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> struct rte_tm_error *error)
> {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> struct ice_tm_node *tm_node;
>
> if (!error)
> @@ -522,7 +493,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* check if the node id exists */
> - tm_node = ice_tm_node_search(dev, node_id, &node_type);
> + tm_node = find_node(pf->tm_conf.root, node_id);
> if (!tm_node) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "no such node";
> @@ -538,7 +509,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* root node */
> - if (node_type == ICE_TM_NODE_TYPE_PORT) {
> + if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
> rte_free(tm_node);
> pf->tm_conf.root = NULL;
> return 0;
> @@ -546,13 +517,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
>
> /* queue group or queue node */
> tm_node->parent->reference_count--;
> - if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
> - TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
> - pf->tm_conf.nb_qgroup_node--;
> - } else {
> - TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
> - pf->tm_conf.nb_queue_node--;
> - }
> rte_free(tm_node);
>
> return 0;
> @@ -708,9 +672,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev
> *dev) {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
> - struct ice_tm_node *tm_node;
> + struct ice_tm_node *root = pf->tm_conf.root;
> + uint32_t i;
> int ret;
>
> /* reset vsi_node */
> @@ -720,8 +684,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev
> *dev)
> return ret;
> }
>
> - /* reset queue group nodes */
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> + if (root == NULL)
> + return 0;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *tm_node = root->children[i];
> +
> if (tm_node->sched_node == NULL)
> continue;
>
> @@ -774,9 +742,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev, {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> - struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
> - struct ice_tm_node *tm_node;
> + struct ice_tm_node *root;
> struct ice_sched_node *vsi_node = NULL;
> struct ice_sched_node *queue_node;
> struct ice_tx_queue *txq;
> @@ -807,14 +773,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
>
> /* config vsi node */
> vsi_node = ice_get_vsi_node(hw);
> - tm_node = pf->tm_conf.root;
> + root = pf->tm_conf.root;
>
> - ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
> + ret_val = ice_set_node_rate(hw, root, vsi_node);
> if (ret_val) {
> error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> PMD_DRV_LOG(ERR,
> "configure vsi node %u bandwidth failed",
> - tm_node->id);
> + root->id);
> goto add_leaf;
> }
>
> @@ -825,13 +791,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> idx_vsi_child = 0;
> idx_qg = 0;
>
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> + if (root == NULL)
> + goto commit;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *tm_node = root->children[i];
> struct ice_tm_node *tm_child_node;
> struct ice_sched_node *qgroup_sched_node =
> vsi_node->children[idx_vsi_child]->children[idx_qg];
> + uint32_t j;
>
> - for (i = 0; i < tm_node->reference_count; i++) {
> - tm_child_node = tm_node->children[i];
> + ret_val = ice_cfg_hw_node(hw, tm_node,
> qgroup_sched_node);
> + if (ret_val) {
> + error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> + PMD_DRV_LOG(ERR,
> + "configure queue group node %u failed",
> + tm_node->id);
> + goto reset_leaf;
> + }
> +
> + for (j = 0; j < tm_node->reference_count; j++) {
> + tm_child_node = tm_node->children[j];
> qid = tm_child_node->id;
> ret_val = ice_tx_queue_start(dev, qid);
> if (ret_val) {
> @@ -847,25 +827,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> PMD_DRV_LOG(ERR, "get queue %u node
> failed", qid);
> goto reset_leaf;
> }
> - if (queue_node->info.parent_teid ==
> qgroup_sched_node->info.node_teid)
> - continue;
> - ret_val = ice_move_recfg_lan_txq(dev, queue_node,
> qgroup_sched_node, qid);
> + if (queue_node->info.parent_teid !=
> qgroup_sched_node->info.node_teid) {
> + ret_val = ice_move_recfg_lan_txq(dev,
> queue_node,
> +
> qgroup_sched_node, qid);
> + if (ret_val) {
> + error->type =
> RTE_TM_ERROR_TYPE_UNSPECIFIED;
> + PMD_DRV_LOG(ERR, "move
> queue %u failed", qid);
> + goto reset_leaf;
> + }
> + }
> + ret_val = ice_cfg_hw_node(hw, tm_child_node,
> queue_node);
> if (ret_val) {
> error->type =
> RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR, "move queue %u failed",
> qid);
> + PMD_DRV_LOG(ERR,
> + "configure queue group node %u
> failed",
> + tm_node->id);
> goto reset_leaf;
> }
> }
>
> - ret_val = ice_cfg_hw_node(hw, tm_node,
> qgroup_sched_node);
> - if (ret_val) {
> - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR,
> - "configure queue group node %u failed",
> - tm_node->id);
> - goto reset_leaf;
> - }
> -
> idx_qg++;
> if (idx_qg >= nb_qg) {
> idx_qg = 0;
> @@ -878,23 +858,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> }
> }
>
> - /* config queue nodes */
> - TAILQ_FOREACH(tm_node, queue_list, node) {
> - qid = tm_node->id;
> - txq = dev->data->tx_queues[qid];
> - q_teid = txq->q_teid;
> - queue_node = ice_sched_get_node(hw->port_info, q_teid);
> -
> - ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
> - if (ret_val) {
> - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR,
> - "configure queue group node %u failed",
> - tm_node->id);
> - goto reset_leaf;
> - }
> - }
> -
> +commit:
> pf->tm_conf.committed = true;
> pf->tm_conf.clear_on_fail = clear_on_fail;
>
> --
> 2.31.1
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
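The refactor quoted above drops the per-level TAILQ bookkeeping and instead walks the tree that the parent/children pointers already describe. A condensed, standalone sketch of that lookup pattern (type and function names are simplified from the patch, not the driver's actual API) looks like this:

#include <stddef.h>
#include <stdint.h>

struct tm_node {
    uint32_t id;
    uint32_t level;              /* derived as parent->level + 1 */
    uint32_t reference_count;    /* number of populated children */
    struct tm_node **children;
};

/* Depth-first search by node id: no per-level list is needed because
 * every node is reachable from the root.
 */
static struct tm_node *
find_node(struct tm_node *root, uint32_t id)
{
    uint32_t i;

    if (root == NULL || root->id == id)
        return root;

    for (i = 0; i < root->reference_count; i++) {
        struct tm_node *node = find_node(root->children[i], id);

        if (node != NULL)
            return node;
    }

    return NULL;
}

The node's level, which the old code recovered from whichever list the node lived on, is now stored in the node itself, so callers such as ice_node_type_get() can test tm_node->level instead of relying on a separate node_type out-parameter.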
* RE: [PATCH v4 2/3] net/ice: refactor tm config data structure
2024-01-08 20:21 ` [PATCH v4 2/3] net/ice: refactor tm config data structure Qi Zhang
@ 2024-01-09 4:51 ` Wu, Wenjun1
0 siblings, 0 replies; 21+ messages in thread
From: Wu, Wenjun1 @ 2024-01-09 4:51 UTC (permalink / raw)
To: Zhang, Qi Z, Yang, Qiming; +Cc: dev
> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Tuesday, January 9, 2024 4:22 AM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v4 2/3] net/ice: refactor tm config data structure
>
> Simplified struct ice_tm_conf by removing the per-level node lists.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
> drivers/net/ice/ice_ethdev.h | 5 +-
> drivers/net/ice/ice_tm.c | 248 ++++++++++++++++-------------------
> 2 files changed, 113 insertions(+), 140 deletions(-)
>
> diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index
> ae22c29ffc..008a7a23b9 100644
> --- a/drivers/net/ice/ice_ethdev.h
> +++ b/drivers/net/ice/ice_ethdev.h
> @@ -472,6 +472,7 @@ struct ice_tm_node {
> uint32_t id;
> uint32_t priority;
> uint32_t weight;
> + uint32_t level;
> uint32_t reference_count;
> struct ice_tm_node *parent;
> struct ice_tm_node **children;
> @@ -492,10 +493,6 @@ enum ice_tm_node_type { struct ice_tm_conf {
> struct ice_shaper_profile_list shaper_profile_list;
> struct ice_tm_node *root; /* root node - port */
> - struct ice_tm_node_list qgroup_list; /* node list for all the queue
> groups */
> - struct ice_tm_node_list queue_list; /* node list for all the queues */
> - uint32_t nb_qgroup_node;
> - uint32_t nb_queue_node;
> bool committed;
> bool clear_on_fail;
> };
> diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c index
> d67783c77e..fbab0b8808 100644
> --- a/drivers/net/ice/ice_tm.c
> +++ b/drivers/net/ice/ice_tm.c
> @@ -6,6 +6,9 @@
> #include "ice_ethdev.h"
> #include "ice_rxtx.h"
>
> +#define MAX_CHILDREN_PER_SCHED_NODE 8
> +#define MAX_CHILDREN_PER_TM_NODE 256
> +
> static int ice_hierarchy_commit(struct rte_eth_dev *dev,
> int clear_on_fail,
> __rte_unused struct rte_tm_error *error);
> @@ -43,20 +46,28 @@ ice_tm_conf_init(struct rte_eth_dev *dev)
> /* initialize node configuration */
> TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
> pf->tm_conf.root = NULL;
> - TAILQ_INIT(&pf->tm_conf.qgroup_list);
> - TAILQ_INIT(&pf->tm_conf.queue_list);
> - pf->tm_conf.nb_qgroup_node = 0;
> - pf->tm_conf.nb_queue_node = 0;
> pf->tm_conf.committed = false;
> pf->tm_conf.clear_on_fail = false;
> }
>
> +static void free_node(struct ice_tm_node *root) {
> + uint32_t i;
> +
> + if (root == NULL)
> + return;
> +
> + for (i = 0; i < root->reference_count; i++)
> + free_node(root->children[i]);
> +
> + rte_free(root);
> +}
> +
> void
> ice_tm_conf_uninit(struct rte_eth_dev *dev) {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_tm_shaper_profile *shaper_profile;
> - struct ice_tm_node *tm_node;
>
> /* clear profile */
> while ((shaper_profile = TAILQ_FIRST(&pf-
> >tm_conf.shaper_profile_list))) { @@ -64,52 +75,8 @@
> ice_tm_conf_uninit(struct rte_eth_dev *dev)
> rte_free(shaper_profile);
> }
>
> - /* clear node configuration */
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_queue_node = 0;
> - while ((tm_node = TAILQ_FIRST(&pf->tm_conf.qgroup_list))) {
> - TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
> - rte_free(tm_node);
> - }
> - pf->tm_conf.nb_qgroup_node = 0;
> - if (pf->tm_conf.root) {
> - rte_free(pf->tm_conf.root);
> - pf->tm_conf.root = NULL;
> - }
> -}
> -
> -static inline struct ice_tm_node *
> -ice_tm_node_search(struct rte_eth_dev *dev,
> - uint32_t node_id, enum ice_tm_node_type *node_type)
> -{
> - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> - struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
> - struct ice_tm_node *tm_node;
> -
> - if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_PORT;
> - return pf->tm_conf.root;
> - }
> -
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> - if (tm_node->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_QGROUP;
> - return tm_node;
> - }
> - }
> -
> - TAILQ_FOREACH(tm_node, queue_list, node) {
> - if (tm_node->id == node_id) {
> - *node_type = ICE_TM_NODE_TYPE_QUEUE;
> - return tm_node;
> - }
> - }
> -
> - return NULL;
> + free_node(pf->tm_conf.root);
> + pf->tm_conf.root = NULL;
> }
>
> static int
> @@ -202,11 +169,29 @@ ice_node_param_check(struct ice_pf *pf, uint32_t
> node_id,
> return 0;
> }
>
> +static struct ice_tm_node *
> +find_node(struct ice_tm_node *root, uint32_t id) {
> + uint32_t i;
> +
> + if (root == NULL || root->id == id)
> + return root;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *node = find_node(root->children[i], id);
> +
> + if (node)
> + return node;
> + }
> +
> + return NULL;
> +}
> +
> static int
> ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
> int *is_leaf, struct rte_tm_error *error) {
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> + struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_tm_node *tm_node;
>
> if (!is_leaf || !error)
> @@ -219,14 +204,14 @@ ice_node_type_get(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* check if the node id exists */
> - tm_node = ice_tm_node_search(dev, node_id, &node_type);
> + tm_node = find_node(pf->tm_conf.root, node_id);
> if (!tm_node) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "no such node";
> return -EINVAL;
> }
>
> - if (node_type == ICE_TM_NODE_TYPE_QUEUE)
> + if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE)
> *is_leaf = true;
> else
> *is_leaf = false;
> @@ -348,8 +333,6 @@ ice_shaper_profile_del(struct rte_eth_dev *dev,
> return 0;
> }
>
> -#define MAX_QUEUE_PER_GROUP 8
> -
> static int
> ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
> uint32_t parent_node_id, uint32_t priority, @@ -358,8 +341,6
> @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
> struct rte_tm_error *error)
> {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> - enum ice_tm_node_type parent_node_type =
> ICE_TM_NODE_TYPE_MAX;
> struct ice_tm_shaper_profile *shaper_profile = NULL;
> struct ice_tm_node *tm_node;
> struct ice_tm_node *parent_node;
> @@ -374,7 +355,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> return ret;
>
> /* check if the node is already existed */
> - if (ice_tm_node_search(dev, node_id, &node_type)) {
> + if (find_node(pf->tm_conf.root, node_id)) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "node id already used";
> return -EINVAL;
> @@ -409,17 +390,19 @@ ice_tm_node_add(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* add the root node */
> - tm_node = rte_zmalloc("ice_tm_node",
> - sizeof(struct ice_tm_node),
> + tm_node = rte_zmalloc(NULL,
> + sizeof(struct ice_tm_node) +
> + sizeof(struct ice_tm_node *) *
> MAX_CHILDREN_PER_TM_NODE,
> 0);
> if (!tm_node)
> return -ENOMEM;
> tm_node->id = node_id;
> + tm_node->level = ICE_TM_NODE_TYPE_PORT;
> tm_node->parent = NULL;
> tm_node->reference_count = 0;
> tm_node->shaper_profile = shaper_profile;
> - tm_node->children = (struct ice_tm_node **)
> - rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)),
> 0);
> + tm_node->children =
> + (void *)((uint8_t *)tm_node + sizeof(struct
> ice_tm_node));
> rte_memcpy(&tm_node->params, params,
> sizeof(struct rte_tm_node_params));
> pf->tm_conf.root = tm_node;
> @@ -427,29 +410,28 @@ ice_tm_node_add(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* check the parent node */
> - parent_node = ice_tm_node_search(dev, parent_node_id,
> - &parent_node_type);
> + parent_node = find_node(pf->tm_conf.root, parent_node_id);
> if (!parent_node) {
> error->type =
> RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
> error->message = "parent not exist";
> return -EINVAL;
> }
> - if (parent_node_type != ICE_TM_NODE_TYPE_PORT &&
> - parent_node_type != ICE_TM_NODE_TYPE_QGROUP) {
> + if (parent_node->level != ICE_TM_NODE_TYPE_PORT &&
> + parent_node->level != ICE_TM_NODE_TYPE_QGROUP) {
> error->type =
> RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
> error->message = "parent is not valid";
> return -EINVAL;
> }
> /* check level */
> if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
> - level_id != (uint32_t)parent_node_type + 1) {
> + level_id != parent_node->level + 1) {
> error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
> error->message = "Wrong level";
> return -EINVAL;
> }
>
> /* check the node number */
> - if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
> + if (parent_node->level == ICE_TM_NODE_TYPE_PORT) {
> /* check the queue group number */
> if (parent_node->reference_count >= pf->dev_data-
> >nb_tx_queues) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID; @@ -
> 458,7 +440,8 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> }
> } else {
> /* check the queue number */
> - if (parent_node->reference_count >=
> MAX_QUEUE_PER_GROUP) {
> + if (parent_node->reference_count >=
> + MAX_CHILDREN_PER_SCHED_NODE) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "too many queues";
> return -EINVAL;
> @@ -470,8 +453,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> }
> }
>
> - tm_node = rte_zmalloc("ice_tm_node",
> - sizeof(struct ice_tm_node),
> + tm_node = rte_zmalloc(NULL,
> + sizeof(struct ice_tm_node) +
> + sizeof(struct ice_tm_node *) *
> MAX_CHILDREN_PER_TM_NODE,
> 0);
> if (!tm_node)
> return -ENOMEM;
> @@ -480,9 +464,10 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t
> node_id,
> tm_node->weight = weight;
> tm_node->reference_count = 0;
> tm_node->parent = parent_node;
> + tm_node->level = parent_node->level + 1;
> tm_node->shaper_profile = shaper_profile;
> - tm_node->children = (struct ice_tm_node **)
> - rte_calloc(NULL, 256, (sizeof(struct ice_tm_node *)),
> 0);
> + tm_node->children =
> + (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node));
> tm_node->parent->children[tm_node->parent->reference_count] =
> tm_node;
>
> if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE
> && @@ -497,15 +482,6 @@ ice_tm_node_add(struct rte_eth_dev *dev,
> uint32_t node_id,
>
> rte_memcpy(&tm_node->params, params,
> sizeof(struct rte_tm_node_params));
> - if (parent_node_type == ICE_TM_NODE_TYPE_PORT) {
> - TAILQ_INSERT_TAIL(&pf->tm_conf.qgroup_list,
> - tm_node, node);
> - pf->tm_conf.nb_qgroup_node++;
> - } else {
> - TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
> - tm_node, node);
> - pf->tm_conf.nb_queue_node++;
> - }
> tm_node->parent->reference_count++;
>
> return 0;
> @@ -516,8 +492,8 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> struct rte_tm_error *error)
> {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> - enum ice_tm_node_type node_type = ICE_TM_NODE_TYPE_MAX;
> struct ice_tm_node *tm_node;
> + uint32_t i, j;
>
> if (!error)
> return -EINVAL;
> @@ -529,7 +505,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* check if the node id exists */
> - tm_node = ice_tm_node_search(dev, node_id, &node_type);
> + tm_node = find_node(pf->tm_conf.root, node_id);
> if (!tm_node) {
> error->type = RTE_TM_ERROR_TYPE_NODE_ID;
> error->message = "no such node";
> @@ -545,21 +521,21 @@ ice_tm_node_delete(struct rte_eth_dev *dev,
> uint32_t node_id,
> }
>
> /* root node */
> - if (node_type == ICE_TM_NODE_TYPE_PORT) {
> + if (tm_node->level == ICE_TM_NODE_TYPE_PORT) {
> rte_free(tm_node);
> pf->tm_conf.root = NULL;
> return 0;
> }
>
> /* queue group or queue node */
> + for (i = 0; i < tm_node->parent->reference_count; i++)
> + if (tm_node->parent->children[i] == tm_node)
> + break;
> +
> + for (j = i ; j < tm_node->parent->reference_count - 1; j++)
> + tm_node->parent->children[j] = tm_node->parent->children[j
> + 1];
> +
> tm_node->parent->reference_count--;
> - if (node_type == ICE_TM_NODE_TYPE_QGROUP) {
> - TAILQ_REMOVE(&pf->tm_conf.qgroup_list, tm_node, node);
> - pf->tm_conf.nb_qgroup_node--;
> - } else {
> - TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
> - pf->tm_conf.nb_queue_node--;
> - }
> rte_free(tm_node);
>
> return 0;
> @@ -720,9 +696,9 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev
> *dev) {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> struct ice_sched_node *vsi_node = ice_get_vsi_node(hw);
> - struct ice_tm_node *tm_node;
> + struct ice_tm_node *root = pf->tm_conf.root;
> + uint32_t i;
> int ret;
>
> /* reset vsi_node */
> @@ -732,8 +708,12 @@ static int ice_reset_noleaf_nodes(struct rte_eth_dev
> *dev)
> return ret;
> }
>
> - /* reset queue group nodes */
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> + if (root == NULL)
> + return 0;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *tm_node = root->children[i];
> +
> if (tm_node->sched_node == NULL)
> continue;
>
> @@ -786,9 +766,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev, {
> struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
> struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> >dev_private);
> - struct ice_tm_node_list *qgroup_list = &pf->tm_conf.qgroup_list;
> - struct ice_tm_node_list *queue_list = &pf->tm_conf.queue_list;
> - struct ice_tm_node *tm_node;
> + struct ice_tm_node *root;
> struct ice_sched_node *vsi_node = NULL;
> struct ice_sched_node *queue_node;
> struct ice_tx_queue *txq;
> @@ -819,14 +797,14 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
>
> /* config vsi node */
> vsi_node = ice_get_vsi_node(hw);
> - tm_node = pf->tm_conf.root;
> + root = pf->tm_conf.root;
>
> - ret_val = ice_set_node_rate(hw, tm_node, vsi_node);
> + ret_val = ice_set_node_rate(hw, root, vsi_node);
> if (ret_val) {
> error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> PMD_DRV_LOG(ERR,
> "configure vsi node %u bandwidth failed",
> - tm_node->id);
> + root->id);
> goto add_leaf;
> }
>
> @@ -837,13 +815,27 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> idx_vsi_child = 0;
> idx_qg = 0;
>
> - TAILQ_FOREACH(tm_node, qgroup_list, node) {
> + if (root == NULL)
> + goto commit;
> +
> + for (i = 0; i < root->reference_count; i++) {
> + struct ice_tm_node *tm_node = root->children[i];
> struct ice_tm_node *tm_child_node;
> struct ice_sched_node *qgroup_sched_node =
> vsi_node->children[idx_vsi_child]->children[idx_qg];
> + uint32_t j;
>
> - for (i = 0; i < tm_node->reference_count; i++) {
> - tm_child_node = tm_node->children[i];
> + ret_val = ice_cfg_hw_node(hw, tm_node,
> qgroup_sched_node);
> + if (ret_val) {
> + error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> + PMD_DRV_LOG(ERR,
> + "configure queue group node %u failed",
> + tm_node->id);
> + goto reset_leaf;
> + }
> +
> + for (j = 0; j < tm_node->reference_count; j++) {
> + tm_child_node = tm_node->children[j];
> qid = tm_child_node->id;
> ret_val = ice_tx_queue_start(dev, qid);
> if (ret_val) {
> @@ -859,25 +851,25 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> PMD_DRV_LOG(ERR, "get queue %u node
> failed", qid);
> goto reset_leaf;
> }
> - if (queue_node->info.parent_teid ==
> qgroup_sched_node->info.node_teid)
> - continue;
> - ret_val = ice_move_recfg_lan_txq(dev, queue_node,
> qgroup_sched_node, qid);
> + if (queue_node->info.parent_teid !=
> qgroup_sched_node->info.node_teid) {
> + ret_val = ice_move_recfg_lan_txq(dev,
> queue_node,
> +
> qgroup_sched_node, qid);
> + if (ret_val) {
> + error->type =
> RTE_TM_ERROR_TYPE_UNSPECIFIED;
> + PMD_DRV_LOG(ERR, "move
> queue %u failed", qid);
> + goto reset_leaf;
> + }
> + }
> + ret_val = ice_cfg_hw_node(hw, tm_child_node,
> queue_node);
> if (ret_val) {
> error->type =
> RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR, "move queue %u failed",
> qid);
> + PMD_DRV_LOG(ERR,
> + "configure queue group node %u
> failed",
> + tm_node->id);
> goto reset_leaf;
> }
> }
>
> - ret_val = ice_cfg_hw_node(hw, tm_node,
> qgroup_sched_node);
> - if (ret_val) {
> - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR,
> - "configure queue group node %u failed",
> - tm_node->id);
> - goto reset_leaf;
> - }
> -
> idx_qg++;
> if (idx_qg >= nb_qg) {
> idx_qg = 0;
> @@ -890,23 +882,7 @@ int ice_do_hierarchy_commit(struct rte_eth_dev
> *dev,
> }
> }
>
> - /* config queue nodes */
> - TAILQ_FOREACH(tm_node, queue_list, node) {
> - qid = tm_node->id;
> - txq = dev->data->tx_queues[qid];
> - q_teid = txq->q_teid;
> - queue_node = ice_sched_get_node(hw->port_info, q_teid);
> -
> - ret_val = ice_cfg_hw_node(hw, tm_node, queue_node);
> - if (ret_val) {
> - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
> - PMD_DRV_LOG(ERR,
> - "configure queue group node %u failed",
> - tm_node->id);
> - goto reset_leaf;
> - }
> - }
> -
> +commit:
> pf->tm_conf.committed = true;
> pf->tm_conf.clear_on_fail = clear_on_fail;
>
> --
> 2.31.1
Acked-by: Wenjun Wu <wenjun1.wu@intel.com>
^ permalink raw reply [flat|nested] 21+ messages in thread
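A small but notable change in the v4 revision above is that each node and its array of child pointers now come from a single rte_zmalloc() instead of a separate rte_calloc(), so one rte_free() releases both. A minimal sketch of that co-allocation idiom (the constant and names are illustrative, not the driver's) is:

#include <stdint.h>
#include <rte_malloc.h>

#define MAX_CHILDREN 256

struct tm_node {
    uint32_t id;
    uint32_t reference_count;
    struct tm_node **children;   /* points into the same allocation */
};

/* Allocate the node and its children-pointer array together; freeing
 * the node frees the array as well.
 */
static struct tm_node *
tm_node_alloc(uint32_t id)
{
    struct tm_node *node;

    node = rte_zmalloc(NULL,
                       sizeof(*node) +
                       sizeof(struct tm_node *) * MAX_CHILDREN, 0);
    if (node == NULL)
        return NULL;

    node->id = id;
    node->children = (struct tm_node **)(node + 1);

    return node;
}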
* RE: [PATCH v4 0/3] simplified to 3 layer Tx scheduler
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
` (2 preceding siblings ...)
2024-01-08 20:21 ` [PATCH v4 3/3] doc: update ice document for qos Qi Zhang
@ 2024-01-09 5:30 ` Zhang, Qi Z
3 siblings, 0 replies; 21+ messages in thread
From: Zhang, Qi Z @ 2024-01-09 5:30 UTC (permalink / raw)
To: Yang, Qiming, Wu, Wenjun1; +Cc: dev
> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Tuesday, January 9, 2024 4:22 AM
> To: Yang, Qiming <qiming.yang@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v4 0/3] simplified to 3 layer Tx scheduler
>
> Remove dummy layers, code refactor, complete document
>
> v4:
> - rebase.
>
> v3:
> - fix tm_node memory free.
> - fix corruption when sibling node deletion is not in reversed order.
>
> v2:
> - fix typos.
>
> Qi Zhang (3):
> net/ice: hide port and TC layer in Tx sched tree
> net/ice: refactor tm config data structure
> doc: update ice document for qos
>
> doc/guides/nics/ice.rst | 19 +++
> drivers/net/ice/ice_ethdev.h | 12 +-
> drivers/net/ice/ice_tm.c | 317 +++++++++++++----------------------
> 3 files changed, 134 insertions(+), 214 deletions(-)
>
> --
> 2.31.1
Applied to dpdk-next-net-intel.
Thanks
Qi
^ permalink raw reply [flat|nested] 21+ messages in thread
Thread overview: 21+ messages
2024-01-05 13:59 [PATCH 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 13:59 ` [PATCH 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-05 13:59 ` [PATCH 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-05 13:59 ` [PATCH 3/3] doc: update ice document for qos Qi Zhang
2024-01-05 14:11 ` [PATCH v2 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 8:10 ` Wu, Wenjun1
2024-01-05 14:11 ` [PATCH v2 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-05 14:11 ` [PATCH v2 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-05 8:37 ` Zhang, Qi Z
2024-01-09 2:50 ` Wu, Wenjun1
2024-01-05 14:11 ` [PATCH v2 3/3] doc: update ice document for qos Qi Zhang
2024-01-05 21:12 ` [PATCH v3 0/3] net/ice: simplified to 3 layer Tx scheduler Qi Zhang
2024-01-05 21:12 ` [PATCH v3 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-05 21:12 ` [PATCH v3 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-05 21:12 ` [PATCH v3 3/3] doc: update ice document for qos Qi Zhang
2024-01-08 20:21 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Qi Zhang
2024-01-08 20:21 ` [PATCH v4 1/3] net/ice: hide port and TC layer in Tx sched tree Qi Zhang
2024-01-08 20:21 ` [PATCH v4 2/3] net/ice: refactor tm config data structure Qi Zhang
2024-01-09 4:51 ` Wu, Wenjun1
2024-01-08 20:21 ` [PATCH v4 3/3] doc: update ice document for qos Qi Zhang
2024-01-09 5:30 ` [PATCH v4 0/3] simplified to 3 layer Tx scheduler Zhang, Qi Z