* [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Implement the traffic manager (TM) APIs on i40e and ixgbe.
This patch set is based on the patch set
"ethdev: abstraction layer for QoS traffic management":
http://dpdk.org/dev/patchwork/patch/24411/
http://dpdk.org/dev/patchwork/patch/24412/
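For orientation, below is a rough application-side sketch of the call sequence these
drivers serve: add shaper profiles, build the two-level port/TC tree used by i40e
(ixgbe additionally hangs queue nodes below the TCs), then commit. It is illustration
only: the rte_tm_* prototypes and the <rte_tm.h> header are assumed from the
abstraction-layer patches above (no level-id argument, matching the driver callbacks
in this series), and the node ids and rates are arbitrary placeholders.

#include <rte_tm.h>

#define APP_ROOT_NODE_ID 100	/* arbitrary application-chosen id */

static int
app_setup_tm(uint8_t port_id, uint32_t nb_tc)
{
	/* only the peak rate (bytes per second) is honoured by these PMDs */
	struct rte_tm_shaper_params port_sp = {
		.peak = { .rate = 1250000000ULL },	/* 10 Gbit/s */
	};
	/* i40e refuses port and per-TC limits at the same time, so the
	 * TC profile is left unlimited here */
	struct rte_tm_shaper_params tc_sp = { .peak = { .rate = 0 } };
	struct rte_tm_node_params root_np = {
		.shaper_profile_id = 0,
	};
	struct rte_tm_node_params tc_np = {
		.shaper_profile_id = 1,
		/* leaf nodes must explicitly opt out of WRED */
		.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE,
	};
	struct rte_tm_error err;
	uint32_t tc;
	int ret;

	ret = rte_tm_shaper_profile_add(port_id, 0, &port_sp, &err);
	if (ret)
		return ret;
	ret = rte_tm_shaper_profile_add(port_id, 1, &tc_sp, &err);
	if (ret)
		return ret;

	/* root (port) node: no parent; priority/weight must be 0 */
	ret = rte_tm_node_add(port_id, APP_ROOT_NODE_ID, RTE_TM_NODE_ID_NULL,
			      0, 0, &root_np, &err);
	if (ret)
		return ret;

	/* one child node per enabled TC */
	for (tc = 0; tc < nb_tc; tc++) {
		ret = rte_tm_node_add(port_id, tc, APP_ROOT_NODE_ID,
				      0, 0, &tc_np, &err);
		if (ret)
			return ret;
	}

	/* program the hierarchy into hardware, clearing it on failure */
	return rte_tm_hierarchy_commit(port_id, 1, &err);
}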
Wenzhuo Lu (20):
net/i40e: support getting TM ops
net/i40e: support getting TM capability
net/i40e: support adding TM shaper profile
net/i40e: support deleting TM shaper profile
net/i40e: support adding TM node
net/i40e: support deleting TM node
net/i40e: support getting TM node type
net/i40e: support getting TM level capability
net/i40e: support getting TM node capability
net/i40e: support committing TM hierarchy
net/ixgbe: support getting TM ops
net/ixgbe: support getting TM capability
net/ixgbe: support adding TM shaper profile
net/ixgbe: support deleting TM shaper profile
net/ixgbe: support adding TM node
net/ixgbe: support deleting TM node
net/ixgbe: support getting TM node type
net/ixgbe: support getting TM level capability
net/ixgbe: support getting TM node capability
net/ixgbe: support committing TM hierarchy
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 7 +
drivers/net/i40e/i40e_ethdev.h | 57 +++
drivers/net/i40e/i40e_tm.c | 815 +++++++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 -
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 15 +-
drivers/net/ixgbe/ixgbe_ethdev.h | 60 +++
drivers/net/ixgbe/ixgbe_tm.c | 850 +++++++++++++++++++++++++++++++++++++++
9 files changed, 1801 insertions(+), 14 deletions(-)
create mode 100644 drivers/net/i40e/i40e_tm.c
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
--
1.9.3
* [dpdk-dev] [PATCH 01/20] net/i40e: support getting TM ops
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
To support the QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops, with no callbacks implemented yet.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
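The only functional change here is the new .tm_ops_get ethdev callback; the generic
rte_tm layer is expected to reach the driver through it roughly as sketched below.
The lookup helper is hypothetical, only tm_ops_get, rte_tm_ops and i40e_tm_ops come
from this patch and the referenced ethdev series.

#include <rte_ethdev.h>
#include <rte_tm_driver.h>

/* hypothetical helper, not part of this patch */
static const struct rte_tm_ops *
tm_ops_of(struct rte_eth_dev *dev)
{
	const struct rte_tm_ops *ops = NULL;

	if (!dev->dev_ops->tm_ops_get ||
	    dev->dev_ops->tm_ops_get(dev, &ops) != 0)
		return NULL;

	return ops;	/* &i40e_tm_ops for an i40e port */
}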
---
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 1 +
drivers/net/i40e/i40e_ethdev.h | 2 ++
drivers/net/i40e/i40e_tm.c | 51 ++++++++++++++++++++++++++++++++++++++++++
4 files changed, 55 insertions(+)
create mode 100644 drivers/net/i40e/i40e_tm.c
diff --git a/drivers/net/i40e/Makefile b/drivers/net/i40e/Makefile
index 56f210d..33be5f9 100644
--- a/drivers/net/i40e/Makefile
+++ b/drivers/net/i40e/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_flow.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += rte_pmd_i40e.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_tm.c
# vector PMD driver needs SSE4.1 support
ifeq ($(findstring RTE_MACHINE_CPUFLAG_SSE4_1,$(CFLAGS)),)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 4c49673..fcc958d 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -515,6 +515,7 @@ static int i40e_sw_tunnel_filter_insert(struct i40e_pf *pf,
.get_eeprom = i40e_get_eeprom,
.mac_addr_set = i40e_set_default_mac_addr,
.mtu_set = i40e_dev_mtu_set,
+ .tm_ops_get = i40e_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2ff8282..e5301ee 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -39,6 +39,7 @@
#include <rte_kvargs.h>
#include <rte_hash.h>
#include <rte_flow_driver.h>
+#include <rte_tm_driver.h>
#define I40E_VLAN_TAG_SIZE 4
@@ -892,6 +893,7 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
struct i40e_macvlan_filter *filter,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
+int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
new file mode 100644
index 0000000..2f4c866
--- /dev/null
+++ b/drivers/net/i40e/i40e_tm.c
@@ -0,0 +1,51 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "base/i40e_prototype.h"
+#include "i40e_ethdev.h"
+
+const struct rte_tm_ops i40e_tm_ops = {
+ NULL,
+};
+
+int
+i40e_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &i40e_tm_ops;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 02/20] net/i40e: support getting TM capability
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
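A hedged usage sketch (prototype assumed from the referenced rte_tm proposal). For
i40e the interesting outputs are the two-level port/TC topology and the private
shaper ceiling, which is simply the 40 Gbit/s line rate expressed in bytes per second:

#include <inttypes.h>
#include <stdio.h>
#include <rte_tm.h>

static void
app_show_tm_caps(uint8_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error err;

	if (rte_tm_capabilities_get(port_id, &cap, &err) != 0)
		return;

	/* i40e: n_levels_max == 2 (port + TC), n_nodes_max == enabled
	 * TCs + 1, shaper_private_rate_max == 5000000000 B/s (40 Gbit/s) */
	printf("levels=%" PRIu32 " nodes=%" PRIu32 " max_rate=%" PRIu64 " B/s\n",
	       cap.n_levels_max, cap.n_nodes_max,
	       cap.shaper_private_rate_max);
}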
---
drivers/net/i40e/i40e_tm.c | 82 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 81 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 2f4c866..86a2f74 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -34,8 +34,12 @@
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
+static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops i40e_tm_ops = {
- NULL,
+ .capabilities_get = i40e_tm_capabilities_get,
};
int
@@ -49,3 +53,79 @@
return 0;
}
+
+static inline uint16_t
+i40e_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ uint16_t sum = 0;
+ int i;
+
+ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+ if (main_vsi->enabled_tc & BIT_ULL(i))
+ sum++;
+ }
+
+ return sum;
+}
+
+static int
+i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint16_t tc_nb = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ /* only support port + TCs */
+ tc_nb = i40e_tc_nb_get(dev);
+ cap->n_nodes_max = tc_nb + 1;
+ cap->n_levels_max = 2;
+ cap->non_leaf_nodes_identical = 0;
+ cap->leaf_nodes_identical = 0;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = tc_nb;
+ cap->sched_sp_n_priorities_max = 0;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ cap->sched_wfq_weight_max = 0;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+
+ /**
+ * not supported parameters are 0, below,
+ * shaper_pkt_length_adjust_min
+ * shaper_pkt_length_adjust_max
+ * cman_wred_context_n_max
+ * cman_wred_context_private_n_max
+ * cman_wred_context_shared_n_max
+ * cman_wred_context_shared_n_nodes_per_context_max
+ * cman_wred_context_shared_n_contexts_per_node_max
+ * mark_vlan_dei_supported
+ * mark_ip_ecn_tcp_supported
+ * mark_ip_ecn_sctp_supported
+ * mark_ip_dscp_supported
+ * stats_mask
+ */
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 03/20] net/i40e: support adding TM shaper profile
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_shaper_profile_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
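A hedged sketch of the only profile shape this callback accepts: a peak rate in
bytes per second, with the committed rate/size, peak bucket size and packet length
adjust all left at zero (anything else is rejected with -EINVAL). The
rte_tm_shaper_profile_add() prototype is assumed from the referenced proposal:

#include <rte_tm.h>

static int
app_add_rate_profile(uint8_t port_id, uint32_t profile_id,
		     uint64_t bytes_per_sec)
{
	struct rte_tm_shaper_params sp = {
		/* e.g. 2500000000 bytes/s for a 20 Gbit/s cap */
		.peak = { .rate = bytes_per_sec },
	};
	struct rte_tm_error err;

	return rte_tm_shaper_profile_add(port_id, profile_id, &sp, &err);
}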
---
drivers/net/i40e/i40e_ethdev.c | 6 +++
drivers/net/i40e/i40e_ethdev.h | 18 +++++++
drivers/net/i40e/i40e_tm.c | 107 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 131 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index fcc958d..721d192 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1299,6 +1299,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize Traffic Manager configuration */
+ i40e_tm_conf_init(dev);
+
ret = i40e_init_ethtype_filter_list(dev);
if (ret < 0)
goto err_init_ethtype_filter_list;
@@ -1462,6 +1465,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
rte_free(p_flow);
}
+ /* Remove all Traffic Manager configuration */
+ i40e_tm_conf_uninit(dev);
+
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index e5301ee..da73d64 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -626,6 +626,21 @@ struct rte_flow {
TAILQ_HEAD(i40e_flow_list, rte_flow);
+/* Struct to store Traffic Manager shaper profile. */
+struct i40e_tm_shaper_profile {
+ TAILQ_ENTRY(i40e_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+
+/* Struct to store all the Traffic Manager configuration. */
+struct i40e_tm_conf {
+ struct i40e_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data specific for PF instance.
*/
@@ -686,6 +701,7 @@ struct i40e_pf {
struct i40e_flow_list flow_list;
bool mpls_replace_flag; /* 1 - MPLS filter replace is done */
bool qinq_replace_flag; /* QINQ filter replace is done */
+ struct i40e_tm_conf tm_conf;
};
enum pending_msg {
@@ -894,6 +910,8 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void i40e_tm_conf_init(struct rte_eth_dev *dev);
+void i40e_tm_conf_uninit(struct rte_eth_dev *dev);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 86a2f74..a71ff45 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -31,15 +31,22 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
+ .shaper_profile_add = i40e_shaper_profile_add,
};
int
@@ -54,6 +61,30 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+i40e_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+}
+
+void
+i40e_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&pf->tm_conf.shaper_profile_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint16_t
i40e_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -129,3 +160,79 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_shaper_profile *
+i40e_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("i40e_tm_shaper_profile",
+ sizeof(struct i40e_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 04/20] net/i40e: support deleting TM shaper profile
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_shaper_profile_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
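A short, hedged sketch of the guard added here: a profile that is still referenced
by a node is refused with "profile in use", so the nodes have to be deleted (or
moved to another profile) first. The application-level prototype is assumed from
the referenced proposal.

#include <stdio.h>
#include <rte_tm.h>

static void
app_drop_profile(uint8_t port_id, uint32_t profile_id)
{
	struct rte_tm_error err;

	if (rte_tm_shaper_profile_delete(port_id, profile_id, &err) != 0)
		printf("delete refused: %s\n",
		       err.message ? err.message : "unknown");
}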
---
drivers/net/i40e/i40e_tm.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index a71ff45..233adcf 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -43,10 +43,14 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
+ .shaper_profile_delete = i40e_shaper_profile_del,
};
int
@@ -236,3 +240,35 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 05/20] net/i40e: support adding TM node
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_node_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
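Two parameter checks in this callback are easy to trip over; a hedged sketch under
the same assumed application-level prototypes: priority and weight must be 0
(neither strict priority nor WFQ is offered), and a TC (leaf) node has to set the
WRED profile id to RTE_TM_WRED_PROFILE_ID_NONE explicitly, because the callback
compares against that constant rather than treating a zero-filled leaf struct as
"no WRED".

#include <rte_tm.h>

static int
app_add_tc_node(uint8_t port_id, uint32_t node_id, uint32_t root_node_id)
{
	struct rte_tm_node_params tc_np = {
		.shaper_profile_id = 1,		/* must already exist */
		/* state "no WRED" explicitly; see the check above */
		.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE,
	};
	struct rte_tm_error err;

	/* priority and weight must both be 0 for this driver */
	return rte_tm_node_add(port_id, node_id, root_node_id,
			       0, 0, &tc_np, &err);
}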
---
drivers/net/i40e/i40e_ethdev.h | 28 ++++++
drivers/net/i40e/i40e_tm.c | 223 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 251 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index da73d64..34ba3e5 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -636,9 +636,37 @@ struct i40e_tm_shaper_profile {
TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum i40e_tm_node_type {
+ I40E_TM_NODE_TYPE_PORT,
+ I40E_TM_NODE_TYPE_TC,
+ I40E_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct i40e_tm_node {
+ TAILQ_ENTRY(i40e_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct i40e_tm_node *parent;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(i40e_tm_node_list, i40e_tm_node);
+
/* Struct to store all the Traffic Manager configuration. */
struct i40e_tm_conf {
struct i40e_shaper_profile_list shaper_profile_list;
+ struct i40e_tm_node *root; /* root node - port */
+ struct i40e_tm_node_list tc_list; /* node list for all the TCs */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
};
/*
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 233adcf..6ebce77 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -46,11 +46,16 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
+ .node_add = i40e_node_add,
};
int
@@ -72,6 +77,11 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+
+ /* initialize node configuration */
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ pf->tm_conf.nb_tc_node = 0;
}
void
@@ -79,6 +89,18 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tc;
+
+ /* clear node configuration */
+ while ((tc = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tc, node);
+ rte_free(tc);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -272,3 +294,204 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_node *
+i40e_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum i40e_tm_node_type *node_type)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type = I40E_TM_NODE_TYPE_MAX;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tm_node;
+ uint16_t tc_nb = 0;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority not supported";
+ return -EINVAL;
+ }
+
+ if (weight) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight not supported";
+ return -EINVAL;
+ }
+
+ /* check if the node ID is already used */
+ if (i40e_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = i40e_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check the unsupported parameters */
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC node */
+ /* check the unsupported parameters */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ /* should have a root first */
+ if (!pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "no root yet";
+ return -EINVAL;
+ }
+ if (pf->tm_conf.root->id != parent_node_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent id doesn't belong to the root";
+ return -EINVAL;
+ }
+
+ /* check the TC number */
+ tc_nb = i40e_tc_nb_get(dev);
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too much TC";
+ return -EINVAL;
+ }
+
+ /* add the TC node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = pf->tm_conf.root;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->parent->reference_count++;
+ pf->tm_conf.nb_tc_node++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 06/20] net/i40e: support deleting TM node
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_node_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
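Since a node with a non-zero reference count (i.e. with children) is refused,
tear-down has to run leaf-first. A hedged sketch, reusing the node ids from the
cover-letter example and the assumed rte_tm_node_delete() prototype:

#include <rte_tm.h>

static void
app_teardown_tm(uint8_t port_id, uint32_t nb_tc, uint32_t root_node_id)
{
	struct rte_tm_error err;
	uint32_t tc;

	/* TC (leaf) nodes first: the root cannot go while it still has
	 * children counted in its reference_count */
	for (tc = 0; tc < nb_tc; tc++)
		rte_tm_node_delete(port_id, tc, &err);
	rte_tm_node_delete(port_id, root_node_id, &err);
}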
---
drivers/net/i40e/i40e_tm.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 6ebce77..20172d5 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -50,12 +50,15 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
uint32_t weight, struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
+ .node_delete = i40e_node_delete,
};
int
@@ -495,3 +498,54 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == I40E_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ pf->tm_conf.nb_tc_node--;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 07/20] net/i40e: support getting TM node type
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_node_type_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
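A hedged usage sketch (prototype assumed from the referenced proposal). Note that
this implementation derives the answer from the child reference count, so the port
node itself reads back as a leaf until the first TC node is attached beneath it.

#include <inttypes.h>
#include <stdio.h>
#include <rte_tm.h>

static void
app_show_node_type(uint8_t port_id, uint32_t node_id)
{
	struct rte_tm_error err;
	int is_leaf = 0;

	if (rte_tm_node_type_get(port_id, node_id, &is_leaf, &err) == 0)
		printf("node %" PRIu32 ": %s\n", node_id,
		       is_leaf ? "leaf" : "non-leaf");
}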
---
drivers/net/i40e/i40e_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 20172d5..899e88e 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -52,6 +52,8 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -59,6 +61,7 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
+ .node_type_get = i40e_node_type_get,
};
int
@@ -549,3 +552,35 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (tm_node->reference_count)
+ *is_leaf = false;
+ else
+ *is_leaf = true;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 08/20] net/i40e: support getting TM level capability
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_level_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
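A hedged sketch walking the two levels this driver exposes (0 = port, 1 = TC);
asking for a deeper level returns the "too deep level" error. The application-level
prototype is assumed from the referenced proposal.

#include <inttypes.h>
#include <stdio.h>
#include <rte_tm.h>

static void
app_show_level_caps(uint8_t port_id)
{
	struct rte_tm_level_capabilities lcap;
	struct rte_tm_error err;
	uint32_t level;

	for (level = 0; level < 2; level++) {	/* 0 = port, 1 = TC */
		if (rte_tm_level_capabilities_get(port_id, level,
						  &lcap, &err) != 0)
			break;
		printf("level %" PRIu32 ": max %" PRIu32 " nodes "
		       "(%" PRIu32 " non-leaf, %" PRIu32 " leaf)\n",
		       level, lcap.n_nodes_max, lcap.n_nodes_nonleaf_max,
		       lcap.n_nodes_leaf_max);
	}
}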
---
drivers/net/i40e/i40e_tm.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 899e88e..70e9b78 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -54,6 +54,10 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -62,6 +66,7 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
+ .level_capabilities_get = i40e_level_capabilities_get,
};
int
@@ -584,3 +589,65 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint16_t nb_tc = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= I40E_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ nb_tc = i40e_tc_nb_get(dev);
+
+ /* root node */
+ if (level_id == I40E_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = false;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC node */
+ cap->n_nodes_max = nb_tc;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = nb_tc;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->leaf.shaper_private_rate_max = 5000000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 09/20] net/i40e: support getting TM node capability
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_node_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 70e9b78..2d8217c 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -58,6 +58,10 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -67,6 +71,7 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
+ .node_capabilities_get = i40e_node_capabilities_get,
};
int
@@ -651,3 +656,55 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+ uint16_t nb_tc = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == I40E_TM_NODE_TYPE_PORT) {
+ nb_tc = i40e_tc_nb_get(dev);
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ } else {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 10/20] net/i40e: support committing TM hierarchy
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_hierarchy_commit.
When this API is called, the driver programs the TM configuration
into the hardware.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
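The commit converts the shaper rates, which are stored in bytes per second, into
the 50 Mbit/s credits the firmware expects, and port-level and per-TC limits are
mutually exclusive. A hedged sketch with the conversion worked through for one
value; the rte_tm_hierarchy_commit() prototype is assumed from the referenced
proposal.

#include <rte_tm.h>

static int
app_commit_tm(uint8_t port_id)
{
	struct rte_tm_error err;

	/* Worked example of the Bps -> 50 Mbit/s conversion done in the
	 * driver: 1250000000 B/s (10 Gbit/s) * 8 / 1000 / 1000 = 10000
	 * Mbit/s, 10000 / I40E_QOS_BW_GRANULARITY (50) = 200 credits
	 * handed to the admin queue command. */

	/* clear_on_fail = 1: drop the cached hierarchy if programming fails */
	return rte_tm_hierarchy_commit(port_id, 1, &err);
}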
---
drivers/net/i40e/i40e_ethdev.h | 9 ++++
drivers/net/i40e/i40e_tm.c | 105 ++++++++++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 ----
3 files changed, 114 insertions(+), 9 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 34ba3e5..741cf92 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -252,6 +252,15 @@ enum i40e_flxpld_layer_idx {
I40E_INSET_FLEX_PAYLOAD_W5 | I40E_INSET_FLEX_PAYLOAD_W6 | \
I40E_INSET_FLEX_PAYLOAD_W7 | I40E_INSET_FLEX_PAYLOAD_W8)
+/* The max bandwidth of i40e is 40Gbps. */
+#define I40E_QOS_BW_MAX 40000
+/* The bandwidth should be the multiple of 50Mbps. */
+#define I40E_QOS_BW_GRANULARITY 50
+/* The min bandwidth weight is 1. */
+#define I40E_QOS_BW_WEIGHT_MIN 1
+/* The max bandwidth weight is 127. */
+#define I40E_QOS_BW_WEIGHT_MAX 127
+
/**
* The overhead from MTU to max frame size.
* Considering QinQ packet, the VLAN tag needs to be counted twice.
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 2d8217c..a9c5900 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -62,6 +62,9 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -72,6 +75,7 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
.node_capabilities_get = i40e_node_capabilities_get,
+ .hierarchy_commit = i40e_hierarchy_commit,
};
int
@@ -708,3 +712,104 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node *tm_node;
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw;
+ struct i40e_aqc_configure_vsi_ets_sla_bw_data tc_bw;
+ uint64_t bw;
+ uint8_t tc_map;
+ int ret;
+ int i;
+
+ if (!error)
+ return -EINVAL;
+
+ /* check the setting */
+ if (!pf->tm_conf.root)
+ return 0;
+
+ vsi = pf->main_vsi;
+ hw = I40E_VSI_TO_HW(vsi);
+
+ /**
+ * Don't support bandwidth control for port and TCs in parallel.
+ * If the port has a max bandwidth, the TCs should have none.
+ */
+ /* port */
+ bw = pf->tm_conf.root->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* check if any TC has a max bandwidth */
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port and TC max bandwidth"
+ " in parallel";
+ goto fail_clear;
+ }
+ }
+
+ /* interpret Bps to 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ /* set the max bandwidth */
+ ret = i40e_aq_config_vsi_bw_limit(hw, vsi->seid,
+ (uint16_t)bw, 0, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set port max bandwidth";
+ goto fail_clear;
+ }
+
+ return 0;
+ }
+
+ /* TC */
+ memset(&tc_bw, 0, sizeof(tc_bw));
+ tc_bw.tc_valid_bits = vsi->enabled_tc;
+ tc_map = vsi->enabled_tc;
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ i = 0;
+ while (i < I40E_MAX_TRAFFIC_CLASS && !(tc_map & BIT_ULL(i)))
+ i++;
+ if (i >= I40E_MAX_TRAFFIC_CLASS) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "cannot find the TC";
+ goto fail_clear;
+ }
+ tc_map &= ~BIT_ULL(i);
+
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (!bw)
+ continue;
+
+ /* interpret Bps to 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ tc_bw.tc_bw_credits[i] = rte_cpu_to_le_16((uint16_t)bw);
+ }
+
+ ret = i40e_aq_config_vsi_ets_sla_bw_limit(hw, vsi->seid, &tc_bw, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set TC max bandwidth";
+ goto fail_clear;
+ }
+
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ i40e_tm_conf_uninit(dev);
+ i40e_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index f7ce62b..4f94678 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -40,15 +40,6 @@
#include "i40e_rxtx.h"
#include "rte_pmd_i40e.h"
-/* The max bandwidth of i40e is 40Gbps. */
-#define I40E_QOS_BW_MAX 40000
-/* The bandwidth should be the multiple of 50Mbps. */
-#define I40E_QOS_BW_GRANULARITY 50
-/* The min bandwidth weight is 1. */
-#define I40E_QOS_BW_WEIGHT_MIN 1
-/* The max bandwidth weight is 127. */
-#define I40E_QOS_BW_WEIGHT_MAX 127
-
int
rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf)
{
--
1.9.3
* [dpdk-dev] [PATCH 11/20] net/ixgbe: support getting TM ops
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
To support the QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops, with no callbacks implemented yet.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 50 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 54 insertions(+)
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 5529d81..0595dcf 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -124,6 +124,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
# install this header file
SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 2083cde..4433590 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -608,6 +608,7 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
.l2_tunnel_offload_set = ixgbe_dev_l2_tunnel_offload_set,
.udp_tunnel_port_add = ixgbe_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ixgbe_dev_udp_tunnel_port_del,
+ .tm_ops_get = ixgbe_tm_ops_get,
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b576a6f..7e99fd3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -39,6 +39,7 @@
#include "ixgbe_bypass.h"
#include <rte_time.h>
#include <rte_hash.h>
+#include <rte_tm_driver.h>
/* need update link, bit flag */
#define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -671,6 +672,7 @@ int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
+int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
new file mode 100644
index 0000000..0a222a1
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -0,0 +1,50 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "ixgbe_ethdev.h"
+
+const struct rte_tm_ops ixgbe_tm_ops = {
+ NULL,
+};
+
+int
+ixgbe_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ixgbe_tm_ops;
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 12/20] net/ixgbe: support getting TM capability
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API rte_tm_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
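Unlike i40e, the TC count reported here is derived purely from the port's DCB TX
configuration, so the capability simply reflects what the application configured
before starting the port. A hedged configuration sketch (the matching RX/DCB
settings and error handling are omitted); with this setup ixgbe_tc_nb_get()
reports 4 TCs, and 1 when no DCB TX mode is configured:

#include <rte_ethdev.h>

static int
app_configure_dcb_4tc(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = { 0 };

	conf.txmode.mq_mode = ETH_MQ_TX_DCB;
	conf.tx_adv_conf.dcb_tx_conf.nb_tcs = ETH_4_TCS;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}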
---
drivers/net/ixgbe/ixgbe_tm.c | 90 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 89 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 0a222a1..77066b7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -33,8 +33,12 @@
#include "ixgbe_ethdev.h"
+static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops ixgbe_tm_ops = {
- NULL,
+ .capabilities_get = ixgbe_tm_capabilities_get,
};
int
@@ -48,3 +52,87 @@
return 0;
}
+
+static inline uint8_t
+ixgbe_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *eth_conf;
+ uint8_t nb_tcs = 0;
+
+ eth_conf = &dev->data->dev_conf;
+ if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
+ } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
+ ETH_32_POOLS)
+ nb_tcs = ETH_4_TCS;
+ else
+ nb_tcs = ETH_8_TCS;
+ } else {
+ nb_tcs = 1;
+ }
+
+ return nb_tcs;
+}
+
+static int
+ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint8_t nb_tcs;
+ uint8_t nb_queues;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ nb_tcs = ixgbe_tc_nb_get(dev);
+ nb_queues = dev->data->nb_tx_queues;
+ /* port + TCs + queues */
+ cap->n_nodes_max = 1 + nb_tcs + nb_queues;
+ cap->n_levels_max = 3;
+ cap->non_leaf_nodes_identical = 0;
+ cap->leaf_nodes_identical = 0;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = (nb_tcs > nb_queues) ? nb_tcs : nb_queues;
+ cap->sched_sp_n_priorities_max = 0;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ cap->sched_wfq_weight_max = 0;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+
+ /**
+ * not supported parameters are 0, below,
+ * shaper_pkt_length_adjust_min
+ * shaper_pkt_length_adjust_max
+ * cman_wred_context_n_max
+ * cman_wred_context_private_n_max
+ * cman_wred_context_shared_n_max
+ * cman_wred_context_shared_n_nodes_per_context_max
+ * cman_wred_context_shared_n_contexts_per_node_max
+ * mark_vlan_dei_supported
+ * mark_ip_ecn_tcp_supported
+ * mark_ip_ecn_sctp_supported
+ * mark_ip_dscp_supported
+ * stats_mask
+ */
+
+ return 0;
+}
--
1.9.3
* [dpdk-dev] [PATCH 13/20] net/ixgbe: support adding TM shaper profile
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 6 +++
drivers/net/ixgbe/ixgbe_ethdev.h | 21 ++++++++
drivers/net/ixgbe/ixgbe_tm.c | 111 +++++++++++++++++++++++++++++++++++++++
3 files changed, 138 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 4433590..d339fc4 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1360,6 +1360,9 @@ struct rte_ixgbe_xstats_name_off {
/* initialize bandwidth configuration info */
memset(bw_conf, 0, sizeof(struct ixgbe_bw_conf));
+ /* initialize Traffic Manager configuration */
+ ixgbe_tm_conf_init(eth_dev);
+
return 0;
}
@@ -1413,6 +1416,9 @@ struct rte_ixgbe_xstats_name_off {
/* clear all the filters list */
ixgbe_filterlist_flush();
+ /* Remove all Traffic Manager configuration */
+ ixgbe_tm_conf_uninit(eth_dev);
+
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 7e99fd3..b647702 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -435,6 +435,21 @@ struct ixgbe_bw_conf {
uint8_t tc_num; /* Number of TCs. */
};
+/* Struct to store Traffic Manager shaper profile. */
+struct ixgbe_tm_shaper_profile {
+ TAILQ_ENTRY(ixgbe_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+
+/* The configuration of Traffic Manager */
+struct ixgbe_tm_conf {
+ struct ixgbe_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -463,6 +478,7 @@ struct ixgbe_adapter {
struct rte_timecounter systime_tc;
struct rte_timecounter rx_tstamp_tc;
struct rte_timecounter tx_tstamp_tc;
+ struct ixgbe_tm_conf tm_conf;
};
#define IXGBE_DEV_TO_PCI(eth_dev) \
@@ -513,6 +529,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_BW_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->bw_conf)
+#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
+ (&((struct ixgbe_adapter *)adapter)->tm_conf)
+
/*
* RX/TX function prototypes
*/
@@ -673,6 +692,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
+void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 77066b7..89e795a 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -31,14 +31,21 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "ixgbe_ethdev.h"
static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
+ .shaper_profile_add = ixgbe_shaper_profile_add,
};
int
@@ -53,6 +60,32 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+ixgbe_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&tm_conf->shaper_profile_list);
+}
+
+void
+ixgbe_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&tm_conf->shaper_profile_list))) {
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint8_t
ixgbe_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -136,3 +169,81 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_shaper_profile *
+ixgbe_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_shaper_profile_list *shaper_profile_list =
+ &tm_conf->shaper_profile_list;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ixgbe_tm_shaper_profile",
+ sizeof(struct ixgbe_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
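As a rough usage sketch to go with this op (not part of the patch): the driver
only accepts a peak rate, so committed rate/size, peak bucket size and packet
length adjustment are left at zero. The profile ID and the 1 Gbps figure are
made up for illustration; rte_tm shaper rates are expressed in bytes per
second.

#include <string.h>
#include <rte_tm.h>

/* Register a peak-rate-only shaper profile of roughly 1 Gbps. */
static int
add_1g_profile(uint8_t port_id, uint32_t profile_id)
{
    struct rte_tm_shaper_params profile;
    struct rte_tm_error error;

    memset(&profile, 0, sizeof(profile));
    /* only the peak rate is used; everything else must stay 0 */
    profile.peak.rate = 1000000000ULL / 8; /* 1 Gbps in bytes/sec */

    return rte_tm_shaper_profile_add(port_id, profile_id,
                                     &profile, &error);
}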
* [dpdk-dev] [PATCH 14/20] net/ixgbe: support deleting TM shaper profile
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (12 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 13/20] net/ixgbe: support adding TM shaper profile Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
` (8 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 89e795a..b3b1acf 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -42,10 +42,14 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
+ .shaper_profile_delete = ixgbe_shaper_profile_del,
};
int
@@ -247,3 +251,36 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
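A matching teardown sketch (illustrative only): because of the reference count
added above, a profile can only be deleted once every node that points at it
has been removed.

#include <stdio.h>
#include <rte_tm.h>

/* Remove a shaper profile; fails while a node still references it. */
static void
remove_profile(uint8_t port_id, uint32_t profile_id)
{
    struct rte_tm_error error;

    if (rte_tm_shaper_profile_delete(port_id, profile_id, &error))
        printf("delete failed: %s\n",
               error.message ? error.message : "(no message)");
}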
* [dpdk-dev] [PATCH 15/20] net/ixgbe: support adding TM node
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (13 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 14/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 16/20] net/ixgbe: support deleting " Wenzhuo Lu
` (7 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.h | 35 ++++++
drivers/net/ixgbe/ixgbe_tm.c | 259 +++++++++++++++++++++++++++++++++++++++
2 files changed, 294 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b647702..ccde335 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -445,9 +445,44 @@ struct ixgbe_tm_shaper_profile {
TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum ixgbe_tm_node_type {
+ IXGBE_TM_NODE_TYPE_PORT,
+ IXGBE_TM_NODE_TYPE_TC,
+ IXGBE_TM_NODE_TYPE_QUEUE,
+ IXGBE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ixgbe_tm_node {
+ TAILQ_ENTRY(ixgbe_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct ixgbe_tm_node *parent;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(ixgbe_tm_node_list, ixgbe_tm_node);
+
/* The configuration of Traffic Manager */
struct ixgbe_tm_conf {
struct ixgbe_shaper_profile_list shaper_profile_list;
+ struct ixgbe_tm_node *root; /* root node - port */
+ struct ixgbe_tm_node_list tc_list; /* node list for all the TCs */
+ struct ixgbe_tm_node_list queue_list; /* node list for all the queues */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
+ /**
+ * The number of added queue nodes.
+ * It should be no more than the queue number of this port.
+ */
+ uint32_t nb_queue_node;
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index b3b1acf..16e8f89 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -45,11 +45,16 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
+ .node_add = ixgbe_node_add,
};
int
@@ -72,6 +77,13 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&tm_conf->shaper_profile_list);
+
+ /* initialize node configuration */
+ tm_conf->root = NULL;
+ TAILQ_INIT(&tm_conf->queue_list);
+ TAILQ_INIT(&tm_conf->tc_list);
+ tm_conf->nb_tc_node = 0;
+ tm_conf->nb_queue_node = 0;
}
void
@@ -80,6 +92,23 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&tm_conf->queue_list))) {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&tm_conf->tc_list))) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_tc_node = 0;
+ if (tm_conf->root) {
+ rte_free(tm_conf->root);
+ tm_conf->root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -284,3 +313,233 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_node *
+ixgbe_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum ixgbe_tm_node_type *node_type)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+
+ if (tm_conf->root && tm_conf->root->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_PORT;
+ return tm_conf->root;
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ enum ixgbe_tm_node_type parent_node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+ struct ixgbe_tm_node *parent_node;
+ uint8_t nb_tcs;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority not supported";
+ return -EINVAL;
+ }
+
+ if (weight) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight not supported";
+ return -EINVAL;
+ }
+
+ /* check if the node ID is already used */
+ if (ixgbe_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = ixgbe_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* a node without a parent is the root node */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check the unsupported parameters */
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (tm_conf->root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ tm_conf->root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the unsupported parameters */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ /* check the parent node */
+ parent_node = ixgbe_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != IXGBE_TM_NODE_TYPE_PORT &&
+ parent_node_type != IXGBE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not port or TC";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ /* check TC number */
+ nb_tcs = ixgbe_tc_nb_get(dev);
+ if (tm_conf->nb_tc_node >= nb_tcs) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too much TC";
+ return -EINVAL;
+ }
+ } else {
+ /* check queue number */
+ if (tm_conf->nb_queue_node >= dev->data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too much queue";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&tm_conf->tc_list,
+ tm_node, node);
+ tm_conf->nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&tm_conf->queue_list,
+ tm_node, node);
+ tm_conf->nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
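To make the expected hierarchy concrete, here is a hedged application-side
sketch that builds the three levels this op understands: one port (root) node,
one TC node and one queue node. Node and profile IDs are arbitrary, priority
and weight are passed as 0 because the driver rejects anything else, and
rte_tm_node_add() is shown with the level_id argument of the merged rte_tm
API; the in-flight API this series was built against may differ slightly. The
rate-limited profile is attached only to the queue node, since the commit op
added later in the series rejects a peak rate on port and TC nodes.

#include <string.h>
#include <rte_tm.h>

/* Build port (root) -> TC -> queue, shaping only the queue node. */
static int
build_hierarchy(uint8_t port_id, uint32_t unlimited_prof,
                uint32_t queue_prof)
{
    struct rte_tm_node_params np;
    struct rte_tm_error error;
    int ret;

    /* root (port) node: no parent; priority/weight unsupported -> 0 */
    memset(&np, 0, sizeof(np));
    np.shaper_profile_id = unlimited_prof;
    ret = rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL, 0, 0,
                          RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
    if (ret)
        return ret;

    /* one TC node under the port; WRED must be marked as unused */
    memset(&np, 0, sizeof(np));
    np.shaper_profile_id = unlimited_prof;
    np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
    ret = rte_tm_node_add(port_id, 900, 1000, 0, 0,
                          RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
    if (ret)
        return ret;

    /* one queue node under the TC, using the rate-limited profile */
    np.shaper_profile_id = queue_prof;
    return rte_tm_node_add(port_id, 0, 900, 0, 0,
                           RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
}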
* [dpdk-dev] [PATCH 16/20] net/ixgbe: support deleting TM node
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (14 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
` (6 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 16e8f89..39ec272 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -49,12 +49,15 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t parent_node_id, uint32_t priority,
uint32_t weight, struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
+ .node_delete = ixgbe_node_delete,
};
int
@@ -543,3 +546,60 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ tm_conf->root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ if (node_type == IXGBE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ tm_conf->nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ tm_conf->nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
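A corresponding teardown sketch, reusing the node IDs from the earlier
example: since this op refuses to delete a node that still has children, the
hierarchy is removed leaf-to-root.

#include <rte_tm.h>

/* Delete the example hierarchy bottom-up: queue, then TC, then root. */
static int
destroy_hierarchy(uint8_t port_id)
{
    struct rte_tm_error error;
    int ret;

    ret = rte_tm_node_delete(port_id, 0, &error);      /* queue node */
    if (ret)
        return ret;
    ret = rte_tm_node_delete(port_id, 900, &error);    /* TC node */
    if (ret)
        return ret;
    return rte_tm_node_delete(port_id, 1000, &error);  /* root node */
}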
* [dpdk-dev] [PATCH 17/20] net/ixgbe: support getting TM node type
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (15 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 16/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
` (5 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_type_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 39ec272..68b26cc 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -51,6 +51,8 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -58,6 +60,7 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
+ .node_type_get = ixgbe_node_type_get,
};
int
@@ -603,3 +606,35 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ixgbe_tm_node_type node_type;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (tm_node->reference_count)
+ *is_leaf = false;
+ else
+ *is_leaf = true;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
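A short illustrative use of this op, with a node ID assumed from the earlier
sketch: the driver reports a node with children as non-leaf and a childless
node as leaf.

#include <stdio.h>
#include <rte_tm.h>

/* Report whether a node currently acts as a leaf. */
static void
show_node_type(uint8_t port_id, uint32_t node_id)
{
    struct rte_tm_error error;
    int is_leaf = 0;

    if (rte_tm_node_type_get(port_id, node_id, &is_leaf, &error) == 0)
        printf("node %u is a %s node\n", node_id,
               is_leaf ? "leaf" : "non-leaf");
}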
* [dpdk-dev] [PATCH 18/20] net/ixgbe: support getting TM level capability
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (16 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
` (4 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_level_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 78 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 68b26cc..4a9947d 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -53,6 +53,10 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -61,6 +65,7 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
+ .level_capabilities_get = ixgbe_level_capabilities_get,
};
int
@@ -638,3 +643,76 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint8_t nb_tc = 0;
+ uint8_t nb_queue = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= IXGBE_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ nb_tc = ixgbe_tc_nb_get(dev);
+ nb_queue = dev->data->nb_tx_queues;
+
+ /* root node */
+ if (level_id == IXGBE_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = false;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->nonleaf.shaper_private_rate_max = 1250000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ if (level_id == IXGBE_TM_NODE_TYPE_TC) {
+ /* TC */
+ cap->n_nodes_max = nb_tc;
+ cap->n_nodes_nonleaf_max = nb_tc;
+ cap->n_nodes_leaf_max = nb_tc;
+ cap->non_leaf_nodes_identical = true;
+ } else {
+ /* queue */
+ cap->n_nodes_max = nb_queue;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = nb_queue;
+ cap->non_leaf_nodes_identical = false;
+ }
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->leaf.shaper_private_rate_max = 1250000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
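For illustration, an application can walk the levels reported by the
capability op (for this driver: 0 = port, 1 = TC, 2 = queue) and dump what
each one supports. The n_levels value is assumed to come from
rte_tm_capabilities_get(); everything else is a hedged sketch.

#include <stdio.h>
#include <rte_tm.h>

/* Print the node budget of every hierarchy level. */
static void
dump_levels(uint8_t port_id, uint32_t n_levels)
{
    struct rte_tm_level_capabilities cap;
    struct rte_tm_error error;
    uint32_t level;

    for (level = 0; level < n_levels; level++) {
        if (rte_tm_level_capabilities_get(port_id, level, &cap, &error))
            continue;
        printf("level %u: max nodes %u (non-leaf %u, leaf %u)\n",
               level, cap.n_nodes_max, cap.n_nodes_nonleaf_max,
               cap.n_nodes_leaf_max);
    }
}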
* [dpdk-dev] [PATCH 19/20] net/ixgbe: support getting TM node capability
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (17 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-05-27 8:17 ` [dpdk-dev] [PATCH 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
` (3 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 63 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 4a9947d..abb4643 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -57,6 +57,10 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -66,6 +70,7 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
+ .node_capabilities_get = ixgbe_node_capabilities_get,
};
int
@@ -716,3 +721,61 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ enum ixgbe_tm_node_type node_type;
+ struct ixgbe_tm_node *tm_node;
+ uint8_t nb_tc = 0;
+ uint8_t nb_queue = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ nb_tc = ixgbe_tc_nb_get(dev);
+ nb_queue = dev->data->nb_tx_queues;
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ } else {
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT)
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ else
+ cap->nonleaf.sched_n_children_max = nb_queue;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
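And a per-node variant, again with an assumed node ID: the op above mainly
reports the private shaper limits and the child budget of an existing node.

#include <stdio.h>
#include <inttypes.h>
#include <rte_tm.h>

/* Show the private shaper limit reported for one node. */
static void
show_node_caps(uint8_t port_id, uint32_t node_id)
{
    struct rte_tm_node_capabilities cap;
    struct rte_tm_error error;

    if (rte_tm_node_capabilities_get(port_id, node_id, &cap, &error))
        return;
    printf("node %u: private shaper %ssupported, max rate %" PRIu64 " Bps\n",
           node_id, cap.shaper_private_supported ? "" : "not ",
           cap.shaper_private_rate_max);
}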
* [dpdk-dev] [PATCH 20/20] net/ixgbe: support committing TM hierarchy
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (18 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
@ 2017-05-27 8:17 ` Wenzhuo Lu
2017-06-08 11:19 ` [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Ferruh Yigit
` (2 subsequent siblings)
22 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-05-27 8:17 UTC (permalink / raw)
To: dev; +Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_hierarchy_commit.
When this API is called, the driver applies
the TM configuration to the hardware.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 8 ++---
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 69 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 74 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index d339fc4..e234177 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -302,9 +302,6 @@ static void ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
static void ixgbe_configure_msix(struct rte_eth_dev *dev);
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate);
-
static int ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
uint32_t index, uint32_t pool);
@@ -5605,8 +5602,9 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
IXGBE_WRITE_REG(hw, IXGBE_EIAC, mask);
}
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate)
+int
+ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t rf_dec, rf_int;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index ccde335..48cf5b6 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -729,6 +729,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
+int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t tx_rate);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index abb4643..c52f591 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -61,6 +61,9 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -71,6 +74,7 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
.node_capabilities_get = ixgbe_node_capabilities_get,
+ .hierarchy_commit = ixgbe_hierarchy_commit,
};
int
@@ -779,3 +783,68 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+ uint64_t bw;
+ int ret;
+ int i;
+
+ if (!error)
+ return -EINVAL;
+
+ /* nothing to commit if no root node has been added yet */
+ if (!tm_conf->root)
+ return 0;
+
+ /* not support port max bandwidth yet */
+ if (tm_conf->root->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port max bandwidth";
+ goto fail_clear;
+ }
+
+ /* HW not support TC max bandwidth */
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no TC max bandwidth";
+ goto fail_clear;
+ }
+ }
+
+ /* queue max bandwidth */
+ i = 0;
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* convert the rate from bytes/second to Mbps */
+ bw = bw * 8 / 1000 / 1000;
+ ret = ixgbe_set_queue_rate_limit(dev, i, bw);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message =
+ "failed to set queue max bandwidth";
+ goto fail_clear;
+ }
+ }
+
+ i++;
+ }
+
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ixgbe_tm_conf_uninit(dev);
+ ixgbe_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
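Pulling the earlier sketches together, a hedged end-to-end example of what a
commit-based setup might look like: add the profiles, build the hierarchy,
then commit with clear_on_fail set so a configuration the hardware rejects is
rolled back. add_1g_profile() and build_hierarchy() are the helpers sketched
after the earlier patches, not functions from this series.

#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

/* End-to-end TM setup: profiles -> hierarchy -> hierarchy commit. */
static int
apply_tm_config(uint8_t port_id)
{
    struct rte_tm_shaper_params unlimited;
    struct rte_tm_error error = { 0 };
    int ret;

    /* profile 0: no rate limit, used by the port and TC nodes */
    memset(&unlimited, 0, sizeof(unlimited));
    ret = rte_tm_shaper_profile_add(port_id, 0, &unlimited, &error);

    /* profile 1: ~1 Gbps, used by the queue node */
    if (ret == 0)
        ret = add_1g_profile(port_id, 1);
    if (ret == 0)
        ret = build_hierarchy(port_id, 0, 1);
    if (ret == 0)
        /* clear_on_fail = 1: drop the SW config if the HW rejects it */
        ret = rte_tm_hierarchy_commit(port_id, 1, &error);

    if (ret)
        printf("TM setup failed: %s\n",
               error.message ? error.message : "(no message)");
    return ret;
}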
* Re: [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (19 preceding siblings ...)
2017-05-27 8:17 ` [dpdk-dev] [PATCH 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
@ 2017-06-08 11:19 ` Ferruh Yigit
2017-06-08 12:52 ` Thomas Monjalon
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
22 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2017-06-08 11:19 UTC (permalink / raw)
To: Wenzhuo Lu, dev, Dumitrescu, Cristian, Thomas Monjalon
Cc: jingjing.wu, cristian.dumitrescu, jasvinder.singh
On 5/27/2017 9:17 AM, Wenzhuo Lu wrote:
> Implement the traffic manager APIs on i40e and ixgbe.
> This patch set is based on the patch set,
> "ethdev: abstraction layer for QoS traffic management"
> http://dpdk.org/dev/patchwork/patch/24411/
> http://dpdk.org/dev/patchwork/patch/24412/
>
> Wenzhuo Lu (20):
> net/i40e: support getting TM ops
> net/i40e: support getting TM capability
> net/i40e: support adding TM shaper profile
> net/i40e: support deleting TM shaper profile
> net/i40e: support adding TM node
> net/i40e: support deleting TM node
> net/i40e: support getting TM node type
> net/i40e: support getting TM level capability
> net/i40e: support getting TM node capability
> net/i40e: support committing TM hierarchy
> net/ixgbe: support getting TM ops
> net/ixgbe: support getting TM capability
> net/ixgbe: support adding TM shaper profile
> net/ixgbe: support deleting TM shaper profile
> net/ixgbe: support adding TM node
> net/ixgbe: support deleting TM node
> net/ixgbe: support getting TM node type
> net/ixgbe: support getting TM level capability
> net/ixgbe: support getting TM node capability
> net/ixgbe: support committing TM hierarchy
Series Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Hi Cristian, Thomas,
Since these features were developed based on the TM code in the next-tm tree, I am
for getting these patches via next-tm instead of next-net. Any objection?
Thanks,
ferruh
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe
2017-06-08 11:19 ` [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Ferruh Yigit
@ 2017-06-08 12:52 ` Thomas Monjalon
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Monjalon @ 2017-06-08 12:52 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Wenzhuo Lu, dev, Dumitrescu, Cristian, jingjing.wu, jasvinder.singh
08/06/2017 13:19, Ferruh Yigit:
> Hi Cristian, Thomas,
>
> Since these features developed based on TM code in next-tm tree, I am
> for getting these patches via next-tm, instead of next-net, any objection?
I agree
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 00/20] traffic manager on i40e and ixgbe
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (20 preceding siblings ...)
2017-06-08 11:19 ` [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Ferruh Yigit
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 01/20] net/i40e: support getting TM ops Wenzhuo Lu
` (19 more replies)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
22 siblings, 20 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Implement the traffic manager APIs on i40e and ixgbe.
This patch set is based on the patch set,
"ethdev: abstraction layer for QoS traffic management"
http://dpdk.org/dev/patchwork/patch/25275/
http://dpdk.org/dev/patchwork/patch/25276/
Series Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
v2:
- reworked based on the new TM APIs.
- fixed a queue mapping issue on ixgbe.
Wenzhuo Lu (20):
net/i40e: support getting TM ops
net/i40e: support getting TM capability
net/i40e: support adding TM shaper profile
net/i40e: support deleting TM shaper profile
net/i40e: support adding TM node
net/i40e: support deleting TM node
net/i40e: support getting TM node type
net/i40e: support getting TM level capability
net/i40e: support getting TM node capability
net/i40e: support committing TM hierarchy
net/ixgbe: support getting TM ops
net/ixgbe: support getting TM capability
net/ixgbe: support adding TM shaper profile
net/ixgbe: support deleting TM shaper profile
net/ixgbe: support adding TM node
net/ixgbe: support deleting TM node
net/ixgbe: support getting TM node type
net/ixgbe: support getting TM level capability
net/ixgbe: support getting TM node capability
net/ixgbe: support committing TM hierarchy
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 7 +
drivers/net/i40e/i40e_ethdev.h | 57 +++
drivers/net/i40e/i40e_tm.c | 831 +++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 -
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 15 +-
drivers/net/ixgbe/ixgbe_ethdev.h | 61 +++
drivers/net/ixgbe/ixgbe_tm.c | 970 +++++++++++++++++++++++++++++++++++++++
9 files changed, 1938 insertions(+), 14 deletions(-)
create mode 100644 drivers/net/i40e/i40e_tm.c
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 01/20] net/i40e: support getting TM ops
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 02/20] net/i40e: support getting TM capability Wenzhuo Lu
` (18 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
To support QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops but without any function
implemented.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 1 +
drivers/net/i40e/i40e_ethdev.h | 2 ++
drivers/net/i40e/i40e_tm.c | 51 ++++++++++++++++++++++++++++++++++++++++++
4 files changed, 55 insertions(+)
create mode 100644 drivers/net/i40e/i40e_tm.c
diff --git a/drivers/net/i40e/Makefile b/drivers/net/i40e/Makefile
index 56f210d..33be5f9 100644
--- a/drivers/net/i40e/Makefile
+++ b/drivers/net/i40e/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_flow.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += rte_pmd_i40e.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_tm.c
# vector PMD driver needs SSE4.1 support
ifeq ($(findstring RTE_MACHINE_CPUFLAG_SSE4_1,$(CFLAGS)),)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c18a93b..050d7f7 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -515,6 +515,7 @@ static int i40e_sw_tunnel_filter_insert(struct i40e_pf *pf,
.get_eeprom = i40e_get_eeprom,
.mac_addr_set = i40e_set_default_mac_addr,
.mtu_set = i40e_dev_mtu_set,
+ .tm_ops_get = i40e_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2ff8282..e5301ee 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -39,6 +39,7 @@
#include <rte_kvargs.h>
#include <rte_hash.h>
#include <rte_flow_driver.h>
+#include <rte_tm_driver.h>
#define I40E_VLAN_TAG_SIZE 4
@@ -892,6 +893,7 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
struct i40e_macvlan_filter *filter,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
+int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
new file mode 100644
index 0000000..2f4c866
--- /dev/null
+++ b/drivers/net/i40e/i40e_tm.c
@@ -0,0 +1,51 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "base/i40e_prototype.h"
+#include "i40e_ethdev.h"
+
+const struct rte_tm_ops i40e_tm_ops = {
+ NULL,
+};
+
+int
+i40e_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &i40e_tm_ops;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
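For orientation (and deliberately simplified), this is roughly how the generic
rte_tm layer from the referenced abstraction-layer series is expected to reach
the ops table registered here. The helper name is made up for this sketch; the
real lookup lives in the ethdev TM code, not in the driver.

#include <rte_ethdev.h>
#include <rte_tm_driver.h>

/* Fetch the per-port TM ops the PMD registered via tm_ops_get. */
static const struct rte_tm_ops *
tm_ops_of(struct rte_eth_dev *dev)
{
    const struct rte_tm_ops *ops = NULL;

    if (dev->dev_ops->tm_ops_get == NULL ||
        dev->dev_ops->tm_ops_get(dev, &ops) != 0)
        return NULL;

    return ops;
}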
* [dpdk-dev] [PATCH v2 02/20] net/i40e: support getting TM capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 01/20] net/i40e: support getting TM ops Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 03/20] net/i40e: support adding TM shaper profile Wenzhuo Lu
` (17 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 82 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 81 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 2f4c866..86a2f74 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -34,8 +34,12 @@
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
+static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops i40e_tm_ops = {
- NULL,
+ .capabilities_get = i40e_tm_capabilities_get,
};
int
@@ -49,3 +53,79 @@
return 0;
}
+
+static inline uint16_t
+i40e_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ uint16_t sum = 0;
+ int i;
+
+ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+ if (main_vsi->enabled_tc & BIT_ULL(i))
+ sum++;
+ }
+
+ return sum;
+}
+
+static int
+i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint16_t tc_nb = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ /* only support port + TCs */
+ tc_nb = i40e_tc_nb_get(dev);
+ cap->n_nodes_max = tc_nb + 1;
+ cap->n_levels_max = 2;
+ cap->non_leaf_nodes_identical = 0;
+ cap->leaf_nodes_identical = 0;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = tc_nb;
+ cap->sched_sp_n_priorities_max = 0;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ cap->sched_wfq_weight_max = 0;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+
+ /**
+ * the following parameters are not supported and are left as 0:
+ * shaper_pkt_length_adjust_min
+ * shaper_pkt_length_adjust_max
+ * cman_wred_context_n_max
+ * cman_wred_context_private_n_max
+ * cman_wred_context_shared_n_max
+ * cman_wred_context_shared_n_nodes_per_context_max
+ * cman_wred_context_shared_n_contexts_per_node_max
+ * mark_vlan_dei_supported
+ * mark_ip_ecn_tcp_supported
+ * mark_ip_ecn_sctp_supported
+ * mark_ip_dscp_supported
+ * stats_mask
+ */
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
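One way an application can tell the two hierarchies in this series apart
without hard-coding the PMD name is to look at n_levels_max: i40e reports 2
levels (port + TCs) here, while ixgbe reports 3 (port + TCs + queues). A small
hedged sketch, assuming a configured port:

#include <rte_tm.h>

/* Return 1 if the port exposes a queue level, 0 if not, -1 on error. */
static int
tm_has_queue_level(uint8_t port_id)
{
    struct rte_tm_capabilities cap;
    struct rte_tm_error error;

    if (rte_tm_capabilities_get(port_id, &cap, &error))
        return -1;

    return cap.n_levels_max >= 3;
}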
* [dpdk-dev] [PATCH v2 03/20] net/i40e: support adding TM shaper profile
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 01/20] net/i40e: support getting TM ops Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 02/20] net/i40e: support getting TM capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 04/20] net/i40e: support deleting " Wenzhuo Lu
` (16 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 6 +++
drivers/net/i40e/i40e_ethdev.h | 18 +++++++
drivers/net/i40e/i40e_tm.c | 107 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 131 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 050d7f7..498433d 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1299,6 +1299,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize Traffic Manager configuration */
+ i40e_tm_conf_init(dev);
+
ret = i40e_init_ethtype_filter_list(dev);
if (ret < 0)
goto err_init_ethtype_filter_list;
@@ -1462,6 +1465,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
rte_free(p_flow);
}
+ /* Remove all Traffic Manager configuration */
+ i40e_tm_conf_uninit(dev);
+
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index e5301ee..da73d64 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -626,6 +626,21 @@ struct rte_flow {
TAILQ_HEAD(i40e_flow_list, rte_flow);
+/* Struct to store Traffic Manager shaper profile. */
+struct i40e_tm_shaper_profile {
+ TAILQ_ENTRY(i40e_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+
+/* Struct to store all the Traffic Manager configuration. */
+struct i40e_tm_conf {
+ struct i40e_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data specific for PF instance.
*/
@@ -686,6 +701,7 @@ struct i40e_pf {
struct i40e_flow_list flow_list;
bool mpls_replace_flag; /* 1 - MPLS filter replace is done */
bool qinq_replace_flag; /* QINQ filter replace is done */
+ struct i40e_tm_conf tm_conf;
};
enum pending_msg {
@@ -894,6 +910,8 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void i40e_tm_conf_init(struct rte_eth_dev *dev);
+void i40e_tm_conf_uninit(struct rte_eth_dev *dev);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 86a2f74..a71ff45 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -31,15 +31,22 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
+ .shaper_profile_add = i40e_shaper_profile_add,
};
int
@@ -54,6 +61,30 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+i40e_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+}
+
+void
+i40e_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&pf->tm_conf.shaper_profile_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint16_t
i40e_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -129,3 +160,79 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_shaper_profile *
+i40e_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("i40e_tm_shaper_profile",
+ sizeof(struct i40e_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
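For reference, a minimal application-side sketch of registering such a profile
through the generic rte_tm API. The port_id value and the byte rate are
illustrative assumptions; only peak.rate is set, because this driver rejects a
committed rate/size, a peak bucket size and a packet length adjustment.

#include <string.h>
#include <rte_tm.h>

static int
add_peak_only_profile(uint8_t port_id, uint32_t profile_id, uint64_t byte_rate)
{
	struct rte_tm_shaper_params params;
	struct rte_tm_error error;

	/* leave committed rate/size, peak size and pkt_length_adjust at 0 */
	memset(&params, 0, sizeof(params));
	memset(&error, 0, sizeof(error));
	params.peak.rate = byte_rate;	/* bytes per second */

	return rte_tm_shaper_profile_add(port_id, profile_id, &params, &error);
}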
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 04/20] net/i40e: support deleting TM shaper profile
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (2 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 03/20] net/i40e: support adding TM shaper profile Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 05/20] net/i40e: support adding TM node Wenzhuo Lu
` (15 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index a71ff45..233adcf 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -43,10 +43,14 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
+ .shaper_profile_delete = i40e_shaper_profile_del,
};
int
@@ -236,3 +240,35 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
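The matching application-side cleanup, as a hedged sketch (port_id and
profile_id are illustrative). A profile that is still referenced by a node is
rejected with "profile in use", so nodes have to be deleted first.

#include <stdio.h>
#include <rte_tm.h>

static void
remove_profile(uint8_t port_id, uint32_t profile_id)
{
	struct rte_tm_error error = { 0 };

	/* fails while any node still references this profile */
	if (rte_tm_shaper_profile_delete(port_id, profile_id, &error) != 0)
		printf("delete failed: %s (type %d)\n",
		       error.message ? error.message : "unknown", error.type);
}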
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 05/20] net/i40e: support adding TM node
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (3 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 04/20] net/i40e: support deleting " Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 06/20] net/i40e: support deleting " Wenzhuo Lu
` (14 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 28 +++++
drivers/net/i40e/i40e_tm.c | 241 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 269 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index da73d64..34ba3e5 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -636,9 +636,37 @@ struct i40e_tm_shaper_profile {
TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum i40e_tm_node_type {
+ I40E_TM_NODE_TYPE_PORT,
+ I40E_TM_NODE_TYPE_TC,
+ I40E_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct i40e_tm_node {
+ TAILQ_ENTRY(i40e_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct i40e_tm_node *parent;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(i40e_tm_node_list, i40e_tm_node);
+
/* Struct to store all the Traffic Manager configuration. */
struct i40e_tm_conf {
struct i40e_shaper_profile_list shaper_profile_list;
+ struct i40e_tm_node *root; /* root node - port */
+ struct i40e_tm_node_list tc_list; /* node list for all the TCs */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
};
/*
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 233adcf..39874ca 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -46,11 +46,17 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
+ .node_add = i40e_node_add,
};
int
@@ -72,6 +78,11 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+
+ /* initialize node configuration */
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ pf->tm_conf.nb_tc_node = 0;
}
void
@@ -79,6 +90,18 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tc;
+
+ /* clear node configuration */
+ while ((tc = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tc, node);
+ rte_free(tc);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -272,3 +295,221 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_node *
+i40e_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum i40e_tm_node_type *node_type)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type = I40E_TM_NODE_TYPE_MAX;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tm_node;
+ uint16_t tc_nb = 0;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority not supported";
+ return -EINVAL;
+ }
+
+ if (weight) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight not supported";
+ return -EINVAL;
+ }
+
+ /* check if the node ID is already used */
+ if (i40e_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = i40e_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* root node if not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id > I40E_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the unsupported parameters */
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC node */
+ /* check level. Only 2 levels supported. */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != I40E_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the unsupported parameters */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ /* should have a root first */
+ if (!pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "no root yet";
+ return -EINVAL;
+ }
+ if (pf->tm_conf.root->id != parent_node_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent id doesn't belong to the root";
+ return -EINVAL;
+ }
+
+ /* check the TC number */
+ tc_nb = i40e_tc_nb_get(dev);
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+
+ /* add the TC node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = pf->tm_conf.root;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ tm_node->parent->reference_count++;
+ pf->tm_conf.nb_tc_node++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
--
1.9.3
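A hedged sketch of how an application could build the two-level hierarchy this
patch accepts: one root (port) node plus one leaf node per TC. The node IDs and
the two shaper profile IDs are assumptions; both profiles must have been added
beforehand, priority and weight are passed as 0 because this driver version
rejects non-zero values, and the leaf WRED profile is set to NONE because
anything else is rejected.

#include <string.h>
#include <rte_tm.h>

#define APP_ROOT_NODE_ID 1000000	/* illustrative, otherwise unused IDs */
#define APP_TC_NODE_BASE 0

static int
build_port_tc_hierarchy(uint8_t port_id, uint32_t root_profile_id,
			uint32_t tc_profile_id, uint8_t nb_tc)
{
	struct rte_tm_node_params np;
	struct rte_tm_error error;
	uint8_t i;
	int ret;

	memset(&error, 0, sizeof(error));

	/* root (port) node: no parent, non-leaf parameters left at zero */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = root_profile_id;
	ret = rte_tm_node_add(port_id, APP_ROOT_NODE_ID, RTE_TM_NODE_ID_NULL,
			      0, 0, RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
	if (ret != 0)
		return ret;

	/* one TC (leaf) node per traffic class under the root */
	for (i = 0; i < nb_tc; i++) {
		memset(&np, 0, sizeof(np));
		np.shaper_profile_id = tc_profile_id;
		np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
		ret = rte_tm_node_add(port_id, APP_TC_NODE_BASE + i,
				      APP_ROOT_NODE_ID, 0, 0,
				      RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
		if (ret != 0)
			return ret;
	}

	return 0;
}

Separate profiles are used for the root and the TCs here because a later patch
in this series (hierarchy commit) does not accept a non-zero port rate and
non-zero TC rates at the same time.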
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 06/20] net/i40e: support deleting TM node
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (4 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 05/20] net/i40e: support adding TM node Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 07/20] net/i40e: support getting TM node type Wenzhuo Lu
` (13 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 39874ca..c132461 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -51,12 +51,15 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t weight, uint32_t level_id,
struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
+ .node_delete = i40e_node_delete,
};
int
@@ -513,3 +516,54 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == I40E_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ pf->tm_conf.nb_tc_node--;
+
+ return 0;
+}
--
1.9.3
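The reverse operation as a hedged sketch, using the illustrative node IDs from
the sketch above. Leaf (TC) nodes have to go first, since the driver refuses to
delete a node that still has children.

#include <rte_tm.h>

static int
teardown_hierarchy(uint8_t port_id, uint32_t root_id,
		   uint32_t tc_base, uint8_t nb_tc)
{
	struct rte_tm_error error = { 0 };
	uint8_t i;
	int ret;

	/* children first: a node with a non-zero reference count is refused */
	for (i = 0; i < nb_tc; i++) {
		ret = rte_tm_node_delete(port_id, tc_base + i, &error);
		if (ret != 0)
			return ret;
	}

	return rte_tm_node_delete(port_id, root_id, &error);
}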
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 07/20] net/i40e: support getting TM node type
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (5 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 06/20] net/i40e: support deleting " Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 08/20] net/i40e: support getting TM level capability Wenzhuo Lu
` (12 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_type_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c132461..e8c41ca 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -53,6 +53,8 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -60,6 +62,7 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
+ .node_type_get = i40e_node_type_get,
};
int
@@ -567,3 +570,35 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (tm_node->reference_count)
+ *is_leaf = false;
+ else
+ *is_leaf = true;
+
+ return 0;
+}
--
1.9.3
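A short hedged usage sketch; note that in this implementation "leaf" simply
means the node currently has no children, so the root reports as a leaf until a
TC node is attached to it. The wrapper name is an assumption.

#include <rte_tm.h>

static int
node_is_leaf(uint8_t port_id, uint32_t node_id, int *is_leaf)
{
	struct rte_tm_error error = { 0 };

	return rte_tm_node_type_get(port_id, node_id, is_leaf, &error);
}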
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 08/20] net/i40e: support getting TM level capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (6 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 07/20] net/i40e: support getting TM node type Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 09/20] net/i40e: support getting TM node capability Wenzhuo Lu
` (11 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_level_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 67 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index e8c41ca..3ec5777 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -55,6 +55,10 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -63,6 +67,7 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
+ .level_capabilities_get = i40e_level_capabilities_get,
};
int
@@ -602,3 +607,65 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint16_t nb_tc = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= I40E_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ nb_tc = i40e_tc_nb_get(dev);
+
+ /* root node */
+ if (level_id == I40E_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = false;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC node */
+ cap->n_nodes_max = nb_tc;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = nb_tc;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->leaf.shaper_private_rate_max = 5000000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
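A hedged query sketch. With the two-level i40e hierarchy, level 0 is the port
and level 1 holds the TC leaves; both levels report a private shaper ceiling of
5,000,000,000 bytes/s, i.e. the 40 Gbit/s line rate.

#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

static void
dump_level_caps(uint8_t port_id)
{
	struct rte_tm_level_capabilities cap;
	struct rte_tm_error error;
	uint32_t level;

	for (level = 0; level < 2; level++) {
		memset(&cap, 0, sizeof(cap));
		memset(&error, 0, sizeof(error));
		if (rte_tm_level_capabilities_get(port_id, level,
						  &cap, &error) != 0)
			continue;
		printf("level %u: max nodes %u, max leaves %u\n", level,
		       cap.n_nodes_max, cap.n_nodes_leaf_max);
	}
}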
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 09/20] net/i40e: support getting TM node capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (7 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 08/20] net/i40e: support getting TM level capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 10/20] net/i40e: support committing TM hierarchy Wenzhuo Lu
` (10 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 3ec5777..8d35ff6 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -59,6 +59,10 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -68,6 +72,7 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
+ .node_capabilities_get = i40e_node_capabilities_get,
};
int
@@ -669,3 +674,53 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == I40E_TM_NODE_TYPE_PORT) {
+ cap->nonleaf.sched_n_children_max = i40e_tc_nb_get(dev);
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ } else {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
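One hedged way an application could use the per-node capability: clamp a
requested shaper rate to the ceiling the PMD reports. The helper name and its
fallback behaviour are assumptions.

#include <string.h>
#include <rte_tm.h>

static uint64_t
clamp_rate_to_node_cap(uint8_t port_id, uint32_t node_id, uint64_t rate)
{
	struct rte_tm_node_capabilities cap;
	struct rte_tm_error error;

	memset(&cap, 0, sizeof(cap));
	memset(&error, 0, sizeof(error));
	if (rte_tm_node_capabilities_get(port_id, node_id, &cap, &error) != 0)
		return rate;	/* keep the request if the query fails */

	if (cap.shaper_private_rate_max && rate > cap.shaper_private_rate_max)
		rate = cap.shaper_private_rate_max;
	return rate;
}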
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 10/20] net/i40e: support committing TM hierarchy
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (8 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 09/20] net/i40e: support getting TM node capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 11/20] net/ixgbe: support getting TM ops Wenzhuo Lu
` (9 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_hierarchy_commit.
When this API is called, the driver applies the TM
configuration to the hardware.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.h | 9 ++++
drivers/net/i40e/i40e_tm.c | 105 ++++++++++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 ----
3 files changed, 114 insertions(+), 9 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 34ba3e5..741cf92 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -252,6 +252,15 @@ enum i40e_flxpld_layer_idx {
I40E_INSET_FLEX_PAYLOAD_W5 | I40E_INSET_FLEX_PAYLOAD_W6 | \
I40E_INSET_FLEX_PAYLOAD_W7 | I40E_INSET_FLEX_PAYLOAD_W8)
+/* The max bandwidth of i40e is 40Gbps. */
+#define I40E_QOS_BW_MAX 40000
+/* The bandwidth should be the multiple of 50Mbps. */
+#define I40E_QOS_BW_GRANULARITY 50
+/* The min bandwidth weight is 1. */
+#define I40E_QOS_BW_WEIGHT_MIN 1
+/* The max bandwidth weight is 127. */
+#define I40E_QOS_BW_WEIGHT_MAX 127
+
/**
* The overhead from MTU to max frame size.
* Considering QinQ packet, the VLAN tag needs to be counted twice.
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 8d35ff6..7a1bf52 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -63,6 +63,9 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -73,6 +76,7 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
.node_capabilities_get = i40e_node_capabilities_get,
+ .hierarchy_commit = i40e_hierarchy_commit,
};
int
@@ -724,3 +728,104 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node *tm_node;
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw;
+ struct i40e_aqc_configure_vsi_ets_sla_bw_data tc_bw;
+ uint64_t bw;
+ uint8_t tc_map;
+ int ret;
+ int i;
+
+ if (!error)
+ return -EINVAL;
+
+ /* check the setting */
+ if (!pf->tm_conf.root)
+ return 0;
+
+ vsi = pf->main_vsi;
+ hw = I40E_VSI_TO_HW(vsi);
+
+ /**
+ * Don't support bandwidth control for port and TCs in parallel.
+ * If the port has a max bandwidth, the TCs should have none.
+ */
+ /* port */
+ bw = pf->tm_conf.root->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* check if any TC has a max bandwidth */
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port and TC max bandwidth"
+ " in parallel";
+ goto fail_clear;
+ }
+ }
+
+ /* interpret Bps to 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ /* set the max bandwidth */
+ ret = i40e_aq_config_vsi_bw_limit(hw, vsi->seid,
+ (uint16_t)bw, 0, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set port max bandwidth";
+ goto fail_clear;
+ }
+
+ return 0;
+ }
+
+ /* TC */
+ memset(&tc_bw, 0, sizeof(tc_bw));
+ tc_bw.tc_valid_bits = vsi->enabled_tc;
+ tc_map = vsi->enabled_tc;
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ i = 0;
+ while (i < I40E_MAX_TRAFFIC_CLASS && !(tc_map & BIT_ULL(i)))
+ i++;
+ if (i >= I40E_MAX_TRAFFIC_CLASS) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "cannot find the TC";
+ goto fail_clear;
+ }
+ tc_map &= ~BIT_ULL(i);
+
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (!bw)
+ continue;
+
+ /* interpret Bps to 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ tc_bw.tc_bw_credits[i] = rte_cpu_to_le_16((uint16_t)bw);
+ }
+
+ ret = i40e_aq_config_vsi_ets_sla_bw_limit(hw, vsi->seid, &tc_bw, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set TC max bandwidth";
+ goto fail_clear;
+ }
+
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ i40e_tm_conf_uninit(dev);
+ i40e_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index f7ce62b..4f94678 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -40,15 +40,6 @@
#include "i40e_rxtx.h"
#include "rte_pmd_i40e.h"
-/* The max bandwidth of i40e is 40Gbps. */
-#define I40E_QOS_BW_MAX 40000
-/* The bandwidth should be the multiple of 50Mbps. */
-#define I40E_QOS_BW_GRANULARITY 50
-/* The min bandwidth weight is 1. */
-#define I40E_QOS_BW_WEIGHT_MIN 1
-/* The max bandwidth weight is 127. */
-#define I40E_QOS_BW_WEIGHT_MAX 127
-
int
rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf)
{
--
1.9.3
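To make the conversion in the commit handler concrete: a shaper profile with
peak.rate = 1,250,000,000 bytes/s is 10,000 Mbit/s, which the driver turns into
10000 / 50 = 200 units of the 50 Mbit/s firmware granularity. A hedged
application-side sketch of triggering the commit (the wrapper name is an
assumption):

#include <rte_tm.h>

static int
commit_tm(uint8_t port_id)
{
	struct rte_tm_error error = { 0 };

	/*
	 * clear_on_fail = 1: on failure the PMD throws away the whole
	 * software TM configuration so the application can rebuild it.
	 */
	return rte_tm_hierarchy_commit(port_id, 1, &error);
}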
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 11/20] net/ixgbe: support getting TM ops
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (9 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 10/20] net/i40e: support committing TM hierarchy Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 12/20] net/ixgbe: support getting TM capability Wenzhuo Lu
` (8 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
To support the QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops, with no functions
implemented yet.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 50 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 54 insertions(+)
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 5529d81..0595dcf 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -124,6 +124,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
# install this header file
SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index aeaa432..ab70c1c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -608,6 +608,7 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
.l2_tunnel_offload_set = ixgbe_dev_l2_tunnel_offload_set,
.udp_tunnel_port_add = ixgbe_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ixgbe_dev_udp_tunnel_port_del,
+ .tm_ops_get = ixgbe_tm_ops_get,
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b576a6f..7e99fd3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -39,6 +39,7 @@
#include "ixgbe_bypass.h"
#include <rte_time.h>
#include <rte_hash.h>
+#include <rte_tm_driver.h>
/* need update link, bit flag */
#define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -671,6 +672,7 @@ int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
+int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
new file mode 100644
index 0000000..0a222a1
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -0,0 +1,50 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "ixgbe_ethdev.h"
+
+const struct rte_tm_ops ixgbe_tm_ops = {
+ NULL,
+};
+
+int
+ixgbe_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ixgbe_tm_ops;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 12/20] net/ixgbe: support getting TM capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (10 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 11/20] net/ixgbe: support getting TM ops Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 13/20] net/ixgbe: support adding TM shaper profile Wenzhuo Lu
` (7 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_capabilities_get.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 90 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 89 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 0a222a1..77066b7 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -33,8 +33,12 @@
#include "ixgbe_ethdev.h"
+static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops ixgbe_tm_ops = {
- NULL,
+ .capabilities_get = ixgbe_tm_capabilities_get,
};
int
@@ -48,3 +52,87 @@
return 0;
}
+
+static inline uint8_t
+ixgbe_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *eth_conf;
+ uint8_t nb_tcs = 0;
+
+ eth_conf = &dev->data->dev_conf;
+ if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
+ } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
+ ETH_32_POOLS)
+ nb_tcs = ETH_4_TCS;
+ else
+ nb_tcs = ETH_8_TCS;
+ } else {
+ nb_tcs = 1;
+ }
+
+ return nb_tcs;
+}
+
+static int
+ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint8_t nb_tcs;
+ uint8_t nb_queues;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ nb_tcs = ixgbe_tc_nb_get(dev);
+ nb_queues = dev->data->nb_tx_queues;
+ /* port + TCs + queues */
+ cap->n_nodes_max = 1 + nb_tcs + nb_queues;
+ cap->n_levels_max = 3;
+ cap->non_leaf_nodes_identical = 0;
+ cap->leaf_nodes_identical = 0;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = (nb_tcs > nb_queues) ? nb_tcs : nb_queues;
+ cap->sched_sp_n_priorities_max = 0;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ cap->sched_wfq_weight_max = 0;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+
+ /**
+ * not supported parameters are 0, below,
+ * shaper_pkt_length_adjust_min
+ * shaper_pkt_length_adjust_max
+ * cman_wred_context_n_max
+ * cman_wred_context_private_n_max
+ * cman_wred_context_shared_n_max
+ * cman_wred_context_shared_n_nodes_per_context_max
+ * cman_wred_context_shared_n_contexts_per_node_max
+ * mark_vlan_dei_supported
+ * mark_ip_ecn_tcp_supported
+ * mark_ip_ecn_sctp_supported
+ * mark_ip_dscp_supported
+ * stats_mask
+ */
+
+ return 0;
+}
--
1.9.3
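Note that the reported numbers depend on how the port was configured:
ixgbe_tc_nb_get() reads txmode.mq_mode, so a port left in the default mode
reports a single TC. A hedged sketch of configuring DCB before querying; the
queue counts and TC count are illustrative, and a complete DCB setup also needs
matching Rx-side configuration that is omitted here.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_tm.h>

static int
query_caps_with_dcb(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;
	struct rte_tm_capabilities cap;
	struct rte_tm_error error;
	int ret;

	memset(&conf, 0, sizeof(conf));
	conf.txmode.mq_mode = ETH_MQ_TX_DCB;
	conf.tx_adv_conf.dcb_tx_conf.nb_tcs = ETH_4_TCS;

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	if (ret != 0)
		return ret;

	memset(&cap, 0, sizeof(cap));
	memset(&error, 0, sizeof(error));
	return rte_tm_capabilities_get(port_id, &cap, &error);
}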
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 13/20] net/ixgbe: support adding TM shaper profile
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (11 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 12/20] net/ixgbe: support getting TM capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 14/20] net/ixgbe: support deleting " Wenzhuo Lu
` (6 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 6 +++
drivers/net/ixgbe/ixgbe_ethdev.h | 21 ++++++++
drivers/net/ixgbe/ixgbe_tm.c | 111 +++++++++++++++++++++++++++++++++++++++
3 files changed, 138 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ab70c1c..26eaece 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1360,6 +1360,9 @@ struct rte_ixgbe_xstats_name_off {
/* initialize bandwidth configuration info */
memset(bw_conf, 0, sizeof(struct ixgbe_bw_conf));
+ /* initialize Traffic Manager configuration */
+ ixgbe_tm_conf_init(eth_dev);
+
return 0;
}
@@ -1413,6 +1416,9 @@ struct rte_ixgbe_xstats_name_off {
/* clear all the filters list */
ixgbe_filterlist_flush();
+ /* Remove all Traffic Manager configuration */
+ ixgbe_tm_conf_uninit(eth_dev);
+
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 7e99fd3..b647702 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -435,6 +435,21 @@ struct ixgbe_bw_conf {
uint8_t tc_num; /* Number of TCs. */
};
+/* Struct to store Traffic Manager shaper profile. */
+struct ixgbe_tm_shaper_profile {
+ TAILQ_ENTRY(ixgbe_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+
+/* The configuration of Traffic Manager */
+struct ixgbe_tm_conf {
+ struct ixgbe_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -463,6 +478,7 @@ struct ixgbe_adapter {
struct rte_timecounter systime_tc;
struct rte_timecounter rx_tstamp_tc;
struct rte_timecounter tx_tstamp_tc;
+ struct ixgbe_tm_conf tm_conf;
};
#define IXGBE_DEV_TO_PCI(eth_dev) \
@@ -513,6 +529,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_BW_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->bw_conf)
+#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
+ (&((struct ixgbe_adapter *)adapter)->tm_conf)
+
/*
* RX/TX function prototypes
*/
@@ -673,6 +692,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
+void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 77066b7..89e795a 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -31,14 +31,21 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "ixgbe_ethdev.h"
static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
+ .shaper_profile_add = ixgbe_shaper_profile_add,
};
int
@@ -53,6 +60,32 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+ixgbe_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&tm_conf->shaper_profile_list);
+}
+
+void
+ixgbe_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&tm_conf->shaper_profile_list))) {
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint8_t
ixgbe_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -136,3 +169,81 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_shaper_profile *
+ixgbe_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_shaper_profile_list *shaper_profile_list =
+ &tm_conf->shaper_profile_list;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ixgbe_tm_shaper_profile",
+ sizeof(struct ixgbe_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
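rte_tm shaper rates are expressed in bytes per second, which is why the ixgbe
capability patch reports a ceiling of 1,250,000,000 (10 Gbit/s divided by 8).
A small hedged helper an application might use before calling
rte_tm_shaper_profile_add(); the function name is an assumption.

#include <stdint.h>

/* convert a rate in Mbit/s to the bytes-per-second value rte_tm expects */
static inline uint64_t
mbps_to_tm_rate(uint32_t mbps)
{
	return (uint64_t)mbps * 1000 * 1000 / 8;
}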
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 14/20] net/ixgbe: support deleting TM shaper profile
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (12 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 13/20] net/ixgbe: support adding TM shaper profile Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
` (5 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_shaper_profile_delete.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 89e795a..b3b1acf 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -42,10 +42,14 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
+ .shaper_profile_delete = ixgbe_shaper_profile_del,
};
int
@@ -247,3 +251,36 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 15/20] net/ixgbe: support adding TM node
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (13 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 14/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 16/20] net/ixgbe: support deleting " Wenzhuo Lu
` (4 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_add.
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.h | 36 ++++
drivers/net/ixgbe/ixgbe_tm.c | 386 +++++++++++++++++++++++++++++++++++++++
2 files changed, 422 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b647702..118c271 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -445,9 +445,45 @@ struct ixgbe_tm_shaper_profile {
TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum ixgbe_tm_node_type {
+ IXGBE_TM_NODE_TYPE_PORT,
+ IXGBE_TM_NODE_TYPE_TC,
+ IXGBE_TM_NODE_TYPE_QUEUE,
+ IXGBE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ixgbe_tm_node {
+ TAILQ_ENTRY(ixgbe_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ uint16_t no;
+ struct ixgbe_tm_node *parent;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(ixgbe_tm_node_list, ixgbe_tm_node);
+
/* The configuration of Traffic Manager */
struct ixgbe_tm_conf {
struct ixgbe_shaper_profile_list shaper_profile_list;
+ struct ixgbe_tm_node *root; /* root node - port */
+ struct ixgbe_tm_node_list tc_list; /* node list for all the TCs */
+ struct ixgbe_tm_node_list queue_list; /* node list for all the queues */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
+ /**
+ * The number of added queue nodes.
+ * It should be no more than the queue number of this port.
+ */
+ uint32_t nb_queue_node;
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index b3b1acf..82b3b20 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -45,11 +45,17 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
+ .node_add = ixgbe_node_add,
};
int
@@ -72,6 +78,13 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&tm_conf->shaper_profile_list);
+
+ /* initialize node configuration */
+ tm_conf->root = NULL;
+ TAILQ_INIT(&tm_conf->queue_list);
+ TAILQ_INIT(&tm_conf->tc_list);
+ tm_conf->nb_tc_node = 0;
+ tm_conf->nb_queue_node = 0;
}
void
@@ -80,6 +93,23 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&tm_conf->queue_list))) {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&tm_conf->tc_list))) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_tc_node = 0;
+ if (tm_conf->root) {
+ rte_free(tm_conf->root);
+ tm_conf->root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -284,3 +314,359 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_node *
+ixgbe_tm_node_search(struct rte_eth_dev *dev, uint32_t node_id,
+ enum ixgbe_tm_node_type *node_type)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+
+ if (tm_conf->root && tm_conf->root->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_PORT;
+ return tm_conf->root;
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static void
+ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
+ uint16_t *base, uint16_t *nb)
+{
+ uint8_t nb_tcs = ixgbe_tc_nb_get(dev);
+ struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
+ uint16_t vf_num = pci_dev->max_vfs;
+
+ *base = 0;
+ *nb = 0;
+
+ /* VT on */
+ if (vf_num) {
+ /* no DCB */
+ if (nb_tcs == 1) {
+ if (vf_num >= ETH_32_POOLS) {
+ *nb = 2;
+ *base = vf_num * 2;
+ } else if (vf_num >= ETH_16_POOLS) {
+ *nb = 4;
+ *base = vf_num * 4;
+ } else {
+ *nb = 8;
+ *base = vf_num * 8;
+ }
+ } else {
+ /* DCB */
+ *nb = 1;
+ *base = vf_num * nb_tcs + tc_node_no;
+ }
+ } else {
+ /* VT off */
+ if (nb_tcs == ETH_8_TCS) {
+ switch (tc_node_no) {
+ case 0:
+ *base = 0;
+ *nb = 32;
+ break;
+ case 1:
+ *base = 32;
+ *nb = 32;
+ break;
+ case 2:
+ *base = 64;
+ *nb = 16;
+ break;
+ case 3:
+ *base = 80;
+ *nb = 16;
+ break;
+ case 4:
+ *base = 96;
+ *nb = 8;
+ break;
+ case 5:
+ *base = 104;
+ *nb = 8;
+ break;
+ case 6:
+ *base = 112;
+ *nb = 8;
+ break;
+ case 7:
+ *base = 120;
+ *nb = 8;
+ break;
+ default:
+ return;
+ }
+ } else {
+ switch (tc_node_no) {
+ /**
+ * If no VF and no DCB, only 64 queues can be used.
+ * That case is also covered by this "case 0".
+ */
+ case 0:
+ *base = 0;
+ *nb = 64;
+ break;
+ case 1:
+ *base = 64;
+ *nb = 32;
+ break;
+ case 2:
+ *base = 96;
+ *nb = 16;
+ break;
+ case 3:
+ *base = 112;
+ *nb = 16;
+ break;
+ default:
+ return;
+ }
+ }
+ }
+}
+
+static int
+ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ enum ixgbe_tm_node_type parent_node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+ struct ixgbe_tm_node *parent_node;
+ uint8_t nb_tcs;
+ uint16_t q_base = 0;
+ uint16_t q_nb = 0;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority not supported";
+ return -EINVAL;
+ }
+
+ if (weight) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight not supported";
+ return -EINVAL;
+ }
+
+ /* check if the node ID is already used */
+ if (ixgbe_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = ixgbe_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* root node if it does not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id > IXGBE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the unsupported parameters */
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (tm_conf->root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->no = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ tm_conf->root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the unsupported parameters */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ /* check the parent node */
+ parent_node = ixgbe_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != IXGBE_TM_NODE_TYPE_PORT &&
+ parent_node_type != IXGBE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not port or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ /* check TC number */
+ nb_tcs = ixgbe_tc_nb_get(dev);
+ if (tm_conf->nb_tc_node >= nb_tcs) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check queue number */
+ if (tm_conf->nb_queue_node >= dev->data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+
+ ixgbe_queue_base_nb_get(dev, parent_node->no, &q_base, &q_nb);
+ if (parent_node->reference_count >= q_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "more queues than the TC supports";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ tm_node->no = parent_node->reference_count;
+ TAILQ_INSERT_TAIL(&tm_conf->tc_list,
+ tm_node, node);
+ tm_conf->nb_tc_node++;
+ } else {
+ tm_node->no = q_base + parent_node->reference_count;
+ TAILQ_INSERT_TAIL(&tm_conf->queue_list,
+ tm_node, node);
+ tm_conf->nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
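The ixgbe_node_add() above accepts a three-level hierarchy: one root (port) node, TC nodes under the port, and queue nodes under each TC. A minimal application-side sketch of building such a hierarchy, assuming the rte_tm prototypes from the QoS TM patch set this series depends on (the node ids 1000/1100, the shaper profile id and the helper name are hypothetical; error strings are elided):
#include <stdint.h>
#include <string.h>
#include <rte_tm.h>

/* hypothetical helper: port -> TC 0 -> Tx queue 0 on an ixgbe port */
static int
build_min_hierarchy(uint8_t port_id, uint32_t shaper_id)
{
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	int ret;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = shaper_id;
	/* root (port) node: no parent; this driver requires priority 0 and weight 0 */
	ret = rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL, 0, 0,
			      RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (ret)
		return ret;
	/* TC and queue nodes are validated with the leaf parameters: disable WRED */
	np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
	ret = rte_tm_node_add(port_id, 1100, 1000, 0, 0,
			      RTE_TM_NODE_LEVEL_ID_ANY, &np, &err); /* TC 0 */
	if (ret)
		return ret;
	return rte_tm_node_add(port_id, 0, 1100, 0, 0,
			       RTE_TM_NODE_LEVEL_ID_ANY, &np, &err); /* queue 0 */
}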
* [dpdk-dev] [PATCH v2 16/20] net/ixgbe: support deleting TM node
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (14 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
` (3 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_delete.
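A minimal application-side sketch of the call (port_id and node_id are hypothetical; the rte_tm prototypes are assumed from the QoS TM patch set this series depends on):
struct rte_tm_error err;
int ret;

ret = rte_tm_node_delete(port_id, node_id, &err);
if (ret) {
	/* err.message is e.g. "cannot delete a node which has children" */
}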
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 82b3b20..f2ed607 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -50,12 +50,15 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t weight, uint32_t level_id,
struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
+ .node_delete = ixgbe_node_delete,
};
int
@@ -670,3 +673,60 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ tm_conf->root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ if (node_type == IXGBE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ tm_conf->nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ tm_conf->nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 17/20] net/ixgbe: support getting TM node type
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (15 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 16/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
` (2 subsequent siblings)
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_type_get.
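A minimal usage sketch (port_id and node_id are hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_error err;
int is_leaf = 0;

if (rte_tm_node_type_get(port_id, node_id, &is_leaf, &err) == 0 && is_leaf) {
	/* node_id refers to a queue (leaf) node */
}
Note that is_leaf is derived from reference_count in the implementation, so a TC node that has no queue node attached yet is also reported as a leaf.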
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index f2ed607..010ceac 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -52,6 +52,8 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -59,6 +61,7 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
+ .node_type_get = ixgbe_node_type_get,
};
int
@@ -730,3 +733,35 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ixgbe_tm_node_type node_type;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (tm_node->reference_count)
+ *is_leaf = false;
+ else
+ *is_leaf = true;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 18/20] net/ixgbe: support getting TM level capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (16 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_level_capabilities_get.
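A minimal usage sketch walking the three levels exposed by this driver (port_id is hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_level_capabilities lcap;
struct rte_tm_error err;
uint32_t level;

/* level 0 = port, 1 = TC, 2 = queue */
for (level = 0; level < 3; level++) {
	if (rte_tm_level_capabilities_get(port_id, level, &lcap, &err))
		break;
	/* e.g. lcap.n_nodes_max, lcap.n_nodes_leaf_max, shaper rate limits */
}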
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 78 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 78 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 010ceac..6ff7026 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -54,6 +54,10 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -62,6 +66,7 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
+ .level_capabilities_get = ixgbe_level_capabilities_get,
};
int
@@ -765,3 +770,76 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ uint8_t nb_tc = 0;
+ uint8_t nb_queue = 0;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= IXGBE_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ nb_tc = ixgbe_tc_nb_get(dev);
+ nb_queue = dev->data->nb_tx_queues;
+
+ /* root node */
+ if (level_id == IXGBE_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = false;
+ cap->leaf_nodes_identical = false;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->nonleaf.shaper_private_rate_max = 1250000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = nb_tc;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ if (level_id == IXGBE_TM_NODE_TYPE_TC) {
+ /* TC */
+ cap->n_nodes_max = nb_tc;
+ cap->n_nodes_nonleaf_max = nb_tc;
+ cap->n_nodes_leaf_max = nb_tc;
+ cap->non_leaf_nodes_identical = true;
+ } else {
+ /* queue */
+ cap->n_nodes_max = nb_queue;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = nb_queue;
+ cap->non_leaf_nodes_identical = false;
+ }
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->leaf.shaper_private_rate_max = 1250000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 19/20] net/ixgbe: support getting TM node capability
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (17 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_capabilities_get.
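A minimal usage sketch (port_id and node_id are hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_node_capabilities ncap;
struct rte_tm_error err;

if (rte_tm_node_capabilities_get(port_id, node_id, &ncap, &err) == 0) {
	/* ncap.shaper_private_rate_max is 1250000000 (10 Gbps in bytes per second);
	 * for non-leaf nodes ncap.nonleaf.sched_n_children_max is filled in */
}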
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 6ff7026..422b8c5 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -58,6 +58,10 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -67,6 +71,7 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
+ .node_capabilities_get = ixgbe_node_capabilities_get,
};
int
@@ -843,3 +848,58 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ enum ixgbe_tm_node_type node_type;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = false;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ } else {
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT)
+ cap->nonleaf.sched_n_children_max =
+ ixgbe_tc_nb_get(dev);
+ else
+ cap->nonleaf.sched_n_children_max =
+ dev->data->nb_tx_queues;
+ cap->nonleaf.sched_sp_n_priorities_max = 0;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 0;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 20/20] net/ixgbe: support committing TM hierarchy
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
` (18 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
@ 2017-06-19 5:43 ` Wenzhuo Lu
19 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-19 5:43 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, jingjing.wu, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_hierarchy_commit.
When calling this API, the driver tries to enable
the TM configuration on HW.
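Only the queue-level peak rate is programmed (via ixgbe_set_queue_rate_limit), so the port and TC shaper profiles must carry a zero peak rate at commit time. A minimal usage sketch (port_id is hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_error err;
int ret;

/* after all shaper profiles and nodes have been added, before starting the port */
ret = rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
if (ret) {
	/* with clear_on_fail set, the driver wipes its TM state on failure,
	 * so the whole hierarchy must be rebuilt before committing again */
}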
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 8 ++---
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 65 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 70 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 26eaece..6c11558 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -302,9 +302,6 @@ static void ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
static void ixgbe_configure_msix(struct rte_eth_dev *dev);
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate);
-
static int ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
uint32_t index, uint32_t pool);
@@ -5605,8 +5602,9 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
IXGBE_WRITE_REG(hw, IXGBE_EIAC, mask);
}
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate)
+int
+ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t rf_dec, rf_int;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 118c271..e226542 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -730,6 +730,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
+int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t tx_rate);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 422b8c5..f33f1d6 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -62,6 +62,9 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -72,6 +75,7 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
.node_capabilities_get = ixgbe_node_capabilities_get,
+ .hierarchy_commit = ixgbe_hierarchy_commit,
};
int
@@ -903,3 +907,64 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+ uint64_t bw;
+ int ret;
+
+ if (!error)
+ return -EINVAL;
+
+ /* check the setting */
+ if (!tm_conf->root)
+ return 0;
+
+ /* port max bandwidth not supported yet */
+ if (tm_conf->root->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port max bandwidth";
+ goto fail_clear;
+ }
+
+ /* HW does not support TC max bandwidth */
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no TC max bandwidth";
+ goto fail_clear;
+ }
+ }
+
+ /* queue max bandwidth */
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* convert bytes per second to Mbps */
+ bw = bw * 8 / 1000 / 1000;
+ ret = ixgbe_set_queue_rate_limit(dev, tm_node->no, bw);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message =
+ "failed to set queue max bandwidth";
+ goto fail_clear;
+ }
+ }
+ }
+
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ixgbe_tm_conf_uninit(dev);
+ ixgbe_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe
2017-05-27 8:17 [dpdk-dev] [PATCH 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (21 preceding siblings ...)
2017-06-19 5:43 ` [dpdk-dev] [PATCH v2 " Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 01/20] net/i40e: support getting TM ops Wenzhuo Lu
` (20 more replies)
22 siblings, 21 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Implement the traffic manager APIs on i40e and ixgbe.
This patch set is based on the patch set,
"ethdev: abstraction layer for QoS traffic management"
http://dpdk.org/dev/patchwork/patch/25275/
http://dpdk.org/dev/patchwork/patch/25276/
Series Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Series-Acked-by: Cristian Dumitrescu <Cristian.Dumitrescu@intel.com>
v2:
- reworked based on the new TM APIs.
- fixed a queue mapping issue on ixgbe.
v3:
- supported adding queue node on i40e.
- some parameters of the TM APIs were misunderstood; corrected them.
- added more comments about not implemented features.
- supported commit flag check.
Wenzhuo Lu (20):
net/i40e: support getting TM ops
net/i40e: support getting TM capability
net/i40e: support adding TM shaper profile
net/i40e: support deleting TM shaper profile
net/i40e: support adding TM node
net/i40e: support deleting TM node
net/i40e: support getting TM node type
net/i40e: support getting TM level capability
net/i40e: support getting TM node capability
net/i40e: support committing TM hierarchy
net/ixgbe: support getting TM ops
net/ixgbe: support getting TM capability
net/ixgbe: support adding TM shaper profile
net/ixgbe: support deleting TM shaper profile
net/ixgbe: support adding TM node
net/ixgbe: support deleting TM node
net/ixgbe: support getting TM node type
net/ixgbe: support getting TM level capability
net/ixgbe: support getting TM node capability
net/ixgbe: support committing TM hierarchy
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 15 +
drivers/net/i40e/i40e_ethdev.h | 75 +++
drivers/net/i40e/i40e_tm.c | 974 +++++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 -
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 27 +-
drivers/net/ixgbe/ixgbe_ethdev.h | 72 +++
drivers/net/ixgbe/ixgbe_tm.c | 1041 ++++++++++++++++++++++++++++++++++++++
9 files changed, 2201 insertions(+), 14 deletions(-)
create mode 100644 drivers/net/i40e/i40e_tm.c
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 01/20] net/i40e: support getting TM ops
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability Wenzhuo Lu
` (19 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
To support QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops but without any function
implemented.
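The only entry point so far is the ops getter. A minimal sketch of the retrieval contract it implements, i.e. how the ethdev TM layer is expected to pull the ops table out of the PMD (illustrative only; dev stands for the port's rte_eth_dev instance):
const struct rte_tm_ops *ops = NULL;
int ret;

/* i40e_tm_ops_get() stores &i40e_tm_ops through the void * argument */
ret = i40e_tm_ops_get(dev, &ops);
if (ret == 0 && ops != NULL) {
	/* ops->capabilities_get, ops->node_add, ... can now be dispatched */
}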
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/Makefile | 1 +
drivers/net/i40e/i40e_ethdev.c | 1 +
drivers/net/i40e/i40e_ethdev.h | 2 ++
drivers/net/i40e/i40e_tm.c | 51 ++++++++++++++++++++++++++++++++++++++++++
4 files changed, 55 insertions(+)
create mode 100644 drivers/net/i40e/i40e_tm.c
diff --git a/drivers/net/i40e/Makefile b/drivers/net/i40e/Makefile
index 56f210d..33be5f9 100644
--- a/drivers/net/i40e/Makefile
+++ b/drivers/net/i40e/Makefile
@@ -109,6 +109,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_flow.c
SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += rte_pmd_i40e.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_tm.c
# vector PMD driver needs SSE4.1 support
ifeq ($(findstring RTE_MACHINE_CPUFLAG_SSE4_1,$(CFLAGS)),)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c18a93b..050d7f7 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -515,6 +515,7 @@ static int i40e_sw_tunnel_filter_insert(struct i40e_pf *pf,
.get_eeprom = i40e_get_eeprom,
.mac_addr_set = i40e_set_default_mac_addr,
.mtu_set = i40e_dev_mtu_set,
+ .tm_ops_get = i40e_tm_ops_get,
};
/* store statistics names and its offset in stats structure */
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2ff8282..e5301ee 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -39,6 +39,7 @@
#include <rte_kvargs.h>
#include <rte_hash.h>
#include <rte_flow_driver.h>
+#include <rte_tm_driver.h>
#define I40E_VLAN_TAG_SIZE 4
@@ -892,6 +893,7 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
struct i40e_macvlan_filter *filter,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
+int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
new file mode 100644
index 0000000..2f4c866
--- /dev/null
+++ b/drivers/net/i40e/i40e_tm.c
@@ -0,0 +1,51 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "base/i40e_prototype.h"
+#include "i40e_ethdev.h"
+
+const struct rte_tm_ops i40e_tm_ops = {
+ NULL,
+};
+
+int
+i40e_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &i40e_tm_ops;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 01/20] net/i40e: support getting TM ops Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-07-09 19:31 ` Thomas Monjalon
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 03/20] net/i40e: support adding TM shaper profile Wenzhuo Lu
` (18 subsequent siblings)
20 siblings, 1 reply; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_capabilities_get.
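A minimal usage sketch (port_id is hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_capabilities cap;
struct rte_tm_error err;

if (rte_tm_capabilities_get(port_id, &cap, &err) == 0) {
	/* cap.n_levels_max is 3 (port, TC, queue) on this driver;
	 * cap.shaper_private_rate_max is 5000000000 (40 Gbps in bytes per second) */
}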
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 84 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 83 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 2f4c866..3077472 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -34,8 +34,12 @@
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
+static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops i40e_tm_ops = {
- NULL,
+ .capabilities_get = i40e_tm_capabilities_get,
};
int
@@ -49,3 +53,81 @@
return 0;
}
+
+static inline uint16_t
+i40e_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_vsi *main_vsi = pf->main_vsi;
+ uint16_t sum = 0;
+ int i;
+
+ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+ if (main_vsi->enabled_tc & BIT_ULL(i))
+ sum++;
+ }
+
+ return sum;
+}
+
+static int
+i40e_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ /**
+ * support port + TCs + queues
+ * This shows the maximum capability, not the current configuration.
+ */
+ cap->n_nodes_max = 1 + I40E_MAX_TRAFFIC_CLASS + hw->func_caps.num_tx_qp;
+ cap->n_levels_max = 3; /* port, TC, queue */
+ cap->non_leaf_nodes_identical = 1;
+ cap->leaf_nodes_identical = 1;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = hw->func_caps.num_tx_qp;
+ /**
+ * HW supports SP, but there is no plan to support it now.
+ * So, all the nodes should have the same priority.
+ */
+ cap->sched_sp_n_priorities_max = 1;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ /**
+ * SW only supports fair round robin now.
+ * So, all the nodes should have the same weight.
+ */
+ cap->sched_wfq_weight_max = 1;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+ cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
+ cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
+ cap->cman_wred_context_n_max = 0;
+ cap->cman_wred_context_private_n_max = 0;
+ cap->cman_wred_context_shared_n_max = 0;
+ cap->cman_wred_context_shared_n_nodes_per_context_max = 0;
+ cap->cman_wred_context_shared_n_contexts_per_node_max = 0;
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 03/20] net/i40e: support adding TM shaper profile
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 01/20] net/i40e: support getting TM ops Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 04/20] net/i40e: support deleting " Wenzhuo Lu
` (17 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_shaper_profile_add.
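Only the peak rate is honoured by the parameter check below; the committed rate, committed bucket size, peak bucket size and packet length adjustment must be left at zero. A minimal usage sketch (port_id and the profile id are hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_shaper_params sp;
struct rte_tm_error err;
int ret;

memset(&sp, 0, sizeof(sp)); /* committed.*, peak.size, pkt_length_adjust stay 0 */
sp.peak.rate = 125000000;   /* 1 Gbps expressed in bytes per second */
ret = rte_tm_shaper_profile_add(port_id, 1 /* profile id */, &sp, &err);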
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 6 +++
drivers/net/i40e/i40e_ethdev.h | 18 +++++++
drivers/net/i40e/i40e_tm.c | 119 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 143 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 050d7f7..498433d 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1299,6 +1299,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
/* initialize mirror rule list */
TAILQ_INIT(&pf->mirror_list);
+ /* initialize Traffic Manager configuration */
+ i40e_tm_conf_init(dev);
+
ret = i40e_init_ethtype_filter_list(dev);
if (ret < 0)
goto err_init_ethtype_filter_list;
@@ -1462,6 +1465,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
rte_free(p_flow);
}
+ /* Remove all Traffic Manager configuration */
+ i40e_tm_conf_uninit(dev);
+
return 0;
}
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index e5301ee..da73d64 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -626,6 +626,21 @@ struct rte_flow {
TAILQ_HEAD(i40e_flow_list, rte_flow);
+/* Struct to store Traffic Manager shaper profile. */
+struct i40e_tm_shaper_profile {
+ TAILQ_ENTRY(i40e_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+
+/* Struct to store all the Traffic Manager configuration. */
+struct i40e_tm_conf {
+ struct i40e_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data specific for PF instance.
*/
@@ -686,6 +701,7 @@ struct i40e_pf {
struct i40e_flow_list flow_list;
bool mpls_replace_flag; /* 1 - MPLS filter replace is done */
bool qinq_replace_flag; /* QINQ filter replace is done */
+ struct i40e_tm_conf tm_conf;
};
enum pending_msg {
@@ -894,6 +910,8 @@ int i40e_add_macvlan_filters(struct i40e_vsi *vsi,
int total);
bool is_i40e_supported(struct rte_eth_dev *dev);
int i40e_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void i40e_tm_conf_init(struct rte_eth_dev *dev);
+void i40e_tm_conf_uninit(struct rte_eth_dev *dev);
#define I40E_DEV_TO_PCI(eth_dev) \
RTE_DEV_TO_PCI((eth_dev)->device)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 3077472..1cce2af 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -31,15 +31,22 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "base/i40e_prototype.h"
#include "i40e_ethdev.h"
static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
+ .shaper_profile_add = i40e_shaper_profile_add,
};
int
@@ -54,6 +61,30 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+i40e_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+}
+
+void
+i40e_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&pf->tm_conf.shaper_profile_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint16_t
i40e_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -131,3 +162,91 @@ static int i40e_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_shaper_profile *
+i40e_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_shaper_profile_list *shaper_profile_list =
+ &pf->tm_conf.shaper_profile_list;
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+i40e_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+i40e_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = i40e_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID exist";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("i40e_tm_shaper_profile",
+ sizeof(struct i40e_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&pf->tm_conf.shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 04/20] net/i40e: support deleting TM shaper profile
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (2 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 03/20] net/i40e: support adding TM shaper profile Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 05/20] net/i40e: support adding TM node Wenzhuo Lu
` (16 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_shaper_profile_delete.
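A minimal usage sketch (port_id and the profile id are hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on):
struct rte_tm_error err;

/* fails with "profile in use" while any node still references profile 1 */
(void)rte_tm_shaper_profile_delete(port_id, 1, &err);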
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 1cce2af..9adba0c 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -43,10 +43,14 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
+ .shaper_profile_delete = i40e_shaper_profile_del,
};
int
@@ -250,3 +254,35 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = i40e_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&pf->tm_conf.shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 05/20] net/i40e: support adding TM node
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (3 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 04/20] net/i40e: support deleting " Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 06/20] net/i40e: support deleting " Wenzhuo Lu
` (15 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_add.
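On i40e a queue node's id must be the queue id, the weight must be 1 and the priority 0, and the root node must declare exactly one SP priority. A minimal application-side sketch under those constraints (node ids 1000/1100 and the shaper profile id are hypothetical; rte_tm prototypes assumed from the QoS TM patch set this series depends on; error checks trimmed):
struct rte_tm_node_params np;
struct rte_tm_error err;

memset(&np, 0, sizeof(np));
np.shaper_profile_id = 1;            /* a previously added shaper profile id */
np.nonleaf.n_sp_priorities = 1;      /* required for the root node */
rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL, 0, 1,
		RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);          /* port */

/* TC and queue nodes are validated with the leaf parameters: disable WRED */
memset(&np, 0, sizeof(np));
np.shaper_profile_id = 1;
np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
rte_tm_node_add(port_id, 1100, 1000, 0, 1,
		RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);          /* TC 0 */
rte_tm_node_add(port_id, 0 /* node id == queue id */, 1100, 0, 1,
		RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);          /* Tx queue 0 */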
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 3 +
drivers/net/i40e/i40e_ethdev.h | 46 ++++++
drivers/net/i40e/i40e_tm.c | 325 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 374 insertions(+)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 498433d..90457b1 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2092,6 +2092,9 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
+
+ /* reset hierarchy commit */
+ pf->tm_conf.committed = false;
}
static void
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index da73d64..b8ded55 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -636,9 +636,55 @@ struct i40e_tm_shaper_profile {
TAILQ_HEAD(i40e_shaper_profile_list, i40e_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum i40e_tm_node_type {
+ I40E_TM_NODE_TYPE_PORT,
+ I40E_TM_NODE_TYPE_TC,
+ I40E_TM_NODE_TYPE_QUEUE,
+ I40E_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct i40e_tm_node {
+ TAILQ_ENTRY(i40e_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ struct i40e_tm_node *parent;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(i40e_tm_node_list, i40e_tm_node);
+
/* Struct to store all the Traffic Manager configuration. */
struct i40e_tm_conf {
struct i40e_shaper_profile_list shaper_profile_list;
+ struct i40e_tm_node *root; /* root node - port */
+ struct i40e_tm_node_list tc_list; /* node list for all the TCs */
+ struct i40e_tm_node_list queue_list; /* node list for all the queues */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
+ /**
+ * The number of added queue nodes.
+ * It should be no more than the queue number of this port.
+ */
+ uint32_t nb_queue_node;
+ /**
+ * This flag is used to check if APP can change the TM node
+ * configuration.
+ * When it is true, the configuration has been applied to HW and
+ * the APP should not change it.
+ * As we don't support on-the-fly configuration, when starting
+ * the port, APP should call the hierarchy_commit API to set this
+ * flag to true. When stopping the port, this flag should be set
+ * to false.
+ */
+ bool committed;
};
/*
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 9adba0c..8444580 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -46,11 +46,17 @@ static int i40e_shaper_profile_add(struct rte_eth_dev *dev,
static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
+ .node_add = i40e_node_add,
};
int
@@ -72,6 +78,14 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&pf->tm_conf.shaper_profile_list);
+
+ /* initialize node configuration */
+ pf->tm_conf.root = NULL;
+ TAILQ_INIT(&pf->tm_conf.tc_list);
+ TAILQ_INIT(&pf->tm_conf.queue_list);
+ pf->tm_conf.nb_tc_node = 0;
+ pf->tm_conf.nb_queue_node = 0;
+ pf->tm_conf.committed = false;
}
void
@@ -79,6 +93,23 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
{
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.queue_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&pf->tm_conf.tc_list))) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ pf->tm_conf.nb_tc_node = 0;
+ if (pf->tm_conf.root) {
+ rte_free(pf->tm_conf.root);
+ pf->tm_conf.root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -286,3 +317,297 @@ static int i40e_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct i40e_tm_node *
+i40e_tm_node_search(struct rte_eth_dev *dev,
+ uint32_t node_id, enum i40e_tm_node_type *node_type)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node *tm_node;
+
+ if (pf->tm_conf.root && pf->tm_conf.root->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_PORT;
+ return pf->tm_conf.root;
+ }
+
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = I40E_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static int
+i40e_node_param_check(uint32_t node_id, uint32_t parent_node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for root node */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for TC or queue node */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * Now the TC and queue configuration is controlled by DCB.
+ * We need to check if the node configuration follows the DCB configuration.
+ * In the future, we may use TM to cover DCB.
+ */
+static int
+i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type = I40E_TM_NODE_TYPE_MAX;
+ enum i40e_tm_node_type parent_node_type = I40E_TM_NODE_TYPE_MAX;
+ struct i40e_tm_shaper_profile *shaper_profile;
+ struct i40e_tm_node *tm_node;
+ struct i40e_tm_node *parent_node;
+ uint16_t tc_nb = 0;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = i40e_node_param_check(node_id, parent_node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node ID is already used */
+ if (i40e_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = i40e_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile not exist";
+ return -EINVAL;
+ }
+
+ /* root node if it does not have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id > I40E_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (pf->tm_conf.root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ pf->tm_conf.root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = i40e_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent does not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != I40E_TM_NODE_TYPE_PORT &&
+ parent_node_type != I40E_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not port or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == I40E_TM_NODE_TYPE_PORT) {
+ /* check the TC number */
+ tc_nb = i40e_tc_nb_get(dev);
+ if (pf->tm_conf.nb_tc_node >= tc_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check the queue number */
+ if (pf->tm_conf.nb_queue_node >= hw->func_caps.num_tx_qp) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+
+ /**
+ * check the node id.
+ * For queue, the node id means queue id.
+ */
+ if (node_id >= hw->func_caps.num_tx_qp) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("i40e_tm_node",
+ sizeof(struct i40e_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = pf->tm_conf.root;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == I40E_TM_NODE_TYPE_PORT) {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.tc_list,
+ tm_node, node);
+ pf->tm_conf.nb_tc_node++;
+ } else {
+ TAILQ_INSERT_TAIL(&pf->tm_conf.queue_list,
+ tm_node, node);
+ pf->tm_conf.nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 06/20] net/i40e: support deleting TM node
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (4 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 05/20] net/i40e: support adding TM node Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 07/20] net/i40e: support getting TM node type Wenzhuo Lu
` (14 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_delete.
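A node that still has children is rejected, so the queue (leaf) nodes have to be
deleted before their TC node. An illustrative application-side sketch (the ids and
the helper name are examples, not part of this patch):

  #include <stdio.h>
  #include <rte_tm.h>

  static void
  tm_delete_leaf_then_tc(uint8_t port_id)
  {
          struct rte_tm_error tm_err;

          /* queue node 0 (leaf) first, then its now-childless TC node */
          if (rte_tm_node_delete(port_id, 0 /* queue node id */, &tm_err) ||
              rte_tm_node_delete(port_id, 1000 /* TC node id */, &tm_err))
                  printf("TM node delete failed: %s\n", tm_err.message);
  }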
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 66 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 8444580..00709c1 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -51,12 +51,15 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t weight, uint32_t level_id,
struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
.shaper_profile_add = i40e_shaper_profile_add,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
+ .node_delete = i40e_node_delete,
};
int
@@ -611,3 +614,66 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ enum i40e_tm_node_type node_type = I40E_TM_NODE_TYPE_MAX;
+ struct i40e_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (pf->tm_conf.committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no child */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == I40E_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ pf->tm_conf.root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ if (node_type == I40E_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&pf->tm_conf.tc_list, tm_node, node);
+ pf->tm_conf.nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&pf->tm_conf.queue_list, tm_node, node);
+ pf->tm_conf.nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 07/20] net/i40e: support getting TM node type
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (5 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 06/20] net/i40e: support deleting " Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 08/20] net/i40e: support getting TM level capability Wenzhuo Lu
` (13 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_type_get.
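For this driver only queue nodes are leaves. A small illustrative sketch of how an
application could query that (the helper name is an example):

  #include <stdio.h>
  #include <rte_tm.h>

  static void
  tm_print_node_type(uint8_t port_id, uint32_t node_id)
  {
          struct rte_tm_error tm_err;
          int is_leaf = 0;

          if (rte_tm_node_type_get(port_id, node_id, &is_leaf, &tm_err) == 0)
                  printf("node %u is a %s node\n", node_id,
                         is_leaf ? "queue (leaf)" : "port/TC (non-leaf)");
  }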
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 00709c1..c2f794b 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -53,6 +53,8 @@ static int i40e_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -60,6 +62,7 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = i40e_shaper_profile_del,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
+ .node_type_get = i40e_node_type_get,
};
int
@@ -677,3 +680,35 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum i40e_tm_node_type node_type = I40E_TM_NODE_TYPE_MAX;
+ struct i40e_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == I40E_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 08/20] net/i40e: support getting TM level capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (6 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 07/20] net/i40e: support getting TM node type Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 09/20] net/i40e: support getting TM node capability Wenzhuo Lu
` (12 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_level_capabilities_get.
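The driver exposes three levels: port (0), TC (1) and queue (2). A rough sketch of
walking them from an application (the hard-coded level count and the helper name are
illustrative):

  #include <stdio.h>
  #include <rte_tm.h>

  static void
  tm_dump_levels(uint8_t port_id)
  {
          struct rte_tm_level_capabilities lcap;
          struct rte_tm_error tm_err;
          uint32_t lvl;

          for (lvl = 0; lvl < 3; lvl++) {   /* port, TC, queue */
                  if (rte_tm_level_capabilities_get(port_id, lvl, &lcap, &tm_err))
                          break;
                  printf("level %u: max nodes %u, max leaves %u\n",
                         lvl, lcap.n_nodes_max, lcap.n_nodes_leaf_max);
          }
  }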
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c2f794b..3ab0c70 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -55,6 +55,10 @@ static int i40e_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -63,6 +67,7 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = i40e_node_add,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
+ .level_capabilities_get = i40e_level_capabilities_get,
};
int
@@ -712,3 +717,72 @@ static int i40e_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+i40e_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= I40E_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (level_id == I40E_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = true;
+ cap->leaf_nodes_identical = true;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = I40E_MAX_TRAFFIC_CLASS;
+ cap->nonleaf.sched_sp_n_priorities_max = 1;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 1;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ if (level_id == I40E_TM_NODE_TYPE_TC) {
+ /* TC */
+ cap->n_nodes_max = I40E_MAX_TRAFFIC_CLASS;
+ cap->n_nodes_nonleaf_max = I40E_MAX_TRAFFIC_CLASS;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = true;
+ } else {
+ /* queue */
+ cap->n_nodes_max = hw->func_caps.num_tx_qp;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = hw->func_caps.num_tx_qp;
+ cap->non_leaf_nodes_identical = true;
+ }
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->leaf.shaper_private_rate_max = 5000000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = true;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 09/20] net/i40e: support getting TM node capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (7 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 08/20] net/i40e: support getting TM level capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 10/20] net/i40e: support committing TM hierarchy Wenzhuo Lu
` (11 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_capabilities_get.
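An illustrative sketch of querying one node after it has been added (the helper name
is an example):

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_tm.h>

  static void
  tm_print_node_rate(uint8_t port_id, uint32_t node_id)
  {
          struct rte_tm_node_capabilities ncap;
          struct rte_tm_error tm_err;

          if (rte_tm_node_capabilities_get(port_id, node_id, &ncap, &tm_err) == 0)
                  printf("node %u private shaper max rate: %" PRIu64 " Bps\n",
                         node_id, ncap.shaper_private_rate_max);
  }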
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_tm.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 3ab0c70..435975b 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -59,6 +59,10 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -68,6 +72,7 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = i40e_node_delete,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
+ .node_capabilities_get = i40e_node_capabilities_get,
};
int
@@ -786,3 +791,59 @@ static int i40e_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ enum i40e_tm_node_type node_type;
+ struct i40e_tm_node *tm_node;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = i40e_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 40Gbps -> 5GBps */
+ cap->shaper_private_rate_max = 5000000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == I40E_TM_NODE_TYPE_QUEUE) {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = true;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ } else {
+ if (node_type == I40E_TM_NODE_TYPE_PORT)
+ cap->nonleaf.sched_n_children_max =
+ I40E_MAX_TRAFFIC_CLASS;
+ else
+ cap->nonleaf.sched_n_children_max =
+ hw->func_caps.num_tx_qp;
+ cap->nonleaf.sched_sp_n_priorities_max = 1;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 1;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 10/20] net/i40e: support committing TM hierarchy
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (8 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 09/20] net/i40e: support getting TM node capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 11/20] net/ixgbe: support getting TM ops Wenzhuo Lu
` (10 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_hierarchy_commit.
When calling this API, the driver tries to enable
the TM configuration on HW.
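Since on-the-fly changes are not supported, the expected application flow is to build
the whole hierarchy and then commit it before starting the port. A rough sketch
(clear_on_fail is set so a failed commit wipes the staged configuration; the helper
name is illustrative):

  #include <stdio.h>
  #include <rte_tm.h>
  #include <rte_ethdev.h>

  static int
  tm_commit_and_start(uint8_t port_id)
  {
          struct rte_tm_error tm_err;

          /* shaper profiles and port/TC/queue nodes were added before this */
          if (rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &tm_err)) {
                  printf("TM commit failed: %s\n", tm_err.message);
                  return -1;
          }
          return rte_eth_dev_start(port_id);
  }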
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/i40e/i40e_ethdev.c | 5 ++
drivers/net/i40e/i40e_ethdev.h | 9 +++
drivers/net/i40e/i40e_tm.c | 125 ++++++++++++++++++++++++++++++++++++++++
drivers/net/i40e/rte_pmd_i40e.c | 9 ---
4 files changed, 139 insertions(+), 9 deletions(-)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 90457b1..0c16951 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2032,6 +2032,11 @@ static inline void i40e_GLQF_reg_init(struct i40e_hw *hw)
i40e_filter_restore(pf);
+ if (!pf->tm_conf.committed)
+ PMD_DRV_LOG(WARNING,
+ "please call hierarchy_commit() "
+ "before starting the port");
+
return I40E_SUCCESS;
err_up:
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index b8ded55..8989355 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -252,6 +252,15 @@ enum i40e_flxpld_layer_idx {
I40E_INSET_FLEX_PAYLOAD_W5 | I40E_INSET_FLEX_PAYLOAD_W6 | \
I40E_INSET_FLEX_PAYLOAD_W7 | I40E_INSET_FLEX_PAYLOAD_W8)
+/* The max bandwidth of i40e is 40Gbps. */
+#define I40E_QOS_BW_MAX 40000
+/* The bandwidth should be the multiple of 50Mbps. */
+#define I40E_QOS_BW_GRANULARITY 50
+/* The min bandwidth weight is 1. */
+#define I40E_QOS_BW_WEIGHT_MIN 1
+/* The max bandwidth weight is 127. */
+#define I40E_QOS_BW_WEIGHT_MAX 127
+
/**
* The overhead from MTU to max frame size.
* Considering QinQ packet, the VLAN tag needs to be counted twice.
diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index 435975b..f2c6e33 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -63,6 +63,9 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops i40e_tm_ops = {
.capabilities_get = i40e_tm_capabilities_get,
@@ -73,6 +76,7 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = i40e_node_type_get,
.level_capabilities_get = i40e_level_capabilities_get,
.node_capabilities_get = i40e_node_capabilities_get,
+ .hierarchy_commit = i40e_hierarchy_commit,
};
int
@@ -847,3 +851,124 @@ static int i40e_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+i40e_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_tm_node_list *tc_list = &pf->tm_conf.tc_list;
+ struct i40e_tm_node_list *queue_list = &pf->tm_conf.queue_list;
+ struct i40e_tm_node *tm_node;
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw;
+ struct i40e_aqc_configure_vsi_ets_sla_bw_data tc_bw;
+ uint64_t bw;
+ uint8_t tc_map;
+ int ret;
+ int i;
+
+ if (!error)
+ return -EINVAL;
+
+ /* check the setting */
+ if (!pf->tm_conf.root)
+ goto done;
+
+ vsi = pf->main_vsi;
+ hw = I40E_VSI_TO_HW(vsi);
+
+ /**
+ * Port-level and TC-level bandwidth control are not supported at the same time.
+ * If the port has a max bandwidth, the TCs should have none.
+ */
+ /* port */
+ bw = pf->tm_conf.root->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* check if any TC has a max bandwidth */
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port and TC max bandwidth"
+ " in parallel";
+ goto fail_clear;
+ }
+ }
+
+ /* convert from bytes per second to units of 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ /* set the max bandwidth */
+ ret = i40e_aq_config_vsi_bw_limit(hw, vsi->seid,
+ (uint16_t)bw, 0, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set port max bandwidth";
+ goto fail_clear;
+ }
+
+ goto done;
+ }
+
+ /* TC */
+ memset(&tc_bw, 0, sizeof(tc_bw));
+ tc_bw.tc_valid_bits = vsi->enabled_tc;
+ tc_map = vsi->enabled_tc;
+ TAILQ_FOREACH(tm_node, tc_list, node) {
+ if (!tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "TC without queue assigned";
+ goto fail_clear;
+ }
+
+ i = 0;
+ while (i < I40E_MAX_TRAFFIC_CLASS && !(tc_map & BIT_ULL(i)))
+ i++;
+ if (i >= I40E_MAX_TRAFFIC_CLASS) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "cannot find the TC";
+ goto fail_clear;
+ }
+ tc_map &= ~BIT_ULL(i);
+
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (!bw)
+ continue;
+
+ /* convert from bytes per second to units of 50Mbps */
+ bw = bw * 8 / 1000 / 1000 / I40E_QOS_BW_GRANULARITY;
+
+ tc_bw.tc_bw_credits[i] = rte_cpu_to_le_16((uint16_t)bw);
+ }
+
+ TAILQ_FOREACH(tm_node, queue_list, node) {
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (bw) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "queue QoS not supported";
+ goto fail_clear;
+ }
+ }
+
+ ret = i40e_aq_config_vsi_ets_sla_bw_limit(hw, vsi->seid, &tc_bw, NULL);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "fail to set TC max bandwidth";
+ goto fail_clear;
+ }
+
+ goto done;
+
+done:
+ pf->tm_conf.committed = true;
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ i40e_tm_conf_uninit(dev);
+ i40e_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index f7ce62b..4f94678 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -40,15 +40,6 @@
#include "i40e_rxtx.h"
#include "rte_pmd_i40e.h"
-/* The max bandwidth of i40e is 40Gbps. */
-#define I40E_QOS_BW_MAX 40000
-/* The bandwidth should be the multiple of 50Mbps. */
-#define I40E_QOS_BW_GRANULARITY 50
-/* The min bandwidth weight is 1. */
-#define I40E_QOS_BW_WEIGHT_MIN 1
-/* The max bandwidth weight is 127. */
-#define I40E_QOS_BW_WEIGHT_MAX 127
-
int
rte_pmd_i40e_ping_vfs(uint8_t port, uint16_t vf)
{
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 11/20] net/ixgbe: support getting TM ops
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (9 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 10/20] net/i40e: support committing TM hierarchy Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 12/20] net/ixgbe: support getting TM capability Wenzhuo Lu
` (9 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
To support QoS scheduler APIs, create a new C file for
the TM (Traffic Management) ops but without any function
implemented.
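For reference, the generic rte_tm layer reaches the driver through the new tm_ops_get
ethdev op; conceptually the dispatch looks like the simplified sketch below (not the
exact library code):

  #include <rte_ethdev.h>
  #include <rte_tm_driver.h>

  /* simplified view of how the ethdev layer fetches a driver's TM ops */
  static const struct rte_tm_ops *
  tm_ops_of_port(uint8_t port_id)
  {
          struct rte_eth_dev *dev = &rte_eth_devices[port_id];
          const struct rte_tm_ops *ops = NULL;

          if (dev->dev_ops->tm_ops_get == NULL ||
              dev->dev_ops->tm_ops_get(dev, &ops) != 0)
                  return NULL;   /* driver has no TM support */
          return ops;
  }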
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/Makefile | 1 +
drivers/net/ixgbe/ixgbe_ethdev.c | 1 +
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 50 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 54 insertions(+)
create mode 100644 drivers/net/ixgbe/ixgbe_tm.c
diff --git a/drivers/net/ixgbe/Makefile b/drivers/net/ixgbe/Makefile
index 5529d81..0595dcf 100644
--- a/drivers/net/ixgbe/Makefile
+++ b/drivers/net/ixgbe/Makefile
@@ -124,6 +124,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_bypass.c
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_82599_bypass.c
endif
SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += rte_pmd_ixgbe.c
+SRCS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe_tm.c
# install this header file
SYMLINK-$(CONFIG_RTE_LIBRTE_IXGBE_PMD)-include := rte_pmd_ixgbe.h
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index aeaa432..ab70c1c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -608,6 +608,7 @@ static int ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
.l2_tunnel_offload_set = ixgbe_dev_l2_tunnel_offload_set,
.udp_tunnel_port_add = ixgbe_dev_udp_tunnel_port_add,
.udp_tunnel_port_del = ixgbe_dev_udp_tunnel_port_del,
+ .tm_ops_get = ixgbe_tm_ops_get,
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b576a6f..7e99fd3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -39,6 +39,7 @@
#include "ixgbe_bypass.h"
#include <rte_time.h>
#include <rte_hash.h>
+#include <rte_tm_driver.h>
/* need update link, bit flag */
#define IXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
@@ -671,6 +672,7 @@ int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
+int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
new file mode 100644
index 0000000..0a222a1
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -0,0 +1,50 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "ixgbe_ethdev.h"
+
+const struct rte_tm_ops ixgbe_tm_ops = {
+ NULL,
+};
+
+int
+ixgbe_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+ void *arg)
+{
+ if (!arg)
+ return -EINVAL;
+
+ *(const void **)arg = &ixgbe_tm_ops;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 12/20] net/ixgbe: support getting TM capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (10 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 11/20] net/ixgbe: support getting TM ops Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 13/20] net/ixgbe: support adding TM shaper profile Wenzhuo Lu
` (8 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_capabilities_get.
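An application would normally read these limits before building its hierarchy. A small
illustrative sketch (the helper name is an example):

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_tm.h>

  static void
  tm_dump_caps(uint8_t port_id)
  {
          struct rte_tm_capabilities cap;
          struct rte_tm_error tm_err;

          if (rte_tm_capabilities_get(port_id, &cap, &tm_err) == 0)
                  printf("%u levels, up to %u nodes, max rate %" PRIu64 " Bps\n",
                         cap.n_levels_max, cap.n_nodes_max,
                         cap.shaper_private_rate_max);
  }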
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 91 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 90 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 0a222a1..31be5b5 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -33,8 +33,12 @@
#include "ixgbe_ethdev.h"
+static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
const struct rte_tm_ops ixgbe_tm_ops = {
- NULL,
+ .capabilities_get = ixgbe_tm_capabilities_get,
};
int
@@ -48,3 +52,88 @@
return 0;
}
+
+static inline uint8_t
+ixgbe_tc_nb_get(struct rte_eth_dev *dev)
+{
+ struct rte_eth_conf *eth_conf;
+ uint8_t nb_tcs = 0;
+
+ eth_conf = &dev->data->dev_conf;
+ if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+ nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
+ } else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+ if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
+ ETH_32_POOLS)
+ nb_tcs = ETH_4_TCS;
+ else
+ nb_tcs = ETH_8_TCS;
+ } else {
+ nb_tcs = 1;
+ }
+
+ return nb_tcs;
+}
+
+static int
+ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ error->type = RTE_TM_ERROR_TYPE_NONE;
+
+ /* set all the parameters to 0 first. */
+ memset(cap, 0, sizeof(struct rte_tm_capabilities));
+
+ /**
+ * Here is the max capability, not the current configuration.
+ */
+ /* port + TCs + queues */
+ cap->n_nodes_max = 1 + IXGBE_DCB_MAX_TRAFFIC_CLASS +
+ hw->mac.max_tx_queues;
+ cap->n_levels_max = 3;
+ cap->non_leaf_nodes_identical = 1;
+ cap->leaf_nodes_identical = 1;
+ cap->shaper_n_max = cap->n_nodes_max;
+ cap->shaper_private_n_max = cap->n_nodes_max;
+ cap->shaper_private_dual_rate_n_max = 0;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+ cap->shaper_shared_n_nodes_per_shaper_max = 0;
+ cap->shaper_shared_n_shapers_per_node_max = 0;
+ cap->shaper_shared_dual_rate_n_max = 0;
+ cap->shaper_shared_rate_min = 0;
+ cap->shaper_shared_rate_max = 0;
+ cap->sched_n_children_max = hw->mac.max_tx_queues;
+ /**
+ * HW supports SP, but there is no plan to support it now.
+ * So, all the nodes should have the same priority.
+ */
+ cap->sched_sp_n_priorities_max = 1;
+ cap->sched_wfq_n_children_per_group_max = 0;
+ cap->sched_wfq_n_groups_max = 0;
+ /**
+ * SW only supports fair round robin now.
+ * So, all the nodes should have the same weight.
+ */
+ cap->sched_wfq_weight_max = 1;
+ cap->cman_head_drop_supported = 0;
+ cap->dynamic_update_mask = 0;
+ cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
+ cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
+ cap->cman_wred_context_n_max = 0;
+ cap->cman_wred_context_private_n_max = 0;
+ cap->cman_wred_context_shared_n_max = 0;
+ cap->cman_wred_context_shared_n_nodes_per_context_max = 0;
+ cap->cman_wred_context_shared_n_contexts_per_node_max = 0;
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 13/20] net/ixgbe: support adding TM shaper profile
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (11 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 12/20] net/ixgbe: support getting TM capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 14/20] net/ixgbe: support deleting " Wenzhuo Lu
` (7 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_shaper_profile_add.
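Only the peak rate is accepted by this driver (committed rate/size, peak bucket size
and length adjustment must stay 0), and the rate is expressed in bytes per second. An
illustrative sketch of adding a 1Gbps profile (profile id 1 and the helper name are
examples):

  #include <string.h>
  #include <rte_tm.h>

  static int
  tm_add_1g_profile(uint8_t port_id)
  {
          struct rte_tm_shaper_params sp;
          struct rte_tm_error tm_err;

          memset(&sp, 0, sizeof(sp));   /* committed/peak.size/adjust stay 0 */
          sp.peak.rate = 125000000;     /* 1Gbps in bytes per second */
          return rte_tm_shaper_profile_add(port_id, 1 /* profile id */,
                                           &sp, &tm_err);
  }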
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 6 ++
drivers/net/ixgbe/ixgbe_ethdev.h | 21 +++++++
drivers/net/ixgbe/ixgbe_tm.c | 123 +++++++++++++++++++++++++++++++++++++++
3 files changed, 150 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index ab70c1c..26eaece 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1360,6 +1360,9 @@ struct rte_ixgbe_xstats_name_off {
/* initialize bandwidth configuration info */
memset(bw_conf, 0, sizeof(struct ixgbe_bw_conf));
+ /* initialize Traffic Manager configuration */
+ ixgbe_tm_conf_init(eth_dev);
+
return 0;
}
@@ -1413,6 +1416,9 @@ struct rte_ixgbe_xstats_name_off {
/* clear all the filters list */
ixgbe_filterlist_flush();
+ /* Remove all Traffic Manager configuration */
+ ixgbe_tm_conf_uninit(eth_dev);
+
return 0;
}
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 7e99fd3..b647702 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -435,6 +435,21 @@ struct ixgbe_bw_conf {
uint8_t tc_num; /* Number of TCs. */
};
+/* Struct to store Traffic Manager shaper profile. */
+struct ixgbe_tm_shaper_profile {
+ TAILQ_ENTRY(ixgbe_tm_shaper_profile) node;
+ uint32_t shaper_profile_id;
+ uint32_t reference_count;
+ struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+
+/* The configuration of Traffic Manager */
+struct ixgbe_tm_conf {
+ struct ixgbe_shaper_profile_list shaper_profile_list;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -463,6 +478,7 @@ struct ixgbe_adapter {
struct rte_timecounter systime_tc;
struct rte_timecounter rx_tstamp_tc;
struct rte_timecounter tx_tstamp_tc;
+ struct ixgbe_tm_conf tm_conf;
};
#define IXGBE_DEV_TO_PCI(eth_dev) \
@@ -513,6 +529,9 @@ struct ixgbe_adapter {
#define IXGBE_DEV_PRIVATE_TO_BW_CONF(adapter) \
(&((struct ixgbe_adapter *)adapter)->bw_conf)
+#define IXGBE_DEV_PRIVATE_TO_TM_CONF(adapter) \
+ (&((struct ixgbe_adapter *)adapter)->tm_conf)
+
/*
* RX/TX function prototypes
*/
@@ -673,6 +692,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
uint16_t tx_rate, uint64_t q_msk);
bool is_ixgbe_supported(struct rte_eth_dev *dev);
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
+void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
+void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 31be5b5..dccda6a 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -31,14 +31,21 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_malloc.h>
+
#include "ixgbe_ethdev.h"
static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
struct rte_tm_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
+ .shaper_profile_add = ixgbe_shaper_profile_add,
};
int
@@ -53,6 +60,32 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+void
+ixgbe_tm_conf_init(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+
+ /* initialize shaper profile list */
+ TAILQ_INIT(&tm_conf->shaper_profile_list);
+}
+
+void
+ixgbe_tm_conf_uninit(struct rte_eth_dev *dev)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ /* Remove all shaper profiles */
+ while ((shaper_profile =
+ TAILQ_FIRST(&tm_conf->shaper_profile_list))) {
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+ rte_free(shaper_profile);
+ }
+}
+
static inline uint8_t
ixgbe_tc_nb_get(struct rte_eth_dev *dev)
{
@@ -137,3 +170,93 @@ static int ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_shaper_profile *
+ixgbe_shaper_profile_search(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_shaper_profile_list *shaper_profile_list =
+ &tm_conf->shaper_profile_list;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+ if (shaper_profile_id == shaper_profile->shaper_profile_id)
+ return shaper_profile;
+ }
+
+ return NULL;
+}
+
+static int
+ixgbe_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ /* min rate not supported */
+ if (profile->committed.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
+ error->message = "committed rate not supported";
+ return -EINVAL;
+ }
+ /* min bucket size not supported */
+ if (profile->committed.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+ error->message = "committed bucket size not supported";
+ return -EINVAL;
+ }
+ /* max bucket size not supported */
+ if (profile->peak.size) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+ error->message = "peak bucket size not supported";
+ return -EINVAL;
+ }
+ /* length adjustment not supported */
+ if (profile->pkt_length_adjust) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+ error->message = "packet length adjustment not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ int ret;
+
+ if (!profile || !error)
+ return -EINVAL;
+
+ ret = ixgbe_shaper_profile_param_check(profile, error);
+ if (ret)
+ return ret;
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID already exists";
+ return -EINVAL;
+ }
+
+ shaper_profile = rte_zmalloc("ixgbe_tm_shaper_profile",
+ sizeof(struct ixgbe_tm_shaper_profile),
+ 0);
+ if (!shaper_profile)
+ return -ENOMEM;
+ shaper_profile->shaper_profile_id = shaper_profile_id;
+ (void)rte_memcpy(&shaper_profile->profile, profile,
+ sizeof(struct rte_tm_shaper_params));
+ TAILQ_INSERT_TAIL(&tm_conf->shaper_profile_list,
+ shaper_profile, node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 14/20] net/ixgbe: support deleting TM shaper profile
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (12 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 13/20] net/ixgbe: support adding TM shaper profile Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
` (6 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_shaper_profile_delete.
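A profile can only be removed once its reference count drops to zero, i.e. after the
nodes that use it have been deleted. A short sketch (profile id 1 is illustrative):

  #include <stdio.h>
  #include <rte_tm.h>

  static void
  tm_del_profile(uint8_t port_id)
  {
          struct rte_tm_error tm_err;

          if (rte_tm_shaper_profile_delete(port_id, 1 /* profile id */, &tm_err))
                  printf("cannot delete profile: %s\n", tm_err.message);
  }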
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index dccda6a..443c4cc 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -42,10 +42,14 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_shaper_params *profile,
struct rte_tm_error *error);
+static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
+ .shaper_profile_delete = ixgbe_shaper_profile_del,
};
int
@@ -260,3 +264,36 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+
+ if (!error)
+ return -EINVAL;
+
+ shaper_profile = ixgbe_shaper_profile_search(dev, shaper_profile_id);
+
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+ error->message = "profile ID does not exist";
+ return -EINVAL;
+ }
+
+ /* don't delete a profile if it's used by one or several nodes */
+ if (shaper_profile->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "profile in use";
+ return -EINVAL;
+ }
+
+ TAILQ_REMOVE(&tm_conf->shaper_profile_list, shaper_profile, node);
+ rte_free(shaper_profile);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 15/20] net/ixgbe: support adding TM node
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (13 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 14/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 16/20] net/ixgbe: support deleting " Wenzhuo Lu
` (5 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add the support of the Traffic Management API,
rte_tm_node_add.
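The hierarchy has to mirror the DCB setup: one port node, one node per TC, and queue
nodes whose node id equals the Tx queue id. A rough application-side sketch for a
single-TC configuration (the ids, the helper name and profile id 1 are illustrative;
priority must be 0 and weight must be 1):

  #include <string.h>
  #include <rte_tm.h>

  static void
  tm_build_hierarchy(uint8_t port_id)
  {
          struct rte_tm_node_params np;
          struct rte_tm_error tm_err;
          uint32_t port_node = 1000000, tc0_node = 1000;

          /* port (root) node: non-leaf params, one SP priority */
          memset(&np, 0, sizeof(np));
          np.shaper_profile_id = 1;            /* profile added earlier */
          np.nonleaf.n_sp_priorities = 1;
          rte_tm_node_add(port_id, port_node, RTE_TM_NODE_ID_NULL, 0, 1,
                          RTE_TM_NODE_LEVEL_ID_ANY, &np, &tm_err);

          /* TC 0 and queue 0: no WRED/CMAN allowed */
          memset(&np, 0, sizeof(np));
          np.shaper_profile_id = 1;
          np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
          rte_tm_node_add(port_id, tc0_node, port_node, 0, 1,
                          RTE_TM_NODE_LEVEL_ID_ANY, &np, &tm_err);
          rte_tm_node_add(port_id, 0 /* Tx queue 0 */, tc0_node, 0, 1,
                          RTE_TM_NODE_LEVEL_ID_ANY, &np, &tm_err);
  }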
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 5 +
drivers/net/ixgbe/ixgbe_ethdev.h | 47 +++++
drivers/net/ixgbe/ixgbe_tm.c | 436 +++++++++++++++++++++++++++++++++++++++
3 files changed, 488 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 26eaece..377f8e6 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -2724,6 +2724,8 @@ static int eth_ixgbevf_pci_remove(struct rte_pci_device *pci_dev)
struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
int vf;
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -2770,6 +2772,9 @@ static int eth_ixgbevf_pci_remove(struct rte_pci_device *pci_dev)
rte_free(intr_handle->intr_vec);
intr_handle->intr_vec = NULL;
}
+
+ /* reset hierarchy commit */
+ tm_conf->committed = false;
}
/*
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b647702..67d2bdc 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -445,9 +445,56 @@ struct ixgbe_tm_shaper_profile {
TAILQ_HEAD(ixgbe_shaper_profile_list, ixgbe_tm_shaper_profile);
+/* node type of Traffic Manager */
+enum ixgbe_tm_node_type {
+ IXGBE_TM_NODE_TYPE_PORT,
+ IXGBE_TM_NODE_TYPE_TC,
+ IXGBE_TM_NODE_TYPE_QUEUE,
+ IXGBE_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store Traffic Manager node configuration. */
+struct ixgbe_tm_node {
+ TAILQ_ENTRY(ixgbe_tm_node) node;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t reference_count;
+ uint16_t no;
+ struct ixgbe_tm_node *parent;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(ixgbe_tm_node_list, ixgbe_tm_node);
+
/* The configuration of Traffic Manager */
struct ixgbe_tm_conf {
struct ixgbe_shaper_profile_list shaper_profile_list;
+ struct ixgbe_tm_node *root; /* root node - port */
+ struct ixgbe_tm_node_list tc_list; /* node list for all the TCs */
+ struct ixgbe_tm_node_list queue_list; /* node list for all the queues */
+ /**
+ * The number of added TC nodes.
+ * It should be no more than the TC number of this port.
+ */
+ uint32_t nb_tc_node;
+ /**
+ * The number of added queue nodes.
+ * It should be no more than the queue number of this port.
+ */
+ uint32_t nb_queue_node;
+ /**
+ * This flag is used to check if the APP can change the TM node
+ * configuration.
+ * When it's true, it means the configuration has been applied to HW
+ * and the APP should not change the configuration.
+ * As we don't support on-the-fly configuration, when starting
+ * the port, APP should call the hierarchy_commit API to set this
+ * flag to true. When stopping the port, this flag should be set
+ * to false.
+ */
+ bool committed;
};
/*
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 443c4cc..bec9015 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -45,11 +45,17 @@ static int ixgbe_shaper_profile_add(struct rte_eth_dev *dev,
static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
uint32_t shaper_profile_id,
struct rte_tm_error *error);
+static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
+ .node_add = ixgbe_node_add,
};
int
@@ -72,6 +78,14 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
/* initialize shaper profile list */
TAILQ_INIT(&tm_conf->shaper_profile_list);
+
+ /* initialize node configuration */
+ tm_conf->root = NULL;
+ TAILQ_INIT(&tm_conf->queue_list);
+ TAILQ_INIT(&tm_conf->tc_list);
+ tm_conf->nb_tc_node = 0;
+ tm_conf->nb_queue_node = 0;
+ tm_conf->committed = false;
}
void
@@ -80,6 +94,23 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+
+ /* clear node configuration */
+ while ((tm_node = TAILQ_FIRST(&tm_conf->queue_list))) {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_queue_node = 0;
+ while ((tm_node = TAILQ_FIRST(&tm_conf->tc_list))) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ rte_free(tm_node);
+ }
+ tm_conf->nb_tc_node = 0;
+ if (tm_conf->root) {
+ rte_free(tm_conf->root);
+ tm_conf->root = NULL;
+ }
/* Remove all shaper profiles */
while ((shaper_profile =
@@ -297,3 +328,408 @@ static int ixgbe_shaper_profile_del(struct rte_eth_dev *dev,
return 0;
}
+
+static inline struct ixgbe_tm_node *
+ixgbe_tm_node_search(struct rte_eth_dev *dev, uint32_t node_id,
+ enum ixgbe_tm_node_type *node_type)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+
+ if (tm_conf->root && tm_conf->root->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_PORT;
+ return tm_conf->root;
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_TC;
+ return tm_node;
+ }
+ }
+
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ if (tm_node->id == node_id) {
+ *node_type = IXGBE_TM_NODE_TYPE_QUEUE;
+ return tm_node;
+ }
+ }
+
+ return NULL;
+}
+
+static void
+ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
+ uint16_t *base, uint16_t *nb)
+{
+ uint8_t nb_tcs = ixgbe_tc_nb_get(dev);
+ struct rte_pci_device *pci_dev = IXGBE_DEV_TO_PCI(dev);
+ uint16_t vf_num = pci_dev->max_vfs;
+
+ *base = 0;
+ *nb = 0;
+
+ /* VT on */
+ if (vf_num) {
+ /* no DCB */
+ if (nb_tcs == 1) {
+ if (vf_num >= ETH_32_POOLS) {
+ *nb = 2;
+ *base = vf_num * 2;
+ } else if (vf_num >= ETH_16_POOLS) {
+ *nb = 4;
+ *base = vf_num * 4;
+ } else {
+ *nb = 8;
+ *base = vf_num * 8;
+ }
+ } else {
+ /* DCB */
+ *nb = 1;
+ *base = vf_num * nb_tcs + tc_node_no;
+ }
+ } else {
+ /* VT off */
+ if (nb_tcs == ETH_8_TCS) {
+ switch (tc_node_no) {
+ case 0:
+ *base = 0;
+ *nb = 32;
+ break;
+ case 1:
+ *base = 32;
+ *nb = 32;
+ break;
+ case 2:
+ *base = 64;
+ *nb = 16;
+ break;
+ case 3:
+ *base = 80;
+ *nb = 16;
+ break;
+ case 4:
+ *base = 96;
+ *nb = 8;
+ break;
+ case 5:
+ *base = 104;
+ *nb = 8;
+ break;
+ case 6:
+ *base = 112;
+ *nb = 8;
+ break;
+ case 7:
+ *base = 120;
+ *nb = 8;
+ break;
+ default:
+ return;
+ }
+ } else {
+ switch (tc_node_no) {
+ /**
+ * If no VF and no DCB, only 64 queues can be used.
+ * That case is also covered by this "case 0".
+ */
+ case 0:
+ *base = 0;
+ *nb = 64;
+ break;
+ case 1:
+ *base = 64;
+ *nb = 32;
+ break;
+ case 2:
+ *base = 96;
+ *nb = 16;
+ break;
+ case 3:
+ *base = 112;
+ *nb = 16;
+ break;
+ default:
+ return;
+ }
+ }
+ }
+}
+
+static int
+ixgbe_node_param_check(uint32_t node_id, uint32_t parent_node_id,
+ uint32_t priority, uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ if (priority) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+ error->message = "priority should be 0";
+ return -EINVAL;
+ }
+
+ if (weight != 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+ error->message = "weight must be 1";
+ return -EINVAL;
+ }
+
+ /* not support shared shaper */
+ if (params->shared_shaper_id) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+ if (params->n_shared_shapers) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+ error->message = "shared shaper not supported";
+ return -EINVAL;
+ }
+
+ /* for root node */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check the unsupported parameters */
+ if (params->nonleaf.wfq_weight_mode) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ not supported";
+ return -EINVAL;
+ }
+ if (params->nonleaf.n_sp_priorities != 1) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+ error->message = "SP priority not supported";
+ return -EINVAL;
+ } else if (params->nonleaf.wfq_weight_mode &&
+ !(*params->nonleaf.wfq_weight_mode)) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+ error->message = "WFQ should be byte mode";
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+
+ /* for TC or queue node */
+ /* check the unsupported parameters */
+ if (params->leaf.cman) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+ error->message = "Congestion management not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.wred_profile_id !=
+ RTE_TM_WRED_PROFILE_ID_NONE) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.shared_wred_context_id) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+ if (params->leaf.wred.n_shared_wred_contexts) {
+ error->type =
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+ error->message = "WRED not supported";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * Now the TC and queue configuration is controlled by DCB.
+ * We need to check whether the node configuration follows the DCB configuration.
+ * In the future, we may use TM to cover DCB.
+ */
+static int
+ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+ uint32_t parent_node_id, uint32_t priority,
+ uint32_t weight, uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ enum ixgbe_tm_node_type parent_node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_shaper_profile *shaper_profile;
+ struct ixgbe_tm_node *tm_node;
+ struct ixgbe_tm_node *parent_node;
+ uint8_t nb_tcs;
+ uint16_t q_base = 0;
+ uint16_t q_nb = 0;
+ int ret;
+
+ if (!params || !error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (tm_conf->committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ ret = ixgbe_node_param_check(node_id, parent_node_id, priority, weight,
+ params, error);
+ if (ret)
+ return ret;
+
+ /* check if the node ID is already used */
+ if (ixgbe_tm_node_search(dev, node_id, &node_type)) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "node id already used";
+ return -EINVAL;
+ }
+
+ /* check the shaper profile id */
+ shaper_profile = ixgbe_shaper_profile_search(dev,
+ params->shaper_profile_id);
+ if (!shaper_profile) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+ error->message = "shaper profile does not exist";
+ return -EINVAL;
+ }
+
+ /* root node if it doesn't have a parent */
+ if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id > IXGBE_TM_NODE_TYPE_PORT) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* obviously no more than one root */
+ if (tm_conf->root) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "already have a root";
+ return -EINVAL;
+ }
+
+ /* add the root node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->no = 0;
+ tm_node->parent = NULL;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ tm_conf->root = tm_node;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ /* check the parent node */
+ parent_node = ixgbe_tm_node_search(dev, parent_node_id,
+ &parent_node_type);
+ if (!parent_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent does not exist";
+ return -EINVAL;
+ }
+ if (parent_node_type != IXGBE_TM_NODE_TYPE_PORT &&
+ parent_node_type != IXGBE_TM_NODE_TYPE_TC) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+ error->message = "parent is not port or TC";
+ return -EINVAL;
+ }
+ /* check level */
+ if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+ level_id != parent_node_type + 1) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+ error->message = "Wrong level";
+ return -EINVAL;
+ }
+
+ /* check the node number */
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ /* check TC number */
+ nb_tcs = ixgbe_tc_nb_get(dev);
+ if (tm_conf->nb_tc_node >= nb_tcs) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many TCs";
+ return -EINVAL;
+ }
+ } else {
+ /* check queue number */
+ if (tm_conf->nb_queue_node >= dev->data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too many queues";
+ return -EINVAL;
+ }
+
+ ixgbe_queue_base_nb_get(dev, parent_node->no, &q_base, &q_nb);
+ if (parent_node->reference_count >= q_nb) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "more queues than the TC supports";
+ return -EINVAL;
+ }
+
+ /**
+ * check the node id.
+ * For queue, the node id means queue id.
+ */
+ if (node_id >= dev->data->nb_tx_queues) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "too large queue id";
+ return -EINVAL;
+ }
+ }
+
+ /* add the TC or queue node */
+ tm_node = rte_zmalloc("ixgbe_tm_node",
+ sizeof(struct ixgbe_tm_node),
+ 0);
+ if (!tm_node)
+ return -ENOMEM;
+ tm_node->id = node_id;
+ tm_node->priority = priority;
+ tm_node->weight = weight;
+ tm_node->reference_count = 0;
+ tm_node->parent = parent_node;
+ tm_node->shaper_profile = shaper_profile;
+ (void)rte_memcpy(&tm_node->params, params,
+ sizeof(struct rte_tm_node_params));
+ if (parent_node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ tm_node->no = parent_node->reference_count;
+ TAILQ_INSERT_TAIL(&tm_conf->tc_list,
+ tm_node, node);
+ tm_conf->nb_tc_node++;
+ } else {
+ tm_node->no = q_base + parent_node->reference_count;
+ TAILQ_INSERT_TAIL(&tm_conf->queue_list,
+ tm_node, node);
+ tm_conf->nb_queue_node++;
+ }
+ tm_node->parent->reference_count++;
+
+ /* increase the reference counter of the shaper profile */
+ shaper_profile->reference_count++;
+
+ return 0;
+}
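For context, the sketch below shows how an application could drive this node_add callback through the generic rte_tm API: one shaper profile, a port (root) node, a TC node under it and a queue node under the TC, in the order the checks above expect. The IDs are illustrative, error handling is trimmed, and the port_id width follows the rte_tm proposal this series depends on, so the exact prototypes may differ between DPDK releases.

#include <rte_tm.h>

#define SP_ID     1    /* illustrative shaper profile id */
#define ROOT_ID   1000 /* illustrative port-level node id */
#define TC0_ID    2000 /* illustrative TC-level node id */
#define QUEUE0_ID 0    /* leaf node id; for this driver it is the Tx queue id */

static int
setup_tm_hierarchy(uint16_t port_id)
{
	struct rte_tm_error error;
	/* peak.rate == 0 means no rate limit on this profile */
	struct rte_tm_shaper_params sp = { .peak = { .rate = 0, .size = 0 } };
	struct rte_tm_node_params np = { .shaper_profile_id = SP_ID };
	int ret;

	/* the profile must exist before any node references it */
	ret = rte_tm_shaper_profile_add(port_id, SP_ID, &sp, &error);
	if (ret)
		return ret;

	/* root (port) node: parent is RTE_TM_NODE_ID_NULL;
	 * priority 0 and weight 1 are the values this driver accepts
	 */
	ret = rte_tm_node_add(port_id, ROOT_ID, RTE_TM_NODE_ID_NULL, 0, 1,
			      RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
	if (ret)
		return ret;

	/* one TC node under the port node */
	ret = rte_tm_node_add(port_id, TC0_ID, ROOT_ID, 0, 1,
			      RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
	if (ret)
		return ret;

	/* one queue node under the TC */
	return rte_tm_node_add(port_id, QUEUE0_ID, TC0_ID, 0, 1,
			       RTE_TM_NODE_LEVEL_ID_ANY, &np, &error);
}

The per-level limits enforced above (one root, at most nb_tcs TC nodes, and no more queue nodes per TC than the TC owns) are exactly what such an application-side sequence has to respect.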
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 16/20] net/ixgbe: support deleting TM node
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (14 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 15/20] net/ixgbe: support adding TM node Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
` (4 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_delete.
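A minimal usage sketch from the application side, assuming the generic rte_tm wrappers from the dependent ethdev series (illustrative node id; the port_id width may differ between releases):

#include <stdio.h>
#include <rte_tm.h>

#define QUEUE0_ID 0 /* illustrative leaf node id added earlier with rte_tm_node_add() */

static int
remove_queue_node(uint16_t port_id)
{
	struct rte_tm_error error;
	int ret;

	/* children must be deleted before their parent, and nothing can be
	 * deleted once the hierarchy has been committed
	 */
	ret = rte_tm_node_delete(port_id, QUEUE0_ID, &error);
	if (ret)
		printf("node delete failed: %s\n",
		       error.message ? error.message : "unknown");
	return ret;
}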
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 67 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index bec9015..ee0f639 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -50,12 +50,15 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
uint32_t weight, uint32_t level_id,
struct rte_tm_node_params *params,
struct rte_tm_error *error);
+static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
.shaper_profile_add = ixgbe_shaper_profile_add,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
+ .node_delete = ixgbe_node_delete,
};
int
@@ -733,3 +736,67 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!error)
+ return -EINVAL;
+
+ /* if already committed */
+ if (tm_conf->committed) {
+ error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+ error->message = "already committed";
+ return -EINVAL;
+ }
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ /* the node should have no children */
+ if (tm_node->reference_count) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message =
+ "cannot delete a node which has children";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT) {
+ tm_node->shaper_profile->reference_count--;
+ rte_free(tm_node);
+ tm_conf->root = NULL;
+ return 0;
+ }
+
+ /* TC or queue node */
+ tm_node->shaper_profile->reference_count--;
+ tm_node->parent->reference_count--;
+ if (node_type == IXGBE_TM_NODE_TYPE_TC) {
+ TAILQ_REMOVE(&tm_conf->tc_list, tm_node, node);
+ tm_conf->nb_tc_node--;
+ } else {
+ TAILQ_REMOVE(&tm_conf->queue_list, tm_node, node);
+ tm_conf->nb_queue_node--;
+ }
+ rte_free(tm_node);
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 17/20] net/ixgbe: support getting TM node type
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (15 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 16/20] net/ixgbe: support deleting " Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
` (3 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_type_get.
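A small sketch of how an application could query the node type through the generic API (hypothetical helper; port_id width as in the rte_tm proposal):

#include <stdio.h>
#include <rte_tm.h>

static int
print_node_type(uint16_t port_id, uint32_t node_id)
{
	struct rte_tm_error error;
	int is_leaf = 0;
	int ret;

	ret = rte_tm_node_type_get(port_id, node_id, &is_leaf, &error);
	if (ret)
		return ret;

	/* in this driver queue nodes are the leaves; port and TC nodes are not */
	printf("node %u is a %s node\n", node_id, is_leaf ? "leaf" : "non-leaf");
	return 0;
}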
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ee0f639..df5672c 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -52,6 +52,8 @@ static int ixgbe_node_add(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
+static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -59,6 +61,7 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
.shaper_profile_delete = ixgbe_shaper_profile_del,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
+ .node_type_get = ixgbe_node_type_get,
};
int
@@ -800,3 +803,35 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
+ int *is_leaf, struct rte_tm_error *error)
+{
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!is_leaf || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ if (node_type == IXGBE_TM_NODE_TYPE_QUEUE)
+ *is_leaf = true;
+ else
+ *is_leaf = false;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 18/20] net/ixgbe: support getting TM level capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (16 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 17/20] net/ixgbe: support getting TM node type Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
` (2 subsequent siblings)
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_level_capabilities_get.
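A hedged usage sketch (hypothetical helper; in this implementation level 0 is the port, level 1 the TC and level 2 the queue level):

#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

static int
dump_level_caps(uint16_t port_id, uint32_t level_id)
{
	struct rte_tm_level_capabilities cap;
	struct rte_tm_error error;
	int ret;

	memset(&cap, 0, sizeof(cap));
	ret = rte_tm_level_capabilities_get(port_id, level_id, &cap, &error);
	if (ret)
		return ret;

	printf("level %u: up to %u nodes (%u non-leaf, %u leaf)\n",
	       level_id, cap.n_nodes_max, cap.n_nodes_nonleaf_max,
	       cap.n_nodes_leaf_max);
	return 0;
}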
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 74 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index df5672c..ddf3f5e 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -54,6 +54,10 @@ static int ixgbe_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
struct rte_tm_error *error);
static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
int *is_leaf, struct rte_tm_error *error);
+static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -62,6 +66,7 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
.node_add = ixgbe_node_add,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
+ .level_capabilities_get = ixgbe_level_capabilities_get,
};
int
@@ -835,3 +840,72 @@ static int ixgbe_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
return 0;
}
+
+static int
+ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (level_id >= IXGBE_TM_NODE_TYPE_MAX) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->message = "too deep level";
+ return -EINVAL;
+ }
+
+ /* root node */
+ if (level_id == IXGBE_TM_NODE_TYPE_PORT) {
+ cap->n_nodes_max = 1;
+ cap->n_nodes_nonleaf_max = 1;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = true;
+ cap->leaf_nodes_identical = true;
+ cap->nonleaf.shaper_private_supported = true;
+ cap->nonleaf.shaper_private_dual_rate_supported = false;
+ cap->nonleaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->nonleaf.shaper_private_rate_max = 1250000000ull;
+ cap->nonleaf.shaper_shared_n_max = 0;
+ cap->nonleaf.sched_n_children_max = IXGBE_DCB_MAX_TRAFFIC_CLASS;
+ cap->nonleaf.sched_sp_n_priorities_max = 1;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 1;
+ cap->nonleaf.stats_mask = 0;
+
+ return 0;
+ }
+
+ /* TC or queue node */
+ if (level_id == IXGBE_TM_NODE_TYPE_TC) {
+ /* TC */
+ cap->n_nodes_max = IXGBE_DCB_MAX_TRAFFIC_CLASS;
+ cap->n_nodes_nonleaf_max = IXGBE_DCB_MAX_TRAFFIC_CLASS;
+ cap->n_nodes_leaf_max = 0;
+ cap->non_leaf_nodes_identical = true;
+ } else {
+ /* queue */
+ cap->n_nodes_max = hw->mac.max_tx_queues;
+ cap->n_nodes_nonleaf_max = 0;
+ cap->n_nodes_leaf_max = hw->mac.max_tx_queues;
+ cap->non_leaf_nodes_identical = true;
+ }
+ cap->leaf_nodes_identical = true;
+ cap->leaf.shaper_private_supported = true;
+ cap->leaf.shaper_private_dual_rate_supported = false;
+ cap->leaf.shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->leaf.shaper_private_rate_max = 1250000000ull;
+ cap->leaf.shaper_shared_n_max = 0;
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = true;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ cap->leaf.stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 19/20] net/ixgbe: support getting TM node capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (17 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 18/20] net/ixgbe: support getting TM level capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
2017-07-04 15:11 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Dumitrescu, Cristian
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_node_capabilities_get.
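A minimal sketch of querying a node's capabilities from the application side (hypothetical helper; port_id width as in the rte_tm proposal):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_tm.h>

static int
dump_node_caps(uint16_t port_id, uint32_t node_id)
{
	struct rte_tm_node_capabilities cap;
	struct rte_tm_error error;
	int ret;

	memset(&cap, 0, sizeof(cap));
	ret = rte_tm_node_capabilities_get(port_id, node_id, &cap, &error);
	if (ret)
		return ret;

	/* shaper rates are reported in bytes per second */
	printf("node %u: private shaper %ssupported, max rate %" PRIu64 " Bps\n",
	       node_id, cap.shaper_private_supported ? "" : "not ",
	       cap.shaper_private_rate_max);
	return 0;
}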
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_tm.c | 61 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index ddf3f5e..e9dce46 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -58,6 +58,10 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
uint32_t level_id,
struct rte_tm_level_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -67,6 +71,7 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
.node_delete = ixgbe_node_delete,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
+ .node_capabilities_get = ixgbe_node_capabilities_get,
};
int
@@ -909,3 +914,59 @@ static int ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ enum ixgbe_tm_node_type node_type = IXGBE_TM_NODE_TYPE_MAX;
+ struct ixgbe_tm_node *tm_node;
+
+ if (!cap || !error)
+ return -EINVAL;
+
+ if (node_id == RTE_TM_NODE_ID_NULL) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "invalid node id";
+ return -EINVAL;
+ }
+
+ /* check if the node id exists */
+ tm_node = ixgbe_tm_node_search(dev, node_id, &node_type);
+ if (!tm_node) {
+ error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+ error->message = "no such node";
+ return -EINVAL;
+ }
+
+ cap->shaper_private_supported = true;
+ cap->shaper_private_dual_rate_supported = false;
+ cap->shaper_private_rate_min = 0;
+ /* 10Gbps -> 1.25GBps */
+ cap->shaper_private_rate_max = 1250000000ull;
+ cap->shaper_shared_n_max = 0;
+
+ if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) {
+ cap->leaf.cman_head_drop_supported = false;
+ cap->leaf.cman_wred_context_private_supported = true;
+ cap->leaf.cman_wred_context_shared_n_max = 0;
+ } else {
+ if (node_type == IXGBE_TM_NODE_TYPE_PORT)
+ cap->nonleaf.sched_n_children_max =
+ IXGBE_DCB_MAX_TRAFFIC_CLASS;
+ else
+ cap->nonleaf.sched_n_children_max =
+ hw->mac.max_tx_queues;
+ cap->nonleaf.sched_sp_n_priorities_max = 1;
+ cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
+ cap->nonleaf.sched_wfq_n_groups_max = 0;
+ cap->nonleaf.sched_wfq_weight_max = 1;
+ }
+
+ cap->stats_mask = 0;
+
+ return 0;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 20/20] net/ixgbe: support committing TM hierarchy
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (18 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 19/20] net/ixgbe: support getting TM node capability Wenzhuo Lu
@ 2017-06-29 4:23 ` Wenzhuo Lu
2017-07-04 15:11 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Dumitrescu, Cristian
20 siblings, 0 replies; 68+ messages in thread
From: Wenzhuo Lu @ 2017-06-29 4:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, jasvinder.singh, Wenzhuo Lu
Add support for the Traffic Management API
rte_tm_hierarchy_commit.
When this API is called, the driver programs the
TM configuration into the hardware.
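A minimal sketch of the commit call from the application side, assuming the generic rte_tm wrapper (the warning added to dev_start() below nudges applications to do this before starting the port):

#include <stdio.h>
#include <rte_tm.h>

static int
commit_tm_hierarchy(uint16_t port_id)
{
	struct rte_tm_error error;
	int ret;

	/* clear_on_fail = 1 asks the driver to drop the whole TM
	 * configuration if programming the hardware fails
	 */
	ret = rte_tm_hierarchy_commit(port_id, 1, &error);
	if (ret)
		printf("hierarchy commit failed: %s\n",
		       error.message ? error.message : "unknown");
	return ret;
}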
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 15 ++++++---
drivers/net/ixgbe/ixgbe_ethdev.h | 2 ++
drivers/net/ixgbe/ixgbe_tm.c | 69 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 81 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 377f8e6..b8b4d43 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -302,9 +302,6 @@ static void ixgbe_set_ivar_map(struct ixgbe_hw *hw, int8_t direction,
uint8_t queue, uint8_t msix_vector);
static void ixgbe_configure_msix(struct rte_eth_dev *dev);
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate);
-
static int ixgbevf_add_mac_addr(struct rte_eth_dev *dev,
struct ether_addr *mac_addr,
uint32_t index, uint32_t pool);
@@ -2512,6 +2509,8 @@ static int eth_ixgbevf_pci_remove(struct rte_pci_device *pci_dev)
int status;
uint16_t vf, idx;
uint32_t *link_speeds;
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
PMD_INIT_FUNC_TRACE();
@@ -2702,6 +2701,11 @@ static int eth_ixgbevf_pci_remove(struct rte_pci_device *pci_dev)
ixgbe_l2_tunnel_conf(dev);
ixgbe_filter_restore(dev);
+ if (!tm_conf->committed)
+ PMD_DRV_LOG(WARNING,
+ "please call hierarchy_commit() "
+ "before starting the port");
+
return 0;
error:
@@ -5610,8 +5614,9 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
IXGBE_WRITE_REG(hw, IXGBE_EIAC, mask);
}
-static int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
- uint16_t queue_idx, uint16_t tx_rate)
+int
+ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
+ uint16_t queue_idx, uint16_t tx_rate)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t rf_dec, rf_int;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 67d2bdc..284dca8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -741,6 +741,8 @@ int ixgbe_set_vf_rate_limit(struct rte_eth_dev *dev, uint16_t vf,
int ixgbe_tm_ops_get(struct rte_eth_dev *dev, void *ops);
void ixgbe_tm_conf_init(struct rte_eth_dev *dev);
void ixgbe_tm_conf_uninit(struct rte_eth_dev *dev);
+int ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t tx_rate);
static inline int
ixgbe_ethertype_filter_lookup(struct ixgbe_filter_info *filter_info,
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index e9dce46..c790b59 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -62,6 +62,9 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
uint32_t node_id,
struct rte_tm_node_capabilities *cap,
struct rte_tm_error *error);
+static int ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
const struct rte_tm_ops ixgbe_tm_ops = {
.capabilities_get = ixgbe_tm_capabilities_get,
@@ -72,6 +75,7 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
.node_type_get = ixgbe_node_type_get,
.level_capabilities_get = ixgbe_level_capabilities_get,
.node_capabilities_get = ixgbe_node_capabilities_get,
+ .hierarchy_commit = ixgbe_hierarchy_commit,
};
int
@@ -970,3 +974,68 @@ static int ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+ixgbe_hierarchy_commit(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct ixgbe_tm_conf *tm_conf =
+ IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
+ struct ixgbe_tm_node *tm_node;
+ uint64_t bw;
+ int ret;
+
+ if (!error)
+ return -EINVAL;
+
+ /* check the setting */
+ if (!tm_conf->root)
+ goto done;
+
+ /* port max bandwidth not supported yet */
+ if (tm_conf->root->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no port max bandwidth";
+ goto fail_clear;
+ }
+
+ /* HW does not support TC max bandwidth */
+ TAILQ_FOREACH(tm_node, &tm_conf->tc_list, node) {
+ if (tm_node->shaper_profile->profile.peak.rate) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message = "no TC max bandwidth";
+ goto fail_clear;
+ }
+ }
+
+ /* queue max bandwidth */
+ TAILQ_FOREACH(tm_node, &tm_conf->queue_list, node) {
+ bw = tm_node->shaper_profile->profile.peak.rate;
+ if (bw) {
+ /* convert bytes per second to Mbps */
+ bw = bw * 8 / 1000 / 1000;
+ ret = ixgbe_set_queue_rate_limit(dev, tm_node->no, bw);
+ if (ret) {
+ error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+ error->message =
+ "failed to set queue max bandwidth";
+ goto fail_clear;
+ }
+ }
+ }
+
+ goto done;
+
+done:
+ tm_conf->committed = true;
+ return 0;
+
+fail_clear:
+ /* clear all the traffic manager configuration */
+ if (clear_on_fail) {
+ ixgbe_tm_conf_uninit(dev);
+ ixgbe_tm_conf_init(dev);
+ }
+ return -EINVAL;
+}
--
1.9.3
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 00/20] traffic manager on i40e and ixgbe Wenzhuo Lu
` (19 preceding siblings ...)
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 20/20] net/ixgbe: support committing TM hierarchy Wenzhuo Lu
@ 2017-07-04 15:11 ` Dumitrescu, Cristian
20 siblings, 0 replies; 68+ messages in thread
From: Dumitrescu, Cristian @ 2017-07-04 15:11 UTC (permalink / raw)
To: Lu, Wenzhuo, dev; +Cc: Singh, Jasvinder
> -----Original Message-----
> From: Lu, Wenzhuo
> Sent: Thursday, June 29, 2017 5:24 AM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Singh, Jasvinder
> <jasvinder.singh@intel.com>; Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [PATCH v3 00/20] traffic manager on i40e and ixgbe
>
> Implement the traffic manager APIs on i40e and ixgbe.
> This patch set is based on the patch set,
> "ethdev: abstraction layer for QoS traffic management"
> http://dpdk.org/dev/patchwork/patch/25275/
> http://dpdk.org/dev/patchwork/patch/25276/
>
> Series Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Series-Acked-by: Cristian Dumitrescu <Cristian.Dumitrescu@intel.com>
>
Series applied to next-tm repo, thanks!
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability
2017-06-29 4:23 ` [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability Wenzhuo Lu
@ 2017-07-09 19:31 ` Thomas Monjalon
2017-07-10 11:17 ` Dumitrescu, Cristian
0 siblings, 1 reply; 68+ messages in thread
From: Thomas Monjalon @ 2017-07-09 19:31 UTC (permalink / raw)
To: Wenzhuo Lu; +Cc: dev, cristian.dumitrescu, jasvinder.singh
29/06/2017 06:23, Wenzhuo Lu:
> +static inline uint16_t
> +i40e_tc_nb_get(struct rte_eth_dev *dev)
> +{
Error with clang 4.0:
drivers/net/i40e/i40e_tm.c:58:1: fatal error:
unused function 'i40e_tc_nb_get'
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM capability
2017-07-09 19:31 ` Thomas Monjalon
@ 2017-07-10 11:17 ` Dumitrescu, Cristian
0 siblings, 0 replies; 68+ messages in thread
From: Dumitrescu, Cristian @ 2017-07-10 11:17 UTC (permalink / raw)
To: Thomas Monjalon, Lu, Wenzhuo; +Cc: dev, Singh, Jasvinder
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Sunday, July 9, 2017 8:31 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>;
> Singh, Jasvinder <jasvinder.singh@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 02/20] net/i40e: support getting TM
> capability
>
> 29/06/2017 06:23, Wenzhuo Lu:
> > +static inline uint16_t
> > +i40e_tc_nb_get(struct rte_eth_dev *dev)
> > +{
>
> Error with clang 4.0:
>
> drivers/net/i40e/i40e_tm.c:58:1: fatal error:
> unused function 'i40e_tc_nb_get'
Thanks for sharing the log.
This function is called in the final code, so maybe this is produced by an intermediate patch? Not sure how I missed this, as I built each patch incrementally.
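For reference, one conventional way to keep such an intermediate patch building under -Werror is to mark the not-yet-called static helper with the compiler's unused attribute and drop the marker once a caller lands. A stand-alone sketch of the attribute, not what was actually applied to i40e_tc_nb_get():

#include <stdint.h>

/* silence clang's -Wunused-function until a later patch adds the first caller */
static inline uint16_t __attribute__((unused))
example_tc_nb_get(uint16_t nb_tc)
{
	return nb_tc ? nb_tc : 1;
}

int
main(void)
{
	return 0;
}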
^ permalink raw reply [flat|nested] 68+ messages in thread