From mboxrd@z Thu Jan  1 00:00:00 1970
From: <jerinj@marvell.com>
To: <dev@dpdk.org>, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
CC: Krzysztof Kanas
Subject: [dpdk-dev] [PATCH v2 21/57] net/octeontx2: introduce traffic manager
Date: Sun, 30 Jun 2019 23:35:33 +0530
Message-ID: <20190630180609.36705-22-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
 <20190630180609.36705-1-jerinj@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Nithin Dabilpuram

Introduce traffic manager infra and default hierarchy creation.

Upon ethdev configure, a default hierarchy is created with one-to-one
mapped tm nodes. This topology will be overridden when the user
explicitly creates and commits a new hierarchy using the rte_tm
interface.
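As background for reviewers: once the driver exposes the rte_tm ops (not part
of this patch), an application would replace the default tree roughly along
the lines below. This is only a sketch against the generic rte_tm API; the
port id, node ids, profile id, rates and the flat root-plus-leaves shape are
arbitrary illustration values, not anything this patch defines.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_tm.h>

/* Sketch only: replace the PMD's default tree with one root node shaped at
 * ~10 Gbit/s and one leaf per Tx queue. All ids and rates are illustrative.
 */
static int
setup_custom_hierarchy(uint16_t port_id, uint16_t nb_txq)
{
        struct rte_tm_shaper_params sp;
        struct rte_tm_node_params np;
        struct rte_tm_error err;
        uint32_t root_id = 1000;        /* any non-leaf node id */
        uint16_t i;
        int rc;

        memset(&sp, 0, sizeof(sp));
        sp.peak.rate = 1250000000;      /* bytes/sec, i.e. 10 Gbit/s */
        sp.peak.size = 256 * 1024;      /* burst size in bytes */
        rc = rte_tm_shaper_profile_add(port_id, 1, &sp, &err);
        if (rc)
                return rc;

        /* Root node, shaped by profile 1 */
        memset(&np, 0, sizeof(np));
        np.shaper_profile_id = 1;
        np.nonleaf.n_sp_priorities = 1;
        rc = rte_tm_node_add(port_id, root_id, RTE_TM_NODE_ID_NULL, 0, 1,
                             RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
        if (rc)
                return rc;

        /* Leaf node ids map to Tx queue ids per the rte_tm spec */
        memset(&np, 0, sizeof(np));
        np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
        for (i = 0; i < nb_txq; i++) {
                rc = rte_tm_node_add(port_id, i, root_id, 0, 1,
                                     RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
                if (rc)
                        return rc;
        }

        /* Committing replaces the default hierarchy built at configure time */
        return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
}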
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Krzysztof Kanas
---
 drivers/net/octeontx2/Makefile      |   1 +
 drivers/net/octeontx2/meson.build   |   1 +
 drivers/net/octeontx2/otx2_ethdev.c |  16 ++
 drivers/net/octeontx2/otx2_ethdev.h |  14 ++
 drivers/net/octeontx2/otx2_tm.c     | 252 ++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_tm.h     |  67 ++++++++
 6 files changed, 351 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_tm.c
 create mode 100644 drivers/net/octeontx2/otx2_tm.h

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index f40561afb..164621087 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -28,6 +28,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+        otx2_tm.c \
         otx2_rss.c \
         otx2_mac.c \
         otx2_link.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 8681a2642..e344d877f 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
 #

 sources = files(
+                'otx2_tm.c',
                 'otx2_rss.c',
                 'otx2_mac.c',
                 'otx2_link.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index c8271b1ab..899865749 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1034,6 +1034,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
                 rc = nix_store_queue_cfg_and_then_release(eth_dev);
                 if (rc)
                         goto fail;
+                otx2_nix_tm_fini(eth_dev);
                 nix_lf_free(dev);
         }

@@ -1067,6 +1068,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
                 goto free_nix_lf;
         }

+        /* Init the default TM scheduler hierarchy */
+        rc = otx2_nix_tm_init_default(eth_dev);
+        if (rc) {
+                otx2_err("Failed to init traffic manager rc=%d", rc);
+                goto free_nix_lf;
+        }
+
         /* Register queue IRQs */
         rc = oxt2_nix_register_queue_irqs(eth_dev);
         if (rc) {
@@ -1369,6 +1377,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
         /* Also sync same MAC address to CGX table */
         otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);

+        /* Initialize the tm data structures */
+        otx2_nix_tm_conf_init(eth_dev);
+
         dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
         dev->rx_offload_capa = nix_get_rx_offload_capa(dev);

@@ -1424,6 +1435,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
         }
         eth_dev->data->nb_rx_queues = 0;

+        /* Free tm resources */
+        rc = otx2_nix_tm_fini(eth_dev);
+        if (rc)
+                otx2_err("Failed to cleanup tm, rc=%d", rc);
+
         /* Unregister queue irqs */
         oxt2_nix_unregister_queue_irqs(eth_dev);

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 4e06b7111..9f73bf89b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -19,6 +19,7 @@
 #include "otx2_irq.h"
 #include "otx2_mempool.h"
 #include "otx2_rx.h"
+#include "otx2_tm.h"
 #include "otx2_tx.h"

 #define OTX2_ETH_DEV_PMD_VERSION "1.0"
@@ -201,6 +202,19 @@ struct otx2_eth_dev {
         uint64_t rx_offload_capa;
         uint64_t tx_offload_capa;
         struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
+        uint16_t txschq[NIX_TXSCH_LVL_CNT];
+        uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
+        uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
+        uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
+        /* Dis-contiguous queues */
+        uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+        /* Contiguous queues */
+        uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+        uint16_t otx2_tm_root_lvl;
+        uint16_t tm_flags;
+        uint16_t tm_leaf_cnt;
+        struct otx2_nix_tm_node_list node_list;
+        struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
         struct otx2_rss_info rss_info;
         uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
         uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
new file mode 100644
index 000000000..bc0474242
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_malloc.h>
+
+#include "otx2_ethdev.h"
+#include "otx2_tm.h"
+
+/* Use last LVL_CNT nodes as default nodes */
+#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
+
+enum otx2_tm_node_level {
+        OTX2_TM_LVL_ROOT = 0,
+        OTX2_TM_LVL_SCH1,
+        OTX2_TM_LVL_SCH2,
+        OTX2_TM_LVL_SCH3,
+        OTX2_TM_LVL_SCH4,
+        OTX2_TM_LVL_QUEUE,
+        OTX2_TM_LVL_MAX,
+};
+
+static bool
+nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
+{
+        bool is_lbk = otx2_dev_is_lbk(dev);
+        return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) &&
+                !is_lbk && !dev->maxvf;
+}
+
+static struct otx2_nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
+{
+        struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
+
+        TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
+                if (tm_shaper_profile->shaper_profile_id == shaper_id)
+                        return tm_shaper_profile;
+        }
+        return NULL;
+}
+
+static struct otx2_nix_tm_node *
+nix_tm_node_search(struct otx2_eth_dev *dev,
+                   uint32_t node_id, bool user)
+{
+        struct otx2_nix_tm_node *tm_node;
+
+        TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+                if (tm_node->id == node_id &&
+                    (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
+                        return tm_node;
+        }
+        return NULL;
+}
+
+static int
+nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
+                        uint32_t parent_node_id, uint32_t priority,
+                        uint32_t weight, uint16_t hw_lvl_id,
+                        uint16_t level_id, bool user,
+                        struct rte_tm_node_params *params)
+{
+        struct otx2_nix_tm_shaper_profile *shaper_profile;
+        struct otx2_nix_tm_node *tm_node, *parent_node;
+        uint32_t shaper_profile_id;
+
+        shaper_profile_id = params->shaper_profile_id;
+        shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id);
+
+        parent_node = nix_tm_node_search(dev, parent_node_id, user);
+
+        tm_node = rte_zmalloc("otx2_nix_tm_node",
+                              sizeof(struct otx2_nix_tm_node), 0);
+        if (!tm_node)
+                return -ENOMEM;
+
+        tm_node->level_id = level_id;
+        tm_node->hw_lvl_id = hw_lvl_id;
+
+        tm_node->id = node_id;
+        tm_node->priority = priority;
+        tm_node->weight = weight;
+        tm_node->rr_prio = 0xf;
+        tm_node->max_prio = UINT32_MAX;
+        tm_node->hw_id = UINT32_MAX;
+        tm_node->flags = 0;
+        if (user)
+                tm_node->flags = NIX_TM_NODE_USER;
+        rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
+
+        if (shaper_profile)
+                shaper_profile->reference_count++;
+        tm_node->parent = parent_node;
+        tm_node->parent_hw_id = UINT32_MAX;
+
+        TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
+
+        return 0;
+}
+
+static int
+nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
+{
+        struct otx2_nix_tm_shaper_profile *shaper_profile;
+
+        while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
+                if (shaper_profile->reference_count)
+                        otx2_tm_dbg("Shaper profile %u has non zero references",
+                                    shaper_profile->shaper_profile_id);
+                TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
+                rte_free(shaper_profile);
+        }
+
+        return 0;
+}
+
+static int
+nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
+{
+        struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+        uint32_t def = eth_dev->data->nb_tx_queues;
+        struct rte_tm_node_params params;
+        uint32_t leaf_parent, i;
+        int rc = 0;
+
+        /* Default params */
+        memset(&params, 0, sizeof(params));
+        params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
+
+        if (nix_tm_have_tl1_access(dev)) {
+                dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
+                rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL1,
+                                             OTX2_TM_LVL_ROOT, false, &params);
+                if (rc)
+                        goto exit;
+                rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL2,
+                                             OTX2_TM_LVL_SCH1, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL3,
+                                             OTX2_TM_LVL_SCH2, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL4,
+                                             OTX2_TM_LVL_SCH3, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_SMQ,
+                                             OTX2_TM_LVL_SCH4, false, &params);
+                if (rc)
+                        goto exit;
+
+                leaf_parent = def + 4;
+        } else {
+                dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
+                rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL2,
+                                             OTX2_TM_LVL_ROOT, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL3,
+                                             OTX2_TM_LVL_SCH1, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_TL4,
+                                             OTX2_TM_LVL_SCH2, false, &params);
+                if (rc)
+                        goto exit;
+
+                rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_SMQ,
+                                             OTX2_TM_LVL_SCH3, false, &params);
+                if (rc)
+                        goto exit;
+
+                leaf_parent = def + 3;
+        }
+
+        /* Add leaf nodes */
+        for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+                rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
+                                             DEFAULT_RR_WEIGHT,
+                                             NIX_TXSCH_LVL_CNT,
+                                             OTX2_TM_LVL_QUEUE, false, &params);
+                if (rc)
+                        break;
+        }
+
+exit:
+        return rc;
+}
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
+{
+        struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+        TAILQ_INIT(&dev->node_list);
+        TAILQ_INIT(&dev->shaper_profile_list);
+}
+
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
+{
+        struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+        uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
+        int rc;
+
+        /* Clear shaper profiles */
+        nix_tm_clear_shaper_profiles(dev);
+        dev->tm_flags = NIX_TM_DEFAULT_TREE;
+
+        rc = nix_tm_prepare_default_tree(eth_dev);
+        if (rc != 0)
+                return rc;
+
+        dev->tm_leaf_cnt = sq_cnt;
+
+        return 0;
+}
+
+int
+otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
+{
+        struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+        /* Clear shaper profiles */
+        nix_tm_clear_shaper_profiles(dev);
+
+        dev->tm_flags = 0;
+        return 0;
+}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
new file mode 100644
index 000000000..94023fa99
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tm.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TM_H__
+#define __OTX2_TM_H__
+
+#include <stdbool.h>
+
+#include <rte_tm_driver.h>
+
+#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
+
+struct otx2_eth_dev;
+
+void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
+int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
+
+struct otx2_nix_tm_node {
+        TAILQ_ENTRY(otx2_nix_tm_node) node;
+        uint32_t id;
+        uint32_t hw_id;
+        uint32_t priority;
+        uint32_t weight;
+        uint16_t level_id;
+        uint16_t hw_lvl_id;
+        uint32_t rr_prio;
+        uint32_t rr_num;
+        uint32_t max_prio;
+        uint32_t parent_hw_id;
+        uint32_t flags;
+#define NIX_TM_NODE_HWRES   BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED BIT_ULL(1)
+#define NIX_TM_NODE_USER    BIT_ULL(2)
+        struct otx2_nix_tm_node *parent;
+        struct rte_tm_node_params params;
+};
+
+struct otx2_nix_tm_shaper_profile {
+        TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
+        uint32_t shaper_profile_id;
+        uint32_t reference_count;
+        struct rte_tm_shaper_params profile;
+};
+
+struct shaper_params {
+        uint64_t burst_exponent;
+        uint64_t burst_mantissa;
+        uint64_t div_exp;
+        uint64_t exponent;
+        uint64_t mantissa;
+        uint64_t burst;
+        uint64_t rate;
+};
+
+TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
+TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
+
+#define MAX_SCHED_WEIGHT ((uint8_t)~0)
+#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1)
+
+/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
+/* = NIX_MAX_HW_MTU */
+#define DEFAULT_RR_WEIGHT 71
+
+#endif /* __OTX2_TM_H__ */
-- 
2.21.0
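
For reference (derived from nix_tm_prepare_default_tree() above, not new
behaviour): for a PF with TL1 access and N = nb_tx_queues, the default
hierarchy is a single chain of scheduler nodes with one leaf per Tx queue,
and node ids are assigned as follows:

  id N         TL1  (OTX2_TM_LVL_ROOT)
  id N + 1     TL2  (OTX2_TM_LVL_SCH1)
  id N + 2     TL3  (OTX2_TM_LVL_SCH2)
  id N + 3     TL4  (OTX2_TM_LVL_SCH3)
  id N + 4     SMQ  (OTX2_TM_LVL_SCH4)
  ids 0..N-1   leaf nodes (OTX2_TM_LVL_QUEUE), one per Tx queue

When TL1 access is not available (per nix_tm_have_tl1_access(): LBK ports,
A0 silicon, non-PF devices, or a PF with VFs configured), the chain starts
at TL2 and the SMQ node takes id N + 3.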