From: Nithin Dabilpuram
To: Beilei Xing, Qi Zhang, Rosen Xu, Wenzhuo Lu, Konstantin Ananyev,
	Tomasz Duszynski, Liron Himi, Jasvinder Singh, Cristian Dumitrescu
Cc: dev@dpdk.org, jerinj@marvell.com, kkanas@marvell.com, Nithin Dabilpuram
Date: Sat, 11 Apr 2020 17:14:28 +0530
Message-Id: <20200411114430.18506-2-nithind1988@gmail.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200411114430.18506-1-nithind1988@gmail.com>
References: <20200330160019.29674-1-ndabilpuram@marvell.com>
	<20200411114430.18506-1-nithind1988@gmail.com>
Subject: [dpdk-dev] [PATCH v2 2/4] drivers/net: update tm capability for existing pmds

From: Nithin Dabilpuram

Since the existing PMDs support shaper byte mode and scheduler WFQ byte
mode, report the same in their newly added port/level/node capability
fields.

Signed-off-by: Nithin Dabilpuram
---
v1..v2:
- Newly included patch to change existing PMDs with TM byte-mode support
  to report the same in their port/level/node capabilities.
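For reviewers, a minimal sketch (not part of this patch) of how an
application can read the byte-mode bits reported below through the
rte_tm API; the port id passed in is only an illustrative assumption:

#include <string.h>
#include <stdio.h>

#include <rte_tm.h>

/* Query port-level TM capabilities and print the new mode bits. */
static int
print_tm_byte_mode(uint16_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error error;
	int ret;

	memset(&cap, 0, sizeof(cap));
	ret = rte_tm_capabilities_get(port_id, &cap, &error);
	if (ret != 0)
		return ret;

	/* Byte mode: shaper rates are bytes/sec, WFQ weights apply to bytes. */
	printf("private shaper: byte=%d pkt=%d\n",
	       cap.shaper_private_byte_mode_supported,
	       cap.shaper_private_packet_mode_supported);
	printf("wfq sched: byte=%d pkt=%d\n",
	       cap.sched_wfq_byte_mode_supported,
	       cap.sched_wfq_packet_mode_supported);
	return 0;
}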
 drivers/net/i40e/i40e_tm.c               | 16 ++++++++++++
 drivers/net/ipn3ke/ipn3ke_tm.c           | 26 ++++++++++++++++++
 drivers/net/ixgbe/ixgbe_tm.c             | 16 ++++++++++++
 drivers/net/mvpp2/mrvl_tm.c              | 14 ++++++++++
 drivers/net/softnic/rte_eth_softnic_tm.c | 45 ++++++++++++++++++++++++++++++++
 5 files changed, 117 insertions(+)

diff --git a/drivers/net/i40e/i40e_tm.c b/drivers/net/i40e/i40e_tm.c
index c76760c..ab272e9 100644
--- a/drivers/net/i40e/i40e_tm.c
+++ b/drivers/net/i40e/i40e_tm.c
@@ -160,12 +160,16 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->func_caps.num_tx_qp;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -179,6 +183,8 @@ i40e_tm_capabilities_get(struct rte_eth_dev *dev,
 	 * So, all the nodes should have the same weight.
 	 */
 	cap->sched_wfq_weight_max = 1;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	cap->cman_head_drop_supported = 0;
 	cap->dynamic_update_mask = 0;
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD;
@@ -754,6 +760,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 40Gbps -> 5GBps */
 		cap->nonleaf.shaper_private_rate_max = 5000000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 		if (level_id == I40E_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
@@ -765,6 +773,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;
 
 		return 0;
@@ -776,6 +786,8 @@ i40e_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->leaf.shaper_private_rate_max = 5000000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
@@ -817,6 +829,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 40Gbps -> 5GBps */
 	cap->shaper_private_rate_max = 5000000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 
 	if (node_type == I40E_TM_NODE_TYPE_QUEUE) {
@@ -834,6 +848,8 @@ i40e_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}
 
 	cap->stats_mask = 0;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 5a16c5f..35c90b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -440,6 +440,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_private_dual_rate_n_max = 0;
 	cap->shaper_private_rate_min = 1;
 	cap->shaper_private_rate_max = 1 + IPN3KE_TM_VT_NODE_NUM;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
@@ -447,6 +449,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 
 	cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
 	cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS;
@@ -456,6 +460,8 @@ ipn3ke_tm_capabilities_get(__rte_unused struct rte_eth_dev *dev,
 	cap->sched_wfq_n_children_per_group_max = UINT32_MAX;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = UINT32_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;
 
 	cap->cman_wred_packet_mode_supported = 0;
 	cap->cman_wred_byte_mode_supported = 0;
@@ -517,6 +523,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
@@ -524,6 +532,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -539,6 +549,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_dual_rate_supported = 0;
 		cap->nonleaf.shaper_private_rate_min = 1;
 		cap->nonleaf.shaper_private_rate_max = UINT32_MAX;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
@@ -546,6 +558,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 0;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->nonleaf.stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -561,6 +575,8 @@ ipn3ke_tm_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_dual_rate_supported = 0;
 		cap->leaf.shaper_private_rate_min = 0;
 		cap->leaf.shaper_private_rate_max = 0;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;
 		cap->leaf.shaper_shared_n_max = 0;
 
 		cap->leaf.cman_head_drop_supported = 0;
@@ -632,6 +648,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->shaper_private_dual_rate_supported = 0;
 		cap->shaper_private_rate_min = 1;
 		cap->shaper_private_rate_max = UINT32_MAX;
+		cap->shaper_private_packet_mode_supported = 0;
+		cap->shaper_private_byte_mode_supported = 1;
 		cap->shaper_shared_n_max = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_VT_NODE_NUM;
@@ -640,6 +658,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 			IPN3KE_TM_VT_NODE_NUM;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -649,6 +669,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->shaper_private_dual_rate_supported = 0;
 		cap->shaper_private_rate_min = 1;
 		cap->shaper_private_rate_max = UINT32_MAX;
+		cap->shaper_private_packet_mode_supported = 0;
+		cap->shaper_private_byte_mode_supported = 1;
 		cap->shaper_shared_n_max = 0;
 
 		cap->nonleaf.sched_n_children_max = IPN3KE_TM_COS_NODE_NUM;
@@ -657,6 +679,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 			IPN3KE_TM_COS_NODE_NUM;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 
 		cap->stats_mask = STATS_MASK_DEFAULT;
 		break;
@@ -666,6 +690,8 @@ ipn3ke_tm_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->shaper_private_dual_rate_supported = 0;
 		cap->shaper_private_rate_min = 0;
 		cap->shaper_private_rate_max = 0;
+		cap->shaper_private_packet_mode_supported = 0;
+		cap->shaper_private_byte_mode_supported = 0;
 		cap->shaper_shared_n_max = 0;
 
 		cap->leaf.cman_head_drop_supported = 0;
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index 73845a7..c067109 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -168,12 +168,16 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->shaper_private_rate_max = 1250000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 	cap->shaper_shared_n_nodes_per_shaper_max = 0;
 	cap->shaper_shared_n_shapers_per_node_max = 0;
 	cap->shaper_shared_dual_rate_n_max = 0;
 	cap->shaper_shared_rate_min = 0;
 	cap->shaper_shared_rate_max = 0;
+	cap->shaper_shared_packet_mode_supported = 0;
+	cap->shaper_shared_byte_mode_supported = 0;
 	cap->sched_n_children_max = hw->mac.max_tx_queues;
 	/**
 	 * HW supports SP. But no plan to support it now.
@@ -182,6 +186,8 @@ ixgbe_tm_capabilities_get(struct rte_eth_dev *dev,
 	cap->sched_sp_n_priorities_max = 1;
 	cap->sched_wfq_n_children_per_group_max = 0;
 	cap->sched_wfq_n_groups_max = 0;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 0;
 	/**
 	 * SW only supports fair round robin now.
 	 * So, all the nodes should have the same weight.
@@ -875,6 +881,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_rate_min = 0;
 		/* 10Gbps -> 1.25GBps */
 		cap->nonleaf.shaper_private_rate_max = 1250000000ull;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 		cap->nonleaf.shaper_shared_n_max = 0;
 		if (level_id == IXGBE_TM_NODE_TYPE_PORT)
 			cap->nonleaf.sched_n_children_max =
@@ -886,6 +894,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 		cap->nonleaf.stats_mask = 0;
 
 		return 0;
@@ -897,6 +907,8 @@ ixgbe_level_capabilities_get(struct rte_eth_dev *dev,
 	cap->leaf.shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->leaf.shaper_private_rate_max = 1250000000ull;
+	cap->leaf.shaper_private_packet_mode_supported = 0;
+	cap->leaf.shaper_private_byte_mode_supported = 1;
 	cap->leaf.shaper_shared_n_max = 0;
 	cap->leaf.cman_head_drop_supported = false;
 	cap->leaf.cman_wred_context_private_supported = true;
@@ -938,6 +950,8 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_rate_min = 0;
 	/* 10Gbps -> 1.25GBps */
 	cap->shaper_private_rate_max = 1250000000ull;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 	cap->shaper_shared_n_max = 0;
 
 	if (node_type == IXGBE_TM_NODE_TYPE_QUEUE) {
@@ -955,6 +969,8 @@ ixgbe_node_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.sched_wfq_n_children_per_group_max = 0;
 		cap->nonleaf.sched_wfq_n_groups_max = 0;
 		cap->nonleaf.sched_wfq_weight_max = 1;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 0;
 	}
 
 	cap->stats_mask = 0;
diff --git a/drivers/net/mvpp2/mrvl_tm.c b/drivers/net/mvpp2/mrvl_tm.c
index 3de8997..e98f576 100644
--- a/drivers/net/mvpp2/mrvl_tm.c
+++ b/drivers/net/mvpp2/mrvl_tm.c
@@ -193,12 +193,16 @@ mrvl_capabilities_get(struct rte_eth_dev *dev,
 	cap->shaper_private_n_max = cap->shaper_n_max;
 	cap->shaper_private_rate_min = MRVL_RATE_MIN;
 	cap->shaper_private_rate_max = priv->rate_max;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	cap->sched_n_children_max = dev->data->nb_tx_queues;
 	cap->sched_sp_n_priorities_max = dev->data->nb_tx_queues;
 	cap->sched_wfq_n_children_per_group_max = dev->data->nb_tx_queues;
 	cap->sched_wfq_n_groups_max = 1;
 	cap->sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+	cap->sched_wfq_packet_mode_supported = 0;
+	cap->sched_wfq_byte_mode_supported = 1;
 
 	cap->dynamic_update_mask = RTE_TM_UPDATE_NODE_SUSPEND_RESUME |
 				   RTE_TM_UPDATE_NODE_STATS;
@@ -244,6 +248,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->nonleaf.shaper_private_supported = 1;
 		cap->nonleaf.shaper_private_rate_min = MRVL_RATE_MIN;
 		cap->nonleaf.shaper_private_rate_max = priv->rate_max;
+		cap->nonleaf.shaper_private_packet_mode_supported = 0;
+		cap->nonleaf.shaper_private_byte_mode_supported = 1;
 
 		cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues;
 		cap->nonleaf.sched_sp_n_priorities_max = 1;
@@ -251,6 +257,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 			dev->data->nb_tx_queues;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 		cap->nonleaf.stats_mask = RTE_TM_STATS_N_PKTS |
 					  RTE_TM_STATS_N_BYTES;
 	} else { /* level_id == MRVL_NODE_QUEUE */
@@ -261,6 +269,8 @@ mrvl_level_capabilities_get(struct rte_eth_dev *dev,
 		cap->leaf.shaper_private_supported = 1;
 		cap->leaf.shaper_private_rate_min = MRVL_RATE_MIN;
 		cap->leaf.shaper_private_rate_max = priv->rate_max;
+		cap->leaf.shaper_private_packet_mode_supported = 0;
+		cap->leaf.shaper_private_byte_mode_supported = 1;
 
 		cap->leaf.stats_mask = RTE_TM_STATS_N_PKTS;
 	}
@@ -300,6 +310,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id,
 	cap->shaper_private_supported = 1;
 	cap->shaper_private_rate_min = MRVL_RATE_MIN;
 	cap->shaper_private_rate_max = priv->rate_max;
+	cap->shaper_private_packet_mode_supported = 0;
+	cap->shaper_private_byte_mode_supported = 1;
 
 	if (node->type == MRVL_NODE_PORT) {
 		cap->nonleaf.sched_n_children_max = dev->data->nb_tx_queues;
@@ -308,6 +320,8 @@ mrvl_node_capabilities_get(struct rte_eth_dev *dev, uint32_t node_id,
 			dev->data->nb_tx_queues;
 		cap->nonleaf.sched_wfq_n_groups_max = 1;
 		cap->nonleaf.sched_wfq_weight_max = MRVL_WEIGHT_MAX;
+		cap->nonleaf.sched_wfq_packet_mode_supported = 0;
+		cap->nonleaf.sched_wfq_byte_mode_supported = 1;
 		cap->stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
 	} else {
 		cap->stats_mask = RTE_TM_STATS_N_PKTS;
diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index 80a470c..ac14fe1 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -447,6 +447,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_private_dual_rate_n_max = 0,
 	.shaper_private_rate_min = 1,
 	.shaper_private_rate_max = UINT32_MAX,
+	.shaper_private_packet_mode_supported = 0,
+	.shaper_private_byte_mode_supported = 1,
 
 	.shaper_shared_n_max = UINT32_MAX,
 	.shaper_shared_n_nodes_per_shaper_max = UINT32_MAX,
@@ -454,6 +456,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.shaper_shared_dual_rate_n_max = 0,
 	.shaper_shared_rate_min = 1,
 	.shaper_shared_rate_max = UINT32_MAX,
+	.shaper_shared_packet_mode_supported = 0,
+	.shaper_shared_byte_mode_supported = 1,
 
 	.shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
 	.shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
@@ -463,6 +467,8 @@ static const struct rte_tm_capabilities tm_cap = {
 	.sched_wfq_n_children_per_group_max = UINT32_MAX,
 	.sched_wfq_n_groups_max = 1,
 	.sched_wfq_weight_max = UINT32_MAX,
+	.sched_wfq_packet_mode_supported = 0,
+	.sched_wfq_byte_mode_supported = 1,
 
 	.cman_wred_packet_mode_supported = WRED_SUPPORTED,
 	.cman_wred_byte_mode_supported = 0,
@@ -548,6 +554,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
 
 			.sched_n_children_max = UINT32_MAX,
@@ -555,6 +563,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -572,6 +582,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
 
 			.sched_n_children_max = UINT32_MAX,
@@ -580,9 +592,14 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_groups_max = 1,
 #ifdef RTE_SCHED_SUBPORT_TC_OV
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 #else
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 #endif
+
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
 	},
@@ -599,6 +616,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 0,
 
 			.sched_n_children_max =
@@ -608,6 +627,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -625,6 +646,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 1,
 			.shaper_private_rate_max = UINT32_MAX,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 1,
 			.shaper_shared_n_max = 1,
 
 			.sched_n_children_max =
@@ -634,6 +657,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 
 			.stats_mask = STATS_MASK_DEFAULT,
 		} },
@@ -651,6 +676,8 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_private_dual_rate_supported = 0,
 			.shaper_private_rate_min = 0,
 			.shaper_private_rate_max = 0,
+			.shaper_private_packet_mode_supported = 0,
+			.shaper_private_byte_mode_supported = 0,
 			.shaper_shared_n_max = 0,
 
 			.cman_head_drop_supported = 0,
@@ -736,6 +763,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
 
 		{.nonleaf = {
@@ -744,6 +773,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -754,6 +785,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
 
 		{.nonleaf = {
@@ -762,6 +795,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = UINT32_MAX,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -772,6 +807,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 0,
 
 		{.nonleaf = {
@@ -782,6 +819,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 			.sched_wfq_n_children_per_group_max = 1,
 			.sched_wfq_n_groups_max = 0,
 			.sched_wfq_weight_max = 1,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 0,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -792,6 +831,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 1,
 		.shaper_private_rate_max = UINT32_MAX,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 1,
 		.shaper_shared_n_max = 1,
 
 		{.nonleaf = {
@@ -802,6 +843,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
+			.sched_wfq_packet_mode_supported = 0,
+			.sched_wfq_byte_mode_supported = 1,
 		} },
 
 		.stats_mask = STATS_MASK_DEFAULT,
@@ -812,6 +855,8 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		.shaper_private_dual_rate_supported = 0,
 		.shaper_private_rate_min = 0,
 		.shaper_private_rate_max = 0,
+		.shaper_private_packet_mode_supported = 0,
+		.shaper_private_byte_mode_supported = 0,
 		.shaper_shared_n_max = 0,
 
-- 
2.8.4