From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nithin Dabilpuram
To: dev@dpdk.org, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella, Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH v2 03/12] common/cnxk: add PFC support for VFs
Date: Thu, 16 Jun 2022 14:54:11 +0530
Message-ID: <20220616092420.17861-3-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20220616092420.17861-1-ndabilpuram@marvell.com>
References: <20220616070743.30658-1-ndabilpuram@marvell.com>
	<20220616092420.17861-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

The current PFC implementation does not support VFs. This patch
enables PFC on VFs too.

Also fix the aura.bp configuration so that it is derived from the
number of buffers (aura.limit) and the corresponding shift value
(aura.shift).
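As an illustration of the threshold change, a minimal standalone
sketch follows. The NIX_RQ_AURA_THRESH() body below is an assumed
stand-in matching the old code's "95% of size" comment; the real
macro lives in roc_nix_priv.h and is not part of this diff, and the
limit/shift values are example numbers only:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed stand-in for the ROC macro: ~95% of the given count. */
    #define NIX_RQ_AURA_THRESH(x) (((x) * 95) / 100)

    int main(void)
    {
            uint64_t limit = 4096; /* aura.limit: total buffer count (example) */
            uint64_t shift = 6;    /* aura.shift: HW count granularity (example) */

            /* Old: threshold capped at 256 buffers for any larger pool. */
            uint64_t old_bp = NIX_RQ_AURA_THRESH(limit > 128 ? 256 : limit);

            /* New: threshold expressed in the aura's shifted units, so it
             * scales with the real pool depth instead of saturating. */
            uint64_t new_bp = NIX_RQ_AURA_THRESH(limit >> shift);

            printf("old bp=%" PRIu64 ", new bp=%" PRIu64 "\n", old_bp, new_bp);
            return 0;
    }

With these example values the old formula pins the threshold at 243
buffer units regardless of pool size, while the new one tracks the
aura's shifted occupancy.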
Fixes: cb4bfd6e7bdf ("event/cnxk: support Rx adapter")

Signed-off-by: Sunil Kumar Kori
---
 drivers/common/cnxk/roc_nix.h            |  14 +++-
 drivers/common/cnxk/roc_nix_fc.c         | 120 +++++++++++++++++++++++++++----
 drivers/common/cnxk/roc_nix_priv.h       |   2 +
 drivers/common/cnxk/roc_nix_queue.c      |  47 ++++++++++++
 drivers/common/cnxk/roc_nix_tm.c         |  67 +++++++++--------
 drivers/common/cnxk/version.map          |   3 +-
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  12 ++--
 drivers/net/cnxk/cnxk_ethdev.h           |   2 +
 8 files changed, 217 insertions(+), 50 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 944e4c6..f0d7fc8 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -157,6 +157,7 @@ struct roc_nix_fc_cfg {
 #define ROC_NIX_FC_RXCHAN_CFG 0
 #define ROC_NIX_FC_CQ_CFG 1
 #define ROC_NIX_FC_TM_CFG 2
+#define ROC_NIX_FC_RQ_CFG 3
 	uint8_t type;
 	union {
 		struct {
@@ -171,6 +172,14 @@ struct roc_nix_fc_cfg {
 		} cq_cfg;
 
 		struct {
+			uint32_t rq;
+			uint16_t tc;
+			uint16_t cq_drop;
+			bool enable;
+			uint64_t pool;
+		} rq_cfg;
+
+		struct {
 			uint32_t sq;
 			uint16_t tc;
 			bool enable;
@@ -791,8 +800,8 @@ uint16_t __roc_api roc_nix_chan_count_get(struct roc_nix *roc_nix);
 
 enum roc_nix_fc_mode __roc_api roc_nix_fc_mode_get(struct roc_nix *roc_nix);
 
-void __roc_api rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id,
-				     uint8_t ena, uint8_t force);
+void __roc_api roc_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id,
+				     uint8_t ena, uint8_t force, uint8_t tc);
 
 /* NPC */
 int __roc_api roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable);
@@ -845,6 +854,7 @@ int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
 int __roc_api roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
 				bool ena);
 int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable);
+int __roc_api roc_nix_rq_is_sso_enable(struct roc_nix *roc_nix, uint32_t qid);
 int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq);
 int __roc_api roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq);
 int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq);
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index cef5d07..daae285 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -148,6 +148,61 @@ nix_fc_cq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 }
 
 static int
+nix_fc_rq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+	struct mbox *mbox = get_mbox(roc_nix);
+	struct nix_aq_enq_rsp *rsp;
+	struct npa_aq_enq_req *npa_req;
+	struct npa_aq_enq_rsp *npa_rsp;
+	int rc;
+
+	if (roc_model_is_cn9k()) {
+		struct nix_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_aq_enq(mbox);
+		if (!aq)
+			return -ENOSPC;
+
+		aq->qidx = fc_cfg->rq_cfg.rq;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+	} else {
+		struct nix_cn10k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+		if (!aq)
+			return -ENOSPC;
+
+		aq->qidx = fc_cfg->rq_cfg.rq;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+	}
+
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc)
+		goto exit;
+
+	npa_req = mbox_alloc_msg_npa_aq_enq(mbox);
+	if (!npa_req)
+		return -ENOSPC;
+
+	npa_req->aura_id = rsp->rq.lpb_aura;
+	npa_req->ctype = NPA_AQ_CTYPE_AURA;
+	npa_req->op = NPA_AQ_INSTOP_READ;
+
+	rc = mbox_process_msg(mbox, (void *)&npa_rsp);
+	if (rc)
+		goto exit;
+
+	fc_cfg->cq_cfg.cq_drop = npa_rsp->aura.bp;
+	fc_cfg->cq_cfg.enable = npa_rsp->aura.bp_ena;
+	fc_cfg->type = ROC_NIX_FC_RQ_CFG;
+
+exit:
+	return rc;
+}
+
+static int
 nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
@@ -198,6 +253,33 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 	return mbox_process(mbox);
 }
 
+static int
+nix_fc_rq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+	struct roc_nix_fc_cfg tmp;
+	int sso_ena = 0;
+
+	/* Check whether RQ is connected to SSO or not */
+	sso_ena = roc_nix_rq_is_sso_enable(roc_nix, fc_cfg->rq_cfg.rq);
+	if (sso_ena < 0)
+		return -EINVAL;
+
+	if (sso_ena)
+		roc_nix_fc_npa_bp_cfg(roc_nix, fc_cfg->rq_cfg.pool,
+				      fc_cfg->rq_cfg.enable, true,
+				      fc_cfg->rq_cfg.tc);
+
+	/* Copy RQ config to CQ config as they are occupying same area */
+	memset(&tmp, 0, sizeof(tmp));
+	tmp.type = ROC_NIX_FC_CQ_CFG;
+	tmp.cq_cfg.rq = fc_cfg->rq_cfg.rq;
+	tmp.cq_cfg.tc = fc_cfg->rq_cfg.tc;
+	tmp.cq_cfg.cq_drop = fc_cfg->rq_cfg.cq_drop;
+	tmp.cq_cfg.enable = fc_cfg->rq_cfg.enable;
+
+	return nix_fc_cq_config_set(roc_nix, &tmp);
+}
+
 int
 roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 {
@@ -207,6 +289,8 @@ roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 
 	if (fc_cfg->type == ROC_NIX_FC_CQ_CFG)
 		return nix_fc_cq_config_get(roc_nix, fc_cfg);
+	else if (fc_cfg->type == ROC_NIX_FC_RQ_CFG)
+		return nix_fc_rq_config_get(roc_nix, fc_cfg);
 	else if (fc_cfg->type == ROC_NIX_FC_RXCHAN_CFG)
 		return nix_fc_rxchan_bpid_get(roc_nix, fc_cfg);
 	else if (fc_cfg->type == ROC_NIX_FC_TM_CFG)
@@ -218,12 +302,10 @@ roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 int
 roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 {
-	if (!roc_nix_is_pf(roc_nix) && !roc_nix_is_lbk(roc_nix) &&
-	    !roc_nix_is_sdp(roc_nix))
-		return 0;
-
 	if (fc_cfg->type == ROC_NIX_FC_CQ_CFG)
 		return nix_fc_cq_config_set(roc_nix, fc_cfg);
+	else if (fc_cfg->type == ROC_NIX_FC_RQ_CFG)
+		return nix_fc_rq_config_set(roc_nix, fc_cfg);
 	else if (fc_cfg->type == ROC_NIX_FC_RXCHAN_CFG)
 		return nix_fc_rxchan_bpid_set(roc_nix,
 					      fc_cfg->rxchan_cfg.enable);
@@ -320,8 +402,8 @@ roc_nix_fc_mode_set(struct roc_nix *roc_nix, enum roc_nix_fc_mode mode)
 }
 
 void
-rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
-		      uint8_t force)
+roc_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
+		      uint8_t force, uint8_t tc)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct npa_lf *lf = idev_npa_obj_get();
@@ -329,6 +411,7 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	struct npa_aq_enq_rsp *rsp;
 	struct mbox *mbox;
 	uint32_t limit;
+	uint64_t shift;
 	int rc;
 
 	if (roc_nix_is_sdp(roc_nix))
@@ -351,8 +434,10 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 		return;
 
 	limit = rsp->aura.limit;
+	shift = rsp->aura.shift;
+
 	/* BP is already enabled. */
-	if (rsp->aura.bp_ena) {
+	if (rsp->aura.bp_ena && ena) {
 		uint16_t bpid;
 		bool nix1;
 
@@ -363,12 +448,15 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 			bpid = rsp->aura.nix0_bpid;
 
 		/* If BP ids don't match disable BP. */
-		if (((nix1 != nix->is_nix1) || (bpid != nix->bpid[0])) &&
+		if (((nix1 != nix->is_nix1) || (bpid != nix->bpid[tc])) &&
 		    !force) {
 			req = mbox_alloc_msg_npa_aq_enq(mbox);
 			if (req == NULL)
 				return;
 
+			plt_info("Disabling BP/FC on aura 0x%" PRIx64
+				 " as it is shared across ports or TCs",
+				 pool_id);
 			req->aura_id = roc_npa_aura_handle_to_aura(pool_id);
 			req->ctype = NPA_AQ_CTYPE_AURA;
 			req->op = NPA_AQ_INSTOP_WRITE;
@@ -378,11 +466,15 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 
 			mbox_process(mbox);
 		}
+
+		if ((nix1 != nix->is_nix1) || (bpid != nix->bpid[tc]))
+			plt_info("Ignoring aura 0x%" PRIx64 "->%u bpid mapping",
+				 pool_id, nix->bpid[tc]);
 		return;
 	}
 
 	/* BP was previously enabled but now disabled skip. */
-	if (rsp->aura.bp)
+	if (rsp->aura.bp && ena)
 		return;
 
 	req = mbox_alloc_msg_npa_aq_enq(mbox);
@@ -395,14 +487,16 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 
 	if (ena) {
 		if (nix->is_nix1) {
-			req->aura.nix1_bpid = nix->bpid[0];
+			req->aura.nix1_bpid = nix->bpid[tc];
 			req->aura_mask.nix1_bpid = ~(req->aura_mask.nix1_bpid);
 		} else {
-			req->aura.nix0_bpid = nix->bpid[0];
+			req->aura.nix0_bpid = nix->bpid[tc];
 			req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
 		}
-		req->aura.bp = NIX_RQ_AURA_THRESH(
-			limit > 128 ? 256 : limit); /* 95% of size*/
+		req->aura.bp = NIX_RQ_AURA_THRESH(limit >> shift);
+		req->aura_mask.bp = ~(req->aura_mask.bp);
+	} else {
+		req->aura.bp = 0;
 		req->aura_mask.bp = ~(req->aura_mask.bp);
 	}
 
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index cc69d71..5e865f8 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -357,6 +357,8 @@ nix_tm_tree2str(enum roc_nix_tm_tree tree)
 		return "Default Tree";
 	else if (tree == ROC_NIX_TM_RLIMIT)
 		return "Rate Limit Tree";
+	else if (tree == ROC_NIX_TM_PFC)
+		return "PFC Tree";
 	else if (tree == ROC_NIX_TM_USER)
 		return "User Tree";
 	return "???";
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 76c049c..fa4c954 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -94,6 +94,53 @@ roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
 }
 
 int
+roc_nix_rq_is_sso_enable(struct roc_nix *roc_nix, uint32_t qid)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct dev *dev = &nix->dev;
+	struct mbox *mbox = dev->mbox;
+	bool sso_enable;
+	int rc;
+
+	if (roc_model_is_cn9k()) {
+		struct nix_aq_enq_rsp *rsp;
+		struct nix_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_aq_enq(mbox);
+		if (!aq)
+			return -ENOSPC;
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc)
+			return rc;
+
+		sso_enable = rsp->rq.sso_ena;
+	} else {
+		struct nix_cn10k_aq_enq_rsp *rsp;
+		struct nix_cn10k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+		if (!aq)
+			return -ENOSPC;
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc)
+			return rc;
+
+		sso_enable = rsp->rq.sso_ena;
+	}
+
+	return sso_enable ? true : false;
+}
+
+int
 nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 		bool cfg, bool ena)
 {
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 7fd54ef..151e217 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -98,7 +98,6 @@ int
 nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
 {
 	struct nix_tm_node_list *list;
-	bool is_pf_or_lbk = false;
 	struct nix_tm_node *node;
 	bool skip_bp = false;
 	uint32_t hw_lvl;
@@ -106,9 +105,6 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
 
 	list = nix_tm_node_list(nix, tree);
 
-	if ((!dev_is_vf(&nix->dev) || nix->lbk_link) && !nix->sdp_link)
-		is_pf_or_lbk = true;
-
 	for (hw_lvl = 0; hw_lvl <= nix->tm_root_lvl; hw_lvl++) {
 		TAILQ_FOREACH(node, list, node) {
 			if (node->hw_lvl != hw_lvl)
@@ -118,7 +114,7 @@ nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
 			 * set per channel only for PF or lbk vf.
 			 */
 			node->bp_capa = 0;
-			if (is_pf_or_lbk && !skip_bp &&
+			if (!nix->sdp_link && !skip_bp &&
 			    node->hw_lvl == nix->tm_link_cfg_lvl) {
 				node->bp_capa = 1;
 				skip_bp = false;
@@ -329,6 +325,7 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
 	struct nix_tm_node *sq_node;
 	struct nix_tm_node *parent;
 	struct nix_tm_node *node;
+	uint8_t parent_lvl;
 	uint8_t k = 0;
 	int rc = 0;
 
@@ -336,9 +333,12 @@ nix_tm_bp_config_set(struct roc_nix *roc_nix, uint16_t sq, uint16_t tc,
 	if (!sq_node)
 		return -ENOENT;
 
+	parent_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH2 :
+		      ROC_TM_LVL_SCH1);
+
 	parent = sq_node->parent;
 	while (parent) {
-		if (parent->lvl == ROC_TM_LVL_SCH2)
+		if (parent->lvl == parent_lvl)
 			break;
 
 		parent = parent->parent;
@@ -1469,16 +1469,18 @@ int
 roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	uint8_t leaf_lvl, lvl, lvl_start, lvl_end;
 	uint32_t nonleaf_id = nix->nb_tx_queues;
 	struct nix_tm_node *node = NULL;
-	uint8_t leaf_lvl, lvl, lvl_end;
 	uint32_t tl2_node_id;
 	uint32_t parent, i;
 	int rc = -ENOMEM;
 
 	parent = ROC_NIX_TM_NODE_ID_INVALID;
-	lvl_end = ROC_TM_LVL_SCH3;
-	leaf_lvl = ROC_TM_LVL_QUEUE;
+	lvl_end = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH3 :
+		   ROC_TM_LVL_SCH2);
+	leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE :
+		    ROC_TM_LVL_SCH4);
 
 	/* TL1 node */
 	node = nix_tm_node_alloc();
@@ -1501,31 +1503,37 @@ roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix)
 	parent = nonleaf_id;
 	nonleaf_id++;
 
-	/* TL2 node */
-	rc = -ENOMEM;
-	node = nix_tm_node_alloc();
-	if (!node)
-		goto error;
+	lvl_start = ROC_TM_LVL_SCH1;
+	if (roc_nix_is_pf(roc_nix)) {
+		/* TL2 node */
+		rc = -ENOMEM;
+		node = nix_tm_node_alloc();
+		if (!node)
+			goto error;
 
-	node->id = nonleaf_id;
-	node->parent_id = parent;
-	node->priority = 0;
-	node->weight = NIX_TM_DFLT_RR_WT;
-	node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
-	node->lvl = ROC_TM_LVL_SCH1;
-	node->tree = ROC_NIX_TM_PFC;
-	node->rel_chan = NIX_TM_CHAN_INVALID;
+		node->id = nonleaf_id;
+		node->parent_id = parent;
+		node->priority = 0;
+		node->weight = NIX_TM_DFLT_RR_WT;
+		node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+		node->lvl = ROC_TM_LVL_SCH1;
+		node->tree = ROC_NIX_TM_PFC;
+		node->rel_chan = NIX_TM_CHAN_INVALID;
 
-	rc = nix_tm_node_add(roc_nix, node);
-	if (rc)
-		goto error;
+		rc = nix_tm_node_add(roc_nix, node);
+		if (rc)
+			goto error;
 
-	tl2_node_id = nonleaf_id;
-	nonleaf_id++;
+		lvl_start = ROC_TM_LVL_SCH2;
+		tl2_node_id = nonleaf_id;
+		nonleaf_id++;
+	} else {
+		tl2_node_id = parent;
+	}
 
 	for (i = 0; i < nix->nb_tx_queues; i++) {
 		parent = tl2_node_id;
-		for (lvl = ROC_TM_LVL_SCH2; lvl <= lvl_end; lvl++) {
+		for (lvl = lvl_start; lvl <= lvl_end; lvl++) {
 			rc = -ENOMEM;
 			node = nix_tm_node_alloc();
 			if (!node)
@@ -1549,7 +1557,8 @@ roc_nix_tm_pfc_prepare_tree(struct roc_nix *roc_nix)
 			nonleaf_id++;
 		}
 
-		lvl = ROC_TM_LVL_SCH4;
+		lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 :
+		       ROC_TM_LVL_SCH3);
 
 		rc = -ENOMEM;
 		node = nix_tm_node_alloc();
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 1ba5b4f..27e81f2 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -122,7 +122,7 @@ INTERNAL {
 	roc_nix_fc_config_set;
 	roc_nix_fc_mode_set;
 	roc_nix_fc_mode_get;
-	rox_nix_fc_npa_bp_cfg;
+	roc_nix_fc_npa_bp_cfg;
 	roc_nix_get_base_chan;
 	roc_nix_get_pf;
 	roc_nix_get_pf_func;
@@ -220,6 +220,7 @@ INTERNAL {
 	roc_nix_rq_ena_dis;
 	roc_nix_rq_fini;
 	roc_nix_rq_init;
+	roc_nix_rq_is_sso_enable;
 	roc_nix_rq_modify;
 	roc_nix_rss_default_setup;
 	roc_nix_rss_flowkey_set;
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index cf5b1dd..8fcc377 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -250,9 +250,11 @@ cnxk_sso_rx_adapter_queue_add(
 			rc |= roc_nix_rx_drop_re_set(&cnxk_eth_dev->nix,
 						     false);
 		}
-		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
-				      rxq_sp->qconf.mp->pool_id, true,
-				      dev->force_ena_bp);
+
+		if (rxq_sp->tx_pause)
+			roc_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
+					      rxq_sp->qconf.mp->pool_id, true,
+					      dev->force_ena_bp, rxq_sp->tc);
 		cnxk_eth_dev->nb_rxq_sso++;
 	}
 
@@ -293,9 +295,9 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 		rxq_sp = cnxk_eth_rxq_to_sp(
 			eth_dev->data->rx_queues[rx_queue_id]);
 		rc = cnxk_sso_rxq_disable(cnxk_eth_dev, (uint16_t)rx_queue_id);
-		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
+		roc_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, false,
-				      dev->force_ena_bp);
+				      dev->force_ena_bp, 0);
 		cnxk_eth_dev->nb_rxq_sso--;
 
 		/* Enable drop_re if it was disabled earlier */
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index e992302..0400d73 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -443,6 +443,8 @@ struct cnxk_eth_rxq_sp {
 	struct cnxk_eth_dev *dev;
 	struct cnxk_eth_qconf qconf;
 	uint16_t qid;
+	uint8_t tx_pause;
+	uint8_t tc;
 } __plt_cache_aligned;
 
 struct cnxk_eth_txq_sp {
-- 
2.8.4
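
For reference, a caller-side sketch of the per-RQ flow-control path
added above. This is not part of the patch: it assumes an initialized
roc_nix handle, uses only roc_nix_fc_config_set() and the rq_cfg
fields introduced in struct roc_nix_fc_cfg, and the helper name
enable_rq_pfc() is hypothetical:

    #include <stdbool.h>
    #include <string.h>

    #include "roc_api.h" /* struct roc_nix, struct roc_nix_fc_cfg */

    /* Hypothetical helper: enable PFC back-pressure for one RQ/TC pair.
     * pool_id is the NPA pool backing the RQ; cq_drop is the CQ drop
     * threshold, with the same meaning as in the cq_cfg path. */
    static int
    enable_rq_pfc(struct roc_nix *roc_nix, uint32_t rq, uint16_t tc,
                  uint16_t cq_drop, uint64_t pool_id)
    {
            struct roc_nix_fc_cfg fc_cfg;

            memset(&fc_cfg, 0, sizeof(fc_cfg));
            fc_cfg.type = ROC_NIX_FC_RQ_CFG;
            fc_cfg.rq_cfg.rq = rq;
            fc_cfg.rq_cfg.tc = tc;
            fc_cfg.rq_cfg.cq_drop = cq_drop;
            fc_cfg.rq_cfg.enable = true;
            fc_cfg.rq_cfg.pool = pool_id;

            /* Dispatches to nix_fc_rq_config_set(), which programs NPA
             * aura back-pressure when the RQ feeds SSO and then reuses
             * the CQ config path, since the union members overlap. */
            return roc_nix_fc_config_set(roc_nix, &fc_cfg);
    }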