From: Nithin Dabilpuram
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Harman Kalra
Subject: [PATCH v2 06/18] common/cnxk: support NIX queue config for cn20k
Date: Thu, 26 Sep 2024 21:31:46 +0530
Message-ID: <20240926160158.3206321-7-ndabilpuram@marvell.com>
In-Reply-To: <20240926160158.3206321-1-ndabilpuram@marvell.com>
References: <20240910085909.1514457-1-ndabilpuram@marvell.com> <20240926160158.3206321-1-ndabilpuram@marvell.com>

From: Satha Rao

Add support to set up NIX RQ, SQ and CQ for cn20k.
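The cn20k paths below keep the per-model dispatch shape already used for
cn9k and cn10k: cn9k keeps its legacy AQ request layout, cn10k gets an
explicit roc_model_is_cn10k() branch, and cn20k becomes the new default
with its own nix_cn20k_aq_enq_req mailbox layout (the old nix_rq_cfg()
body is renamed nix_rq_cn10k_cfg()). A minimal, self-contained sketch of
that control flow follows; the model_is_*() stubs and the strings are
stand-ins for the real roc_model_is_*() predicates and mailbox request
types, which live in the driver headers:

	/* Sketch only: stubs stand in for cnxk's roc_model_is_*() and the
	 * nix_*_aq_enq_req mailbox layouts; only the branch structure
	 * mirrors the diff below.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	enum soc_model { SOC_CN9K, SOC_CN10K, SOC_CN20K };

	static enum soc_model soc = SOC_CN20K; /* stand-in for probed model */

	static bool model_is_cn9k(void)  { return soc == SOC_CN9K; }
	static bool model_is_cn10k(void) { return soc == SOC_CN10K; }

	static const char *
	rq_cfg_layout(void)
	{
		if (model_is_cn9k())
			return "nix_aq_enq_req (legacy cn9k layout)";
		else if (model_is_cn10k())
			return "nix_cn10k_aq_enq_req";
		/* cn20k and newer take the default branch */
		return "nix_cn20k_aq_enq_req";
	}

	int
	main(void)
	{
		printf("AQ request layout: %s\n", rq_cfg_layout());
		return 0;
	}

Making cn20k the default branch means a future model inherits the newest
layout unless it is special-cased, matching how cn9k was handled before.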
Signed-off-by: Satha Rao
---
 drivers/common/cnxk/roc_nix_fc.c     |  52 ++-
 drivers/common/cnxk/roc_nix_inl.c    |   2 +
 drivers/common/cnxk/roc_nix_priv.h   |   1 +
 drivers/common/cnxk/roc_nix_queue.c  | 532 ++++++++++++++++++++++++++-
 drivers/common/cnxk/roc_nix_stats.c  |  55 ++-
 drivers/common/cnxk/roc_nix_tm.c     |  22 +-
 drivers/common/cnxk/roc_nix_tm_ops.c |  14 +-
 7 files changed, 650 insertions(+), 28 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index 2f72e67993..0676363c58 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -127,7 +127,7 @@ nix_fc_cq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 		aq->qidx = fc_cfg->cq_cfg.rq;
 		aq->ctype = NIX_AQ_CTYPE_CQ;
 		aq->op = NIX_AQ_INSTOP_READ;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -136,6 +136,18 @@ nix_fc_cq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 			goto exit;
 		}
 
+		aq->qidx = fc_cfg->cq_cfg.rq;
+		aq->ctype = NIX_AQ_CTYPE_CQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
 		aq->qidx = fc_cfg->cq_cfg.rq;
 		aq->ctype = NIX_AQ_CTYPE_CQ;
 		aq->op = NIX_AQ_INSTOP_READ;
@@ -179,7 +191,7 @@ nix_fc_rq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 		aq->qidx = fc_cfg->rq_cfg.rq;
 		aq->ctype = NIX_AQ_CTYPE_RQ;
 		aq->op = NIX_AQ_INSTOP_READ;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -188,6 +200,18 @@ nix_fc_rq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 			goto exit;
 		}
 
+		aq->qidx = fc_cfg->rq_cfg.rq;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
 		aq->qidx = fc_cfg->rq_cfg.rq;
 		aq->ctype = NIX_AQ_CTYPE_RQ;
 		aq->op = NIX_AQ_INSTOP_READ;
@@ -270,7 +294,7 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 
 		aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable);
 		aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -290,6 +314,28 @@ nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
 			aq->cq_mask.bp = ~(aq->cq_mask.bp);
 		}
 
+		aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable);
+		aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = fc_cfg->cq_cfg.rq;
+		aq->ctype = NIX_AQ_CTYPE_CQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		if (fc_cfg->cq_cfg.enable) {
+			aq->cq.bpid = nix->bpid[fc_cfg->cq_cfg.tc];
+			aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+			aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+			aq->cq_mask.bp = ~(aq->cq_mask.bp);
+		}
+
 		aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable);
 		aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
 	}
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index a984ac56d9..a759052973 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -1385,6 +1385,8 @@ roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq, bool enable)
 	mbox = mbox_get(dev->mbox);
 	if (roc_model_is_cn9k())
 		rc = nix_rq_cn9k_cfg(dev, inl_rq, inl_dev->qints, false, enable);
+	else if (roc_model_is_cn10k())
+		rc = nix_rq_cn10k_cfg(dev, inl_rq, inl_dev->qints, false, enable);
 	else
 		rc = nix_rq_cfg(dev, inl_rq, inl_dev->qints, false, enable);
 	if (rc) {
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 275ffc8ea3..ade42c1878 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -409,6 +409,7 @@ int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 
 int nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 		    bool cfg, bool ena);
+int nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena);
 int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena);
 int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable);
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index f5441e0e6b..bb1b70424f 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -69,7 +69,7 @@ nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable)
 
 		aq->rq.ena = enable;
 		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -82,6 +82,21 @@ nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable)
 		aq->ctype = NIX_AQ_CTYPE_RQ;
 		aq->op = NIX_AQ_INSTOP_WRITE;
 
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
 		aq->rq.ena = enable;
 		aq->rq_mask.ena = ~(aq->rq_mask.ena);
 	}
@@ -150,7 +165,7 @@ roc_nix_rq_is_sso_enable(struct roc_nix *roc_nix, uint32_t qid)
 			goto exit;
 
 		sso_enable = rsp->rq.sso_ena;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_rsp *rsp;
 		struct nix_cn10k_aq_enq_req *aq;
 
@@ -164,6 +179,25 @@ roc_nix_rq_is_sso_enable(struct roc_nix *roc_nix, uint32_t qid)
 		aq->ctype = NIX_AQ_CTYPE_RQ;
 		aq->op = NIX_AQ_INSTOP_READ;
 
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc)
+			goto exit;
+
+		sso_enable = rsp->rq.sso_ena;
+	} else {
+		struct nix_cn20k_aq_enq_rsp *rsp;
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+
 		rc = mbox_process_msg(mbox, (void *)&rsp);
 		if (rc)
 			goto exit;
@@ -222,7 +256,7 @@ nix_rq_aura_buf_type_update(struct roc_nix_rq *rq, bool set)
 		if (rsp->rq.spb_ena)
 			spb_aura = roc_npa_aura_handle_gen(rsp->rq.spb_aura, aura_base);
 		mbox_put(mbox);
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_rsp *rsp;
 		struct nix_cn10k_aq_enq_req *aq;
 
@@ -249,6 +283,32 @@ nix_rq_aura_buf_type_update(struct roc_nix_rq *rq, bool set)
 		if (rsp->rq.vwqe_ena)
 			vwqe_aura = roc_npa_aura_handle_gen(rsp->rq.wqe_aura, aura_base);
 
+		mbox_put(mbox);
+	} else {
+		struct nix_cn20k_aq_enq_rsp *rsp;
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox_get(mbox));
+		if (!aq) {
+			mbox_put(mbox);
+			return -ENOSPC;
+		}
+
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_READ;
+
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc) {
+			mbox_put(mbox);
+			return rc;
+		}
+
+		/* Get aura handle from aura */
+		lpb_aura = roc_npa_aura_handle_gen(rsp->rq.lpb_aura, aura_base);
+		if (rsp->rq.spb_ena)
+			spb_aura = roc_npa_aura_handle_gen(rsp->rq.spb_aura, aura_base);
+
 		mbox_put(mbox);
 	}
@@ -443,8 +503,7 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 }
 
 int
-nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
-	   bool ena)
+nix_rq_cn10k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena)
 {
 	struct nix_cn10k_aq_enq_req *aq;
 	struct mbox *mbox = dev->mbox;
@@ -667,6 +726,171 @@ nix_rq_cman_cfg(struct dev *dev, struct roc_nix_rq *rq)
 	return rc;
 }
 
+int
+nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg, bool ena)
+{
+	struct nix_cn20k_aq_enq_req *aq;
+	struct mbox *mbox = dev->mbox;
+
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+	if (!aq)
+		return -ENOSPC;
+
+	aq->qidx = rq->qid;
+	aq->ctype = NIX_AQ_CTYPE_RQ;
+	aq->op = cfg ? NIX_AQ_INSTOP_WRITE : NIX_AQ_INSTOP_INIT;
+
+	if (rq->sso_ena) {
+		/* SSO mode */
+		aq->rq.sso_ena = 1;
+		aq->rq.sso_tt = rq->tt;
+		aq->rq.sso_grp = rq->hwgrp;
+		aq->rq.ena_wqwd = 1;
+		aq->rq.wqe_skip = rq->wqe_skip;
+		aq->rq.wqe_caching = 1;
+
+		aq->rq.good_utag = rq->tag_mask >> 24;
+		aq->rq.bad_utag = rq->tag_mask >> 24;
+		aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+
+		if (rq->vwqe_ena)
+			aq->rq.wqe_aura = roc_npa_aura_handle_to_aura(rq->vwqe_aura_handle);
+	} else {
+		/* CQ mode */
+		aq->rq.sso_ena = 0;
+		aq->rq.good_utag = rq->tag_mask >> 24;
+		aq->rq.bad_utag = rq->tag_mask >> 24;
+		aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+		aq->rq.cq = rq->cqid;
+	}
+
+	if (rq->ipsech_ena) {
+		aq->rq.ipsech_ena = 1;
+		aq->rq.ipsecd_drop_en = 1;
+		aq->rq.ena_wqwd = 1;
+		aq->rq.wqe_skip = rq->wqe_skip;
+		aq->rq.wqe_caching = 1;
+	}
+
+	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
+
+	/* Sizes must be aligned to 8 bytes */
+	if (rq->first_skip & 0x7 || rq->later_skip & 0x7 || rq->lpb_size & 0x7)
+		return -EINVAL;
+
+	/* Expressed in number of dwords */
+	aq->rq.first_skip = rq->first_skip / 8;
+	aq->rq.later_skip = rq->later_skip / 8;
+	aq->rq.flow_tagw = rq->flow_tag_width; /* 32-bits */
+	aq->rq.lpb_sizem1 = rq->lpb_size / 8;
+	aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+	aq->rq.ena = ena;
+
+	if (rq->spb_ena) {
+		uint32_t spb_sizem1;
+
+		aq->rq.spb_ena = 1;
+		aq->rq.spb_aura =
+			roc_npa_aura_handle_to_aura(rq->spb_aura_handle);
+
+		if (rq->spb_size & 0x7 ||
+		    rq->spb_size > NIX_RQ_CN10K_SPB_MAX_SIZE)
+			return -EINVAL;
+
+		spb_sizem1 = rq->spb_size / 8; /* Expressed in no. of dwords */
+		spb_sizem1 -= 1; /* Expressed in size minus one */
+		aq->rq.spb_sizem1 = spb_sizem1 & 0x3F;
+		aq->rq.spb_high_sizem1 = (spb_sizem1 >> 6) & 0x7;
+	} else {
+		aq->rq.spb_ena = 0;
+	}
+
+	aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+	aq->rq.rq_int_ena = 0;
+	/* Many to one reduction */
+	aq->rq.qint_idx = rq->qid % qints;
+	aq->rq.xqe_drop_ena = 0;
+	aq->rq.lpb_drop_ena = rq->lpb_drop_ena;
+	aq->rq.spb_drop_ena = rq->spb_drop_ena;
+
+	/* If RED enabled, then fill enable for all cases */
+	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
+		aq->rq.lpb_pool_pass = rq->red_pass;
+		aq->rq.wqe_pool_pass = rq->red_pass;
+		aq->rq.xqe_pass = rq->red_pass;
+
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
+		aq->rq.lpb_pool_drop = rq->red_drop;
+		aq->rq.wqe_pool_drop = rq->red_drop;
+		aq->rq.xqe_drop = rq->red_drop;
+	}
+
+	if (cfg) {
+		if (rq->sso_ena) {
+			/* SSO mode */
+			aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+			aq->rq_mask.sso_tt = ~aq->rq_mask.sso_tt;
+			aq->rq_mask.sso_grp = ~aq->rq_mask.sso_grp;
+			aq->rq_mask.ena_wqwd = ~aq->rq_mask.ena_wqwd;
+			aq->rq_mask.wqe_skip = ~aq->rq_mask.wqe_skip;
+			aq->rq_mask.wqe_caching = ~aq->rq_mask.wqe_caching;
+			aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+			aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+			aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+			if (rq->vwqe_ena)
+				aq->rq_mask.wqe_aura = ~aq->rq_mask.wqe_aura;
+		} else {
+			/* CQ mode */
+			aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+			aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+			aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+			aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+			aq->rq_mask.cq = ~aq->rq_mask.cq;
+		}
+
+		if (rq->ipsech_ena)
+			aq->rq_mask.ipsech_ena = ~aq->rq_mask.ipsech_ena;
+
+		if (rq->spb_ena) {
+			aq->rq_mask.spb_aura = ~aq->rq_mask.spb_aura;
+			aq->rq_mask.spb_sizem1 = ~aq->rq_mask.spb_sizem1;
+			aq->rq_mask.spb_high_sizem1 =
+				~aq->rq_mask.spb_high_sizem1;
+		}
+
+		aq->rq_mask.spb_ena = ~aq->rq_mask.spb_ena;
+		aq->rq_mask.lpb_aura = ~aq->rq_mask.lpb_aura;
+		aq->rq_mask.first_skip = ~aq->rq_mask.first_skip;
+		aq->rq_mask.later_skip = ~aq->rq_mask.later_skip;
+		aq->rq_mask.flow_tagw = ~aq->rq_mask.flow_tagw;
+		aq->rq_mask.lpb_sizem1 = ~aq->rq_mask.lpb_sizem1;
+		aq->rq_mask.ena = ~aq->rq_mask.ena;
+		aq->rq_mask.pb_caching = ~aq->rq_mask.pb_caching;
+		aq->rq_mask.xqe_imm_size = ~aq->rq_mask.xqe_imm_size;
+		aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena;
+		aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx;
+		aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
+		aq->rq_mask.lpb_drop_ena = ~aq->rq_mask.lpb_drop_ena;
+		aq->rq_mask.spb_drop_ena = ~aq->rq_mask.spb_drop_ena;
+
+		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
+			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
+			aq->rq_mask.lpb_pool_pass = ~aq->rq_mask.lpb_pool_pass;
+			aq->rq_mask.wqe_pool_pass = ~aq->rq_mask.wqe_pool_pass;
+			aq->rq_mask.xqe_pass = ~aq->rq_mask.xqe_pass;
+
+			aq->rq_mask.spb_pool_drop = ~aq->rq_mask.spb_pool_drop;
+			aq->rq_mask.lpb_pool_drop = ~aq->rq_mask.lpb_pool_drop;
+			aq->rq_mask.wqe_pool_drop = ~aq->rq_mask.wqe_pool_drop;
+			aq->rq_mask.xqe_drop = ~aq->rq_mask.xqe_drop;
+		}
+	}
+
+	return 0;
+}
+
 int
 roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 {
@@ -691,6 +915,8 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 
 	if (is_cn9k)
 		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
+	else if (roc_model_is_cn10k())
+		rc = nix_rq_cn10k_cfg(dev, rq, nix->qints, false, ena);
 	else
 		rc = nix_rq_cfg(dev, rq, nix->qints, false, ena);
 
@@ -745,6 +971,8 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	mbox = mbox_get(m_box);
 	if (is_cn9k)
 		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, true, ena);
+	else if (roc_model_is_cn10k())
+		rc = nix_rq_cn10k_cfg(dev, rq, nix->qints, true, ena);
 	else
 		rc = nix_rq_cfg(dev, rq, nix->qints, true, ena);
 
@@ -817,12 +1045,121 @@ roc_nix_rq_fini(struct roc_nix_rq *rq)
 	return 0;
 }
 
+static inline int
+roc_nix_cn20k_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct mbox *mbox = (&nix->dev)->mbox;
+	volatile struct nix_cn20k_cq_ctx_s *cq_ctx;
+	uint16_t drop_thresh = NIX_CQ_THRESH_LEVEL;
+	uint16_t cpt_lbpid = nix->cpt_lbpid;
+	struct nix_cn20k_aq_enq_req *aq;
+	enum nix_q_size qsize;
+	size_t desc_sz;
+	int rc;
+
+	if (cq == NULL)
+		return NIX_ERR_PARAM;
+
+	qsize = nix_qsize_clampup(cq->nb_desc);
+	cq->nb_desc = nix_qsize_to_val(qsize);
+	cq->qmask = cq->nb_desc - 1;
+	cq->door = nix->base + NIX_LF_CQ_OP_DOOR;
+	cq->status = (int64_t *)(nix->base + NIX_LF_CQ_OP_STATUS);
+	cq->wdata = (uint64_t)cq->qid << 32;
+	cq->roc_nix = roc_nix;
+
+	/* CQE of W16 */
+	desc_sz = cq->nb_desc * NIX_CQ_ENTRY_SZ;
+	cq->desc_base = plt_zmalloc(desc_sz, NIX_CQ_ALIGN);
+	if (cq->desc_base == NULL) {
+		rc = NIX_ERR_NO_MEM;
+		goto fail;
+	}
+
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox_get(mbox));
+	if (!aq) {
+		mbox_put(mbox);
+		return -ENOSPC;
+	}
+
+	aq->qidx = cq->qid;
+	aq->ctype = NIX_AQ_CTYPE_CQ;
+	aq->op = NIX_AQ_INSTOP_INIT;
+	cq_ctx = &aq->cq;
+
+	cq_ctx->ena = 1;
+	cq_ctx->caching = 1;
+	cq_ctx->qsize = qsize;
+	cq_ctx->base = (uint64_t)cq->desc_base;
+	cq_ctx->avg_level = 0xff;
+	cq_ctx->cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
+	cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+	if (roc_feature_nix_has_late_bp() && roc_nix_inl_inb_is_enabled(roc_nix)) {
+		cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_CPT_DROP);
+		cq_ctx->cpt_drop_err_en = 1;
+		/* Enable Late BP only when non zero CPT BPID */
+		if (cpt_lbpid) {
+			cq_ctx->lbp_ena = 1;
+			cq_ctx->lbpid_low = cpt_lbpid & 0x7;
+			cq_ctx->lbpid_med = (cpt_lbpid >> 3) & 0x7;
+			cq_ctx->lbpid_high = (cpt_lbpid >> 6) & 0x7;
+			cq_ctx->lbp_frac = NIX_CQ_LPB_THRESH_FRAC;
+		}
+		drop_thresh = NIX_CQ_SEC_THRESH_LEVEL;
+	}
+
+	/* Many to one reduction */
+	cq_ctx->qint_idx = cq->qid % nix->qints;
+	/* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
+	cq_ctx->cint_idx = cq->qid;
+
+	if (roc_errata_nix_has_cq_min_size_4k()) {
+		const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID;
+		uint16_t min_rx_drop;
+
+		min_rx_drop = ceil(rx_cq_skid / (float)cq->nb_desc);
+		cq_ctx->drop = min_rx_drop;
+		cq_ctx->drop_ena = 1;
+		cq->drop_thresh = min_rx_drop;
+	} else {
+		cq->drop_thresh = drop_thresh;
+		/* Drop processing or red drop cannot be enabled due to
+		 * packets coming for second pass from CPT.
+		 */
+		if (!roc_nix_inl_inb_is_enabled(roc_nix)) {
+			cq_ctx->drop = cq->drop_thresh;
+			cq_ctx->drop_ena = 1;
+		}
+	}
+	cq_ctx->bp = cq->drop_thresh;
+
+	if (roc_feature_nix_has_cqe_stash()) {
+		if (cq_ctx->caching) {
+			cq_ctx->stashing = 1;
+			cq_ctx->stash_thresh = cq->stash_thresh;
+		}
+	}
+
+	rc = mbox_process(mbox);
+	mbox_put(mbox);
+	if (rc)
+		goto free_mem;
+
+	return nix_tel_node_add_cq(cq);
+
+free_mem:
+	plt_free(cq->desc_base);
+fail:
+	return rc;
+}
+
 int
 roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
-	volatile struct nix_cq_ctx_s *cq_ctx;
+	volatile struct nix_cq_ctx_s *cq_ctx = NULL;
 	uint16_t drop_thresh = NIX_CQ_THRESH_LEVEL;
 	uint16_t cpt_lbpid = nix->cpt_lbpid;
 	enum nix_q_size qsize;
@@ -832,6 +1169,9 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
 	if (cq == NULL)
 		return NIX_ERR_PARAM;
 
+	if (roc_model_is_cn20k())
+		return roc_nix_cn20k_cq_init(roc_nix, cq);
+
 	qsize = nix_qsize_clampup(cq->nb_desc);
 	cq->nb_desc = nix_qsize_to_val(qsize);
 	cq->qmask = cq->nb_desc - 1;
@@ -861,7 +1201,7 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
 		aq->ctype = NIX_AQ_CTYPE_CQ;
 		aq->op = NIX_AQ_INSTOP_INIT;
 		cq_ctx = &aq->cq;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox_get(mbox));
@@ -972,7 +1312,7 @@ roc_nix_cq_fini(struct roc_nix_cq *cq)
 		aq->cq.bp_ena = 0;
 		aq->cq_mask.ena = ~aq->cq_mask.ena;
 		aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -981,6 +1321,26 @@ roc_nix_cq_fini(struct roc_nix_cq *cq)
 			return -ENOSPC;
 		}
 
+		aq->qidx = cq->qid;
+		aq->ctype = NIX_AQ_CTYPE_CQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+		aq->cq.ena = 0;
+		aq->cq.bp_ena = 0;
+		aq->cq_mask.ena = ~aq->cq_mask.ena;
+		aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena;
+		if (roc_feature_nix_has_late_bp() && roc_nix_inl_inb_is_enabled(cq->roc_nix)) {
+			aq->cq.lbp_ena = 0;
+			aq->cq_mask.lbp_ena = ~aq->cq_mask.lbp_ena;
+		}
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			mbox_put(mbox);
+			return -ENOSPC;
+		}
+
 		aq->qidx = cq->qid;
 		aq->ctype = NIX_AQ_CTYPE_CQ;
 		aq->op = NIX_AQ_INSTOP_WRITE;
@@ -1227,14 +1587,152 @@ sq_cn9k_fini(struct nix *nix, struct roc_nix_sq *sq)
 	return 0;
 }
 
+static int
+sq_cn10k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, uint16_t smq)
+{
+	struct roc_nix *roc_nix = nix_priv_to_roc_nix(nix);
+	struct mbox *mbox = (&nix->dev)->mbox;
+	struct nix_cn10k_aq_enq_req *aq;
+
+	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	if (!aq)
+		return -ENOSPC;
+
+	aq->qidx = sq->qid;
+	aq->ctype = NIX_AQ_CTYPE_SQ;
+	aq->op = NIX_AQ_INSTOP_INIT;
+	aq->sq.max_sqe_size = sq->max_sqe_sz;
+	aq->sq.smq = smq;
+	aq->sq.smq_rr_weight = rr_quantum;
+	if (roc_nix_is_sdp(roc_nix))
+		aq->sq.default_chan = nix->tx_chan_base + (sq->qid % nix->tx_chan_cnt);
+	else
+		aq->sq.default_chan = nix->tx_chan_base;
+	aq->sq.sqe_stype = NIX_STYPE_STF;
+	aq->sq.ena = 1;
+	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
+	aq->sq.cq_limit = sq->cq_drop_thresh;
+	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+		aq->sq.sqe_stype = NIX_STYPE_STP;
+	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
+	aq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+	aq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+	aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+	aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+	/* Many to one reduction */
+	aq->sq.qint_idx = sq->qid % nix->qints;
+	if (roc_errata_nix_assign_incorrect_qint()) {
+		/* Assigning QINT 0 to all the SQs, an errata exists where NIXTX can
+		 * send incorrect QINT_IDX when reporting queue interrupt (QINT). This
+		 * might result in software missing the interrupt.
+		 */
+		aq->sq.qint_idx = 0;
+	}
+	return 0;
+}
+
+static int
+sq_cn10k_fini(struct nix *nix, struct roc_nix_sq *sq)
+{
+	struct mbox *mbox = mbox_get((&nix->dev)->mbox);
+	struct nix_cn10k_aq_enq_rsp *rsp;
+	struct nix_cn10k_aq_enq_req *aq;
+	uint16_t sqes_per_sqb;
+	void *sqb_buf;
+	int rc, count;
+
+	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	if (!aq) {
+		mbox_put(mbox);
+		return -ENOSPC;
+	}
+
+	aq->qidx = sq->qid;
+	aq->ctype = NIX_AQ_CTYPE_SQ;
+	aq->op = NIX_AQ_INSTOP_READ;
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		mbox_put(mbox);
+		return rc;
+	}
+
+	/* Check if sq is already cleaned up */
+	if (!rsp->sq.ena) {
+		mbox_put(mbox);
+		return 0;
+	}
+
+	/* Disable sq */
+	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	if (!aq) {
+		mbox_put(mbox);
+		return -ENOSPC;
+	}
+
+	aq->qidx = sq->qid;
+	aq->ctype = NIX_AQ_CTYPE_SQ;
+	aq->op = NIX_AQ_INSTOP_WRITE;
+	aq->sq_mask.ena = ~aq->sq_mask.ena;
+	aq->sq.ena = 0;
+	rc = mbox_process(mbox);
+	if (rc) {
+		mbox_put(mbox);
+		return rc;
+	}
+
+	/* Read SQ and free sqb's */
+	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	if (!aq) {
+		mbox_put(mbox);
+		return -ENOSPC;
+	}
+
+	aq->qidx = sq->qid;
+	aq->ctype = NIX_AQ_CTYPE_SQ;
+	aq->op = NIX_AQ_INSTOP_READ;
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		mbox_put(mbox);
+		return rc;
+	}
+
+	if (aq->sq.smq_pend)
+		plt_err("SQ has pending SQE's");
+
+	count = aq->sq.sqb_count;
+	sqes_per_sqb = 1 << sq->sqes_per_sqb_log2;
+	/* Free SQB's that are used */
+	sqb_buf = (void *)rsp->sq.head_sqb;
+	while (count) {
+		void *next_sqb;
+
+		next_sqb = *(void **)((uint64_t *)sqb_buf +
+				      (uint32_t)((sqes_per_sqb - 1) * (0x2 >> sq->max_sqe_sz) * 8));
+		roc_npa_aura_op_free(sq->aura_handle, 1, (uint64_t)sqb_buf);
+		sqb_buf = next_sqb;
+		count--;
+	}
+
+	/* Free next to use sqb */
+	if (rsp->sq.next_sqb)
+		roc_npa_aura_op_free(sq->aura_handle, 1, rsp->sq.next_sqb);
+	mbox_put(mbox);
+	return 0;
+}
+
 static int
 sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum, uint16_t smq)
 {
 	struct roc_nix *roc_nix = nix_priv_to_roc_nix(nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
-	struct nix_cn10k_aq_enq_req *aq;
+	struct nix_cn20k_aq_enq_req *aq;
 
-	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
 	if (!aq)
 		return -ENOSPC;
 
@@ -1280,13 +1778,13 @@ static int
 sq_fini(struct nix *nix, struct roc_nix_sq *sq)
 {
 	struct mbox *mbox = mbox_get((&nix->dev)->mbox);
-	struct nix_cn10k_aq_enq_rsp *rsp;
-	struct nix_cn10k_aq_enq_req *aq;
+	struct nix_cn20k_aq_enq_rsp *rsp;
+	struct nix_cn20k_aq_enq_req *aq;
 	uint16_t sqes_per_sqb;
 	void *sqb_buf;
 	int rc, count;
 
-	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
 	if (!aq) {
 		mbox_put(mbox);
 		return -ENOSPC;
@@ -1308,7 +1806,7 @@ sq_fini(struct nix *nix, struct roc_nix_sq *sq)
 	}
 
 	/* Disable sq */
-	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
 	if (!aq) {
 		mbox_put(mbox);
 		return -ENOSPC;
 	}
@@ -1326,7 +1824,7 @@ sq_fini(struct nix *nix, struct roc_nix_sq *sq)
 	}
 
 	/* Read SQ and free sqb's */
-	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+	aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
 	if (!aq) {
 		mbox_put(mbox);
 		return -ENOSPC;
@@ -1408,6 +1906,8 @@ roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
 	/* Init SQ context */
 	if (roc_model_is_cn9k())
 		rc = sq_cn9k_init(nix, sq, rr_quantum, smq);
+	else if (roc_model_is_cn10k())
+		rc = sq_cn10k_init(nix, sq, rr_quantum, smq);
 	else
 		rc = sq_init(nix, sq, rr_quantum, smq);
 
@@ -1464,6 +1964,8 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
 	/* Release SQ context */
 	if (roc_model_is_cn9k())
 		rc |= sq_cn9k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
+	else if (roc_model_is_cn10k())
+		rc |= sq_cn10k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
 	else
 		rc |= sq_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
 
diff --git a/drivers/common/cnxk/roc_nix_stats.c b/drivers/common/cnxk/roc_nix_stats.c
index 7a9619b39d..6f241c72de 100644
--- a/drivers/common/cnxk/roc_nix_stats.c
+++ b/drivers/common/cnxk/roc_nix_stats.c
@@ -173,7 +173,7 @@ nix_stat_rx_queue_reset(struct nix *nix, uint16_t qid)
 		aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
 		aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
 		aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -192,6 +192,30 @@ nix_stat_rx_queue_reset(struct nix *nix, uint16_t qid)
 		aq->rq.drop_pkts = 0;
 		aq->rq.re_pkts = 0;
 
+		aq->rq_mask.octs = ~(aq->rq_mask.octs);
+		aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+		aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+		aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+		aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.octs = 0;
+		aq->rq.pkts = 0;
+		aq->rq.drop_octs = 0;
+		aq->rq.drop_pkts = 0;
+		aq->rq.re_pkts = 0;
+
 		aq->rq_mask.octs = ~(aq->rq_mask.octs);
 		aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
 		aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
@@ -233,7 +257,7 @@ nix_stat_tx_queue_reset(struct nix *nix, uint16_t qid)
 		aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
 		aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
 		aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -250,6 +274,29 @@ nix_stat_tx_queue_reset(struct nix *nix, uint16_t qid)
 		aq->sq.drop_octs = 0;
 		aq->sq.drop_pkts = 0;
 
+		aq->sq_mask.octs = ~(aq->sq_mask.octs);
+		aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+		aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+		aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+		aq->sq_mask.aged_drop_octs = ~(aq->sq_mask.aged_drop_octs);
+		aq->sq_mask.aged_drop_pkts = ~(aq->sq_mask.aged_drop_pkts);
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_SQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+		aq->sq.octs = 0;
+		aq->sq.pkts = 0;
+		aq->sq.drop_octs = 0;
+		aq->sq.drop_pkts = 0;
+
 		aq->sq_mask.octs = ~(aq->sq_mask.octs);
 		aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
 		aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
@@ -375,7 +422,7 @@ roc_nix_xstats_get(struct roc_nix *roc_nix, struct roc_nix_xstat *xstats,
 		xstats[count].id = count;
 		count++;
 	}
 
-	if (roc_model_is_cn10k()) {
+	if (roc_model_is_cn10k() || roc_model_is_cn20k()) {
 		for (i = 0; i < CNXK_NIX_NUM_CN10K_RX_XSTATS; i++) {
 			xstats[count].value =
 				NIX_RX_STATS(nix_cn10k_rx_xstats[i].offset);
@@ -492,7 +539,7 @@ roc_nix_xstats_names_get(struct roc_nix *roc_nix,
 		count++;
 	}
 
-	if (roc_model_is_cn10k()) {
+	if (roc_model_is_cn10k() || roc_model_is_cn20k()) {
 		for (i = 0; i < CNXK_NIX_NUM_CN10K_RX_XSTATS; i++) {
 			NIX_XSTATS_NAME_PRINT(xstats_names, count,
 					      nix_cn10k_rx_xstats, i);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index ac522f8235..5725ef568a 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -1058,7 +1058,7 @@ nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 		}
 		aq->sq.smq_rr_quantum = rr_quantum;
 		aq->sq_mask.smq_rr_quantum = ~aq->sq_mask.smq_rr_quantum;
-	} else {
+	} else if (roc_model_is_cn10k()) {
 		struct nix_cn10k_aq_enq_req *aq;
 
 		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
@@ -1071,6 +1071,26 @@ nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 		aq->qidx = qid;
 		aq->ctype = NIX_AQ_CTYPE_SQ;
 		aq->op = NIX_AQ_INSTOP_WRITE;
 
+		/* smq update only when needed */
+		if (!rr_quantum_only) {
+			aq->sq.smq = smq;
+			aq->sq_mask.smq = ~aq->sq_mask.smq;
+		}
+		aq->sq.smq_rr_weight = rr_quantum;
+		aq->sq_mask.smq_rr_weight = ~aq->sq_mask.smq_rr_weight;
+	} else {
+		struct nix_cn20k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn20k_aq_enq(mbox);
+		if (!aq) {
+			rc = -ENOSPC;
+			goto exit;
+		}
+
+		aq->qidx = qid;
+		aq->ctype = NIX_AQ_CTYPE_SQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
 		/* smq update only when needed */
 		if (!rr_quantum_only) {
 			aq->sq.smq = smq;
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 8144675f89..a9cfadd1b0 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -1294,15 +1294,19 @@ roc_nix_tm_rsrc_max(bool pf, uint16_t schq[ROC_TM_LVL_MAX])
 
 	switch (hw_lvl) {
 	case NIX_TXSCH_LVL_SMQ:
-		max = (roc_model_is_cn9k() ?
-		       NIX_CN9K_TXSCH_LVL_SMQ_MAX :
-		       NIX_TXSCH_LVL_SMQ_MAX);
+		max = (roc_model_is_cn9k() ? NIX_CN9K_TXSCH_LVL_SMQ_MAX :
+		       (roc_model_is_cn10k() ? NIX_CN10K_TXSCH_LVL_SMQ_MAX :
+					       NIX_TXSCH_LVL_SMQ_MAX));
 		break;
 	case NIX_TXSCH_LVL_TL4:
-		max = NIX_TXSCH_LVL_TL4_MAX;
+		max = (roc_model_is_cn9k() ? NIX_CN9K_TXSCH_LVL_TL4_MAX :
+		       (roc_model_is_cn10k() ? NIX_CN10K_TXSCH_LVL_TL4_MAX :
+					       NIX_TXSCH_LVL_TL4_MAX));
 		break;
 	case NIX_TXSCH_LVL_TL3:
-		max = NIX_TXSCH_LVL_TL3_MAX;
+		max = (roc_model_is_cn9k() ? NIX_CN9K_TXSCH_LVL_TL3_MAX :
+		       (roc_model_is_cn10k() ? NIX_CN10K_TXSCH_LVL_TL3_MAX :
+					       NIX_TXSCH_LVL_TL3_MAX));
 		break;
 	case NIX_TXSCH_LVL_TL2:
 		max = pf ? NIX_TXSCH_LVL_TL2_MAX : 1;
-- 
2.34.1
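
A cn20k-specific detail worth noting in roc_nix_cn20k_cq_init() above:
the CPT late-backpressure BPID no longer fits a single context field and
is split across the 3-bit lbpid_low/lbpid_med/lbpid_high fields. Below is
a standalone sketch of that split and its reassembly; plain local
variables stand in for the nix_cn20k_cq_ctx_s bitfields, and only the
shift/mask arithmetic matches the diff:

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int
	main(void)
	{
		uint16_t cpt_lbpid = 0x1a5 & 0x1ff; /* any 9-bit BPID */

		/* Split exactly as the CQ init path does */
		uint8_t lbpid_low = cpt_lbpid & 0x7;
		uint8_t lbpid_med = (cpt_lbpid >> 3) & 0x7;
		uint8_t lbpid_high = (cpt_lbpid >> 6) & 0x7;

		/* Reassemble to confirm the round trip is lossless */
		uint16_t bpid = (uint16_t)(lbpid_low | (lbpid_med << 3) |
					   (lbpid_high << 6));

		assert(bpid == cpt_lbpid);
		printf("bpid 0x%x -> low %u med %u high %u\n",
		       (unsigned)cpt_lbpid, (unsigned)lbpid_low,
		       (unsigned)lbpid_med, (unsigned)lbpid_high);
		return 0;
	}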