From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nithin Dabilpuram
Date: Fri, 5 Mar 2021 19:08:36 +0530
Message-ID: <20210305133918.8005-11-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20210305133918.8005-1-ndabilpuram@marvell.com>
References: <20210305133918.8005-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 10/52] common/cnxk: add npa irq support
List-Id: DPDK patches and discussions

From: Ashwin Sekhar T K

Add support for NPA IRQs. Register handlers for the NPA LF error
interrupt (NPA_LF_ERR_INT), the RAS/poison interrupt (NPA_LF_RAS) and
the per-queue QINT interrupts, and hook their registration and
unregistration into npa_lf_init()/npa_lf_fini().
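
For context, a small standalone sketch (illustrative only, not part of
the diff below) of how the interrupt vector numbers are derived from the
LF's MSI-X offset and how the number of queue interrupt vectors is
bounded. The SKETCH_* enum values and the example numbers are
placeholders; the real definitions come from the cnxk hardware headers
used by the patch.

  /* Sketch: derive NPA LF interrupt vectors the same way roc_npa_irq.c does. */
  #include <stdio.h>

  enum {                              /* placeholder vector offsets */
          SKETCH_VEC_ERR_INT = 0x0,
          SKETCH_VEC_POISON = 0x1,
          SKETCH_VEC_QINT_START = 0x2,
  };

  #define SKETCH_MIN(a, b) ((a) < (b) ? (a) : (b))

  int
  main(void)
  {
          unsigned int npa_msixoff = 0x10;        /* example offset from AF mbox */
          unsigned int qints = 8, nr_pools = 4;   /* example LF resources */
          unsigned int q, qs;

          printf("err vec    = 0x%x\n", npa_msixoff + SKETCH_VEC_ERR_INT);
          printf("poison vec = 0x%x\n", npa_msixoff + SKETCH_VEC_POISON);

          /* only min(qints, nr_pools) queue interrupt vectors are registered */
          qs = SKETCH_MIN(qints, nr_pools);
          for (q = 0; q < qs; q++)
                  printf("qint%u vec  = 0x%x\n", q,
                         npa_msixoff + SKETCH_VEC_QINT_START + q);

          return 0;
  }
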
Signed-off-by: Ashwin Sekhar T K
---
 drivers/common/cnxk/meson.build    |   1 +
 drivers/common/cnxk/roc_npa.c      |   7 +
 drivers/common/cnxk/roc_npa_irq.c  | 297 +++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_npa_priv.h |   4 +
 4 files changed, 309 insertions(+)
 create mode 100644 drivers/common/cnxk/roc_npa_irq.c

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index c684e1d..60af484 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files('roc_dev.c',
 		'roc_mbox.c',
 		'roc_model.c',
 		'roc_npa.c',
+		'roc_npa_irq.c',
 		'roc_platform.c',
 		'roc_utils.c')
 includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 762f025..003bd8c 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -242,11 +242,17 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
 	idev->npa = lf;
 	plt_wmb();
 
+	rc = npa_register_irqs(lf);
+	if (rc)
+		goto npa_fini;
+
 	plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
 		    roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
 
 	return 0;
 
+npa_fini:
+	npa_dev_fini(idev->npa);
 npa_detach:
 	npa_detach(dev->mbox);
 fail:
@@ -268,6 +274,7 @@ npa_lf_fini(void)
 	if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0)
 		return 0;
 
+	npa_unregister_irqs(idev->npa);
 	rc |= npa_dev_fini(idev->npa);
 	rc |= npa_detach(idev->npa->mbox);
 	idev_set_defaults(idev);
diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c
new file mode 100644
index 0000000..99b57b0
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa_irq.c
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+npa_err_irq(void *param)
+{
+	struct npa_lf *lf = (struct npa_lf *)param;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("Err_intr=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_ERR_INT);
+}
+
+static int
+npa_register_err_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int rc, vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+	/* Register err interrupt vector */
+	rc = dev_irq_register(handle, npa_err_irq, lf, vec);
+
+	/* Enable hw interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
+
+	return rc;
+}
+
+static void
+npa_unregister_err_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+	dev_irq_unregister(handle, npa_err_irq, lf, vec);
+}
+
+static void
+npa_ras_irq(void *param)
+{
+	struct npa_lf *lf = (struct npa_lf *)param;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_RAS);
+	if (intr == 0)
+		return;
+
+	plt_err("Ras_intr=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_RAS);
+}
+
+static int
+npa_register_ras_irq(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int rc, vec;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = dev_irq_register(handle, npa_ras_irq, lf, vec);
+	/* Enable hw interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
+
+	return rc;
+}
+
+static void
+npa_unregister_ras_irq(struct npa_lf *lf)
+{
+	int vec;
+	struct plt_intr_handle *handle = lf->intr_handle;
+
+	vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+	dev_irq_unregister(handle, npa_ras_irq, lf, vec);
+}
+
+static inline uint8_t
+npa_q_irq_get_and_clear(struct npa_lf *lf, uint32_t q, uint32_t off,
+			uint64_t mask)
+{
+	uint64_t reg, wdata;
+	uint8_t qint;
+
+	wdata = (uint64_t)q << 44;
+	reg = roc_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		plt_err("Failed execute irq get off=0x%x", off);
+		return 0;
+	}
+
+	qint = reg & 0xff;
+	wdata &= mask;
+	plt_write64(wdata | qint, lf->base + off);
+
+	return qint;
+}
+
+static inline uint8_t
+npa_pool_irq_get_and_clear(struct npa_lf *lf, uint32_t p)
+{
+	return npa_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+npa_aura_irq_get_and_clear(struct npa_lf *lf, uint32_t a)
+{
+	return npa_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
+}
+
+static void
+npa_q_irq(void *param)
+{
+	struct npa_qint *qint = (struct npa_qint *)param;
+	struct npa_lf *lf = qint->lf;
+	uint8_t irq, qintx = qint->qintx;
+	uint32_t q, pool, aura;
+	uint64_t intr;
+
+	intr = plt_read64(lf->base + NPA_LF_QINTX_INT(qintx));
+	if (intr == 0)
+		return;
+
+	plt_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
+
+	/* Handle pool queue interrupts */
+	for (q = 0; q < lf->nr_pools; q++) {
+		/* Skip disabled POOL */
+		if (plt_bitmap_get(lf->npa_bmp, q))
+			continue;
+
+		pool = q % lf->qints;
+		irq = npa_pool_irq_get_and_clear(lf, pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
+
+		if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
+			plt_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
+	}
+
+	/* Handle aura queue interrupts */
+	for (q = 0; q < lf->nr_pools; q++) {
+		/* Skip disabled AURA */
+		if (plt_bitmap_get(lf->npa_bmp, q))
+			continue;
+
+		aura = q % lf->qints;
+		irq = npa_aura_irq_get_and_clear(lf, aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
+			plt_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
+
+		if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
+			plt_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
+	}
+
+	/* Clear interrupt */
+	plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
+}
+
+static int
+npa_register_queue_irqs(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec, q, qs, rc = 0;
+
+	/* Figure out max qintx required */
+	qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+	for (q = 0; q < qs; q++) {
+		vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+
+		/* Clear interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+		struct npa_qint *qintmem = lf->npa_qint_mem;
+
+		qintmem += q;
+
+		qintmem->lf = lf;
+		qintmem->qintx = q;
+
+		/* Sync qints_mem update */
+		plt_wmb();
+
+		/* Register queue irq vector */
+		rc = dev_irq_register(handle, npa_q_irq, qintmem, vec);
+		if (rc)
+			break;
+
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+		plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+		/* Enable QINT interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
+	}
+
+	return rc;
+}
+
+static void
+npa_unregister_queue_irqs(struct npa_lf *lf)
+{
+	struct plt_intr_handle *handle = lf->intr_handle;
+	int vec, q, qs;
+
+	/* Figure out max qintx required */
+	qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+	for (q = 0; q < qs; q++) {
+		vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+		plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+
+		/* Clear interrupt */
+		plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+		struct npa_qint *qintmem = lf->npa_qint_mem;
+
+		qintmem += q;
+
+		/* Unregister queue irq vector */
+		dev_irq_unregister(handle, npa_q_irq, qintmem, vec);
+
+		qintmem->lf = NULL;
+		qintmem->qintx = 0;
+	}
+}
+
+int
+npa_register_irqs(struct npa_lf *lf)
+{
+	int rc;
+
+	if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid NPALF MSIX vector offset vector: 0x%x",
+			lf->npa_msixoff);
+		return NPA_ERR_PARAM;
+	}
+
+	/* Register lf err interrupt */
+	rc = npa_register_err_irq(lf);
+	/* Register RAS interrupt */
+	rc |= npa_register_ras_irq(lf);
+	/* Register queue interrupts */
+	rc |= npa_register_queue_irqs(lf);
+
+	return rc;
+}
+
+void
+npa_unregister_irqs(struct npa_lf *lf)
+{
+	npa_unregister_err_irq(lf);
+	npa_unregister_ras_irq(lf);
+	npa_unregister_queue_irqs(lf);
+}
diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h
index a2173c4..c87de24 100644
--- a/drivers/common/cnxk/roc_npa_priv.h
+++ b/drivers/common/cnxk/roc_npa_priv.h
@@ -56,4 +56,8 @@ roc_npa_to_npa_priv(struct roc_npa *roc_npa)
 int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev);
 int npa_lf_fini(void);
 
+/* IRQ */
+int npa_register_irqs(struct npa_lf *lf);
+void npa_unregister_irqs(struct npa_lf *lf);
+
 #endif /* _ROC_NPA_PRIV_H_ */
-- 
2.8.4