From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
 John McNamara, Marko Kovacevic
Date: Sun, 30 Jun 2019 23:35:20 +0530
Message-ID: <20190630180609.36705-9-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
 <20190630180609.36705-1-jerinj@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v2 08/57] net/octeontx2: handle queue specific
 error interrupts
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

From: Jerin Jacob <jerinj@marvell.com>

Handle queue specific error interrupts.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram
---
 doc/guides/nics/octeontx2.rst           |   1 +
 drivers/net/octeontx2/otx2_ethdev.c     |  16 +-
 drivers/net/octeontx2/otx2_ethdev.h     |   9 ++
 drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++
 4 files changed, 216 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index e3f4c2c43..50e825968 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 
 - SR-IOV VF
 - Lock-free Tx queue
+- Debug utilities - error interrupt support
 
 Prerequisites
 -------------

diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 65d72a47f..045855c2e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Free the resources allocated from the previous configure */
-	if (dev->configured == 1)
+	if (dev->configured == 1) {
+		oxt2_nix_unregister_queue_irqs(eth_dev);
 		nix_lf_free(dev);
+	}
 
 	if (otx2_dev_is_A0(dev) &&
 	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
@@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail;
 	}
 
+	/* Register queue IRQs */
+	rc = oxt2_nix_register_queue_irqs(eth_dev);
+	if (rc) {
+		otx2_err("Failed to register queue interrupts rc=%d", rc);
+		goto free_nix_lf;
+	}
+
 	/* Update the mac address */
 	ea = eth_dev->data->mac_addrs;
 	memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
@@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 	dev->configured_nb_tx_qs = data->nb_tx_queues;
 	return 0;
 
+free_nix_lf:
+	rc = nix_lf_free(dev);
 fail:
 	return rc;
 }
@@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
+	/* Unregister queue irqs */
+	oxt2_nix_unregister_queue_irqs(eth_dev);
+
 	rc = nix_lf_free(dev);
 	if (rc)
 		otx2_err("Failed to free nix lf, rc=%d", rc);

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index c1528e2ac..d9cdd33b5 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -106,6 +106,11 @@
 	 DEV_RX_OFFLOAD_QINQ_STRIP | \
 	 DEV_RX_OFFLOAD_TIMESTAMP)
 
+struct otx2_qint {
+	struct rte_eth_dev *eth_dev;
+	uint8_t qintx;
+};
+
 struct otx2_rss_info {
 	uint16_t rss_size;
 	uint8_t rss_grps;
@@ -134,6 +139,7 @@ struct otx2_eth_dev {
 	uint16_t cints;
 	uint16_t qints;
 	uint8_t configured;
+	uint8_t configured_qints;
 	uint8_t configured_nb_rx_qs;
 	uint8_t configured_nb_tx_qs;
 	uint16_t nix_msixoff;
@@ -147,6 +153,7 @@ struct otx2_eth_dev {
 	uint64_t tx_offloads;
 	uint64_t rx_offload_capa;
 	uint64_t tx_offload_capa;
+	struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
 	struct otx2_rss_info rss_info;
 	struct otx2_npc_flow_info npc_flow;
 } __rte_cache_aligned;
@@ -163,7 +170,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
 
 /* IRQ */
 int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
 void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
 
 /* CGX */
 int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);

diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index 33fed93c4..476c7ea78 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
 	otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
 }
 
+static inline uint8_t
+nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
+			   uint32_t off, uint64_t mask)
+{
+	uint64_t reg, wdata;
+	uint8_t qint;
+
+	wdata = (uint64_t)q << 44;
+	reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		otx2_err("Failed execute irq get off=0x%x", off);
+		return 0;
+	}
+
+	qint = reg & 0xff;
+	wdata &= mask;
+	otx2_write64(wdata, dev->base + off);
+
+	return qint;
+}
+
+static inline uint8_t
+nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
+{
+	return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
+{
+	return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
+{
+	return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
+}
+
+static inline void
+nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
+{
+	uint64_t reg;
+
+	reg = otx2_read64(dev->base + off);
+	if (reg & BIT_ULL(44))
+		otx2_err("SQ=%d err_code=0x%x",
+			 (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
+}
+
+static void
+nix_lf_q_irq(void *param)
+{
+	struct otx2_qint *qint = (struct otx2_qint *)param;
+	struct rte_eth_dev *eth_dev = qint->eth_dev;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint8_t irq, qintx = qint->qintx;
+	int q, cq, rq, sq;
+	uint64_t intr;
+
+	intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
+	if (intr == 0)
+		return;
+
+	otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
+		 intr, qintx, dev->pf, dev->vf);
+
+	/* Handle RQ interrupts */
+	for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+		rq = q % dev->qints;
+		irq = nix_lf_rq_irq_get_and_clear(dev, rq);
+
+		if (irq & BIT_ULL(NIX_RQINT_DROP))
+			otx2_err("RQ=%d NIX_RQINT_DROP", rq);
+
+		if (irq & BIT_ULL(NIX_RQINT_RED))
+			otx2_err("RQ=%d NIX_RQINT_RED", rq);
+	}
+
+	/* Handle CQ interrupts */
+	for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
+		cq = q % dev->qints;
+		irq = nix_lf_cq_irq_get_and_clear(dev, cq);
+
+		if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
+			otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
+
+		if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
+			otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
+
+		if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
+			otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
+	}
+
+	/* Handle SQ interrupts */
+	for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
+		sq = q % dev->qints;
+		irq = nix_lf_sq_irq_get_and_clear(dev, sq);
+
+		if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+			otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
+			nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
+		}
+		if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+			otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
+			nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
+		}
+		if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+			otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
+			nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+		}
+		if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+			otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
+			nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
+		}
+	}
+
+	/* Clear interrupt */
+	otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
+}
+
+int
+oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int vec, q, sqs, rqs, qs, rc = 0;
+
+	/* Figure out max qintx required */
+	rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
+	sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
+	qs = RTE_MAX(rqs, sqs);
+
+	dev->configured_qints = qs;
+
+	for (q = 0; q < qs; q++) {
+		vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+
+		/* Clear interrupt */
+		otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+		dev->qints_mem[q].eth_dev = eth_dev;
+		dev->qints_mem[q].qintx = q;
+
+		/* Sync qints_mem update */
+		rte_smp_wmb();
+
+		/* Register queue irq vector */
+		rc = otx2_register_irq(handle, nix_lf_q_irq,
+				       &dev->qints_mem[q], vec);
+		if (rc)
+			break;
+
+		otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+		otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+		/* Enable QINT interrupt */
+		otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
+	}
+
+	return rc;
+}
+
+void
+oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int vec, q;
+
+	for (q = 0; q < dev->configured_qints; q++) {
+		vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+		/* Clear QINT CNT */
+		otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
+		otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
+
+		/* Clear interrupt */
+		otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
+
+		/* Unregister queue irq vector */
+		otx2_unregister_irq(handle, nix_lf_q_irq,
+				    &dev->qints_mem[q], vec);
+	}
+}
+
 int
 otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
 {
-- 
2.21.0