From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: [dpdk-dev] [PATCH v1 11/27] common/octeontx2: add PF to VF mailbox IRQ and msg handlers
Date: Thu, 23 May 2019 13:43:23 +0530
Message-ID: <20190523081339.56348-12-jerinj@marvell.com>
In-Reply-To: <20190523081339.56348-1-jerinj@marvell.com>
References: <20190523081339.56348-1-jerinj@marvell.com>
CC: Nithin Dabilpuram, Krzysztof Kanas
X-Mailer: git-send-email 2.21.0
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

From: Nithin Dabilpuram

The PF has the additional responsibility of acting as a server for VF
messages: it forwards VF mailbox requests to the AF and, once the AF has
processed them, forwards the responses back to the VF.
otx2_vf_pf_mbox_irq() processes the VF mailbox request, and
af_pf_wait_msg() waits until a response is received back from the AF.
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Krzysztof Kanas
---
 drivers/common/octeontx2/otx2_dev.c | 240 +++++++++++++++++++++++++++-
 1 file changed, 239 insertions(+), 1 deletion(-)

diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ba4fd9547..09b551819 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -7,6 +7,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -50,6 +51,200 @@ mbox_mem_unmap(void *va, size_t size)
 	munmap(va, size);
 }
 
+static int
+af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg)
+{
+	uint32_t timeout = 0, sleep = 1;
+	struct otx2_mbox *mbox = dev->mbox;
+	struct otx2_mbox_dev *mdev = &mbox->dev[0];
+	volatile uint64_t int_status;
+	struct mbox_hdr *req_hdr;
+	struct mbox_msghdr *msg;
+	struct mbox_msghdr *rsp;
+	uint64_t offset;
+	size_t size;
+	int i;
+
+	/* We need to disable PF interrupts. We are in timer interrupt */
+	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+
+	/* Send message */
+	otx2_mbox_msg_send(mbox, 0);
+
+	do {
+		rte_delay_ms(sleep);
+		timeout++;
+		if (timeout >= MBOX_RSP_TIMEOUT) {
+			otx2_err("Routed messages %d timeout: %dms",
+				 num_msg, MBOX_RSP_TIMEOUT);
+			break;
+		}
+		int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
+	} while ((int_status & 0x1) != 0x1);
+
+	/* Clear */
+	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
+
+	/* Enable interrupts */
+	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
+
+	rte_spinlock_lock(&mdev->mbox_lock);
+
+	req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+	if (req_hdr->num_msgs != num_msg)
+		otx2_err("Routed messages: %d received: %d", num_msg,
+			 req_hdr->num_msgs);
+
+	/* Get messages from mbox */
+	offset = mbox->rx_start +
+		 RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+	for (i = 0; i < req_hdr->num_msgs; i++) {
+		msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+		size = mbox->rx_start + msg->next_msgoff - offset;
+
+		/* Reserve PF/VF mbox message */
+		size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
+		rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
+		otx2_mbox_rsp_init(msg->id, rsp);
+
+		/* Copy message from AF<->PF mbox to PF<->VF mbox */
+		otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
+				 (uint8_t *)msg + sizeof(struct mbox_msghdr),
+				 size - sizeof(struct mbox_msghdr));
+
+		/* Set status and sender pf_func data */
+		rsp->rc = msg->rc;
+		rsp->pcifunc = msg->pcifunc;
+
+		offset = mbox->rx_start + msg->next_msgoff;
+	}
+	rte_spinlock_unlock(&mdev->mbox_lock);
+
+	return req_hdr->num_msgs;
+}
+
+static int
+vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf)
+{
+	int offset, routed = 0;
+	struct otx2_mbox *mbox = &dev->mbox_vfpf;
+	struct otx2_mbox_dev *mdev = &mbox->dev[vf];
+	struct mbox_hdr *req_hdr;
+	struct mbox_msghdr *msg;
+	size_t size;
+	uint16_t i;
+
+	req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+	if (!req_hdr->num_msgs)
+		return 0;
+
+	offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+	for (i = 0; i < req_hdr->num_msgs; i++) {
+
+		msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+		size = mbox->rx_start + msg->next_msgoff - offset;
+
+		/* RVU_PF_FUNC_S */
+		msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
+
+		if (msg->id == MBOX_MSG_READY) {
+			struct ready_msg_rsp *rsp;
+			uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
+
+			/* Handle READY message in PF */
+			dev->active_vfs[vf / max_bits] |=
+						BIT_ULL(vf % max_bits);
+			rsp = (struct ready_msg_rsp *)
+			      otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp));
+			otx2_mbox_rsp_init(msg->id, rsp);
+
+			/* PF/VF function ID */
+			rsp->hdr.pcifunc = msg->pcifunc;
+			rsp->hdr.rc = 0;
+		} else {
+			struct mbox_msghdr *af_req;
+			/* Reserve AF/PF mbox message */
+			size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
+			af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size);
+			otx2_mbox_req_init(msg->id, af_req);
+
+			/* Copy message from VF<->PF mbox to PF<->AF mbox */
+			otx2_mbox_memcpy((uint8_t *)af_req +
+					 sizeof(struct mbox_msghdr),
+					 (uint8_t *)msg +
+					 sizeof(struct mbox_msghdr),
+					 size - sizeof(struct mbox_msghdr));
+			af_req->pcifunc = msg->pcifunc;
+			routed++;
+		}
+		offset = mbox->rx_start + msg->next_msgoff;
+	}
+
+	if (routed > 0) {
+		otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF",
+			      dev->pf, routed, vf);
+		af_pf_wait_msg(dev, vf, routed);
+		otx2_mbox_reset(dev->mbox, 0);
+	}
+
+	/* Send mbox responses to VF */
+	if (mdev->num_msgs) {
+		otx2_base_dbg("pf:%d reply %d messages to vf:%d",
+			      dev->pf, mdev->num_msgs, vf);
+		otx2_mbox_msg_send(mbox, vf);
+	}
+
+	return i;
+}
+
+static void
+otx2_vf_pf_mbox_handle_msg(void *param)
+{
+	uint16_t vf, max_vf, max_bits;
+	struct otx2_dev *dev = param;
+
+	max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t);
+	max_vf = max_bits * MAX_VFPF_DWORD_BITS;
+
+	for (vf = 0; vf < max_vf; vf++) {
+		if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) {
+			otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)",
+				      vf, dev->pf, dev->vf);
+			vf_pf_process_msgs(dev, vf);
+			dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits));
+		}
+	}
+	dev->timer_set = 0;
+}
+
+static void
+otx2_vf_pf_mbox_irq(void *param)
+{
+	struct otx2_dev *dev = param;
+	bool alarm_set = false;
+	uint64_t intr;
+	int vfpf;
+
+	for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
+		intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
+		if (!intr)
+			continue;
+
+		otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
+			      vfpf, intr, dev->pf, dev->vf);
+
+		/* Save and clear intr bits */
+		dev->intr.bits[vfpf] |= intr;
+		otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
+		alarm_set = true;
+	}
+
+	if (!dev->timer_set && alarm_set) {
+		dev->timer_set = 1;
+		/* Start timer to handle messages */
+		rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS,
+				  otx2_vf_pf_mbox_handle_msg, dev);
+	}
+}
+
 static void
 otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox)
 {
@@ -118,12 +313,33 @@ static int
 mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
 {
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
-	int rc;
+	int i, rc;
+
+	/* HW clear irq */
+	for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+		otx2_write64(~0ull, dev->bar2 +
+			     RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
 
 	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
 
 	dev->timer_set = 0;
 
+	/* MBOX interrupt for VF(0...63) <-> PF */
+	rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
+			       RVU_PF_INT_VEC_VFPF_MBOX0);
+	if (rc) {
+		otx2_err("Fail to register PF(VF0-63) mbox irq");
+		return rc;
+	}
+
+	/* MBOX interrupt for VF(64...128) <-> PF */
+	rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
+			       RVU_PF_INT_VEC_VFPF_MBOX1);
+	if (rc) {
+		otx2_err("Fail to register PF(VF64-128) mbox irq");
+		return rc;
+	}
+
 	/* MBOX interrupt AF <-> PF */
 	rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
 			       RVU_PF_INT_VEC_AFPF_MBOX);
@@ -132,6 +348,11 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
 		return rc;
 	}
 
+	/* HW enable intr */
+	for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+		otx2_write64(~0ull, dev->bar2 +
+			     RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
+
 	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
 	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
 
@@ -142,11 +363,28 @@ static void
 mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
 {
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	int i;
+
+	/* HW clear irq */
+	for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+		otx2_write64(~0ull, dev->bar2 +
+			     RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
 
 	otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
 
 	dev->timer_set = 0;
 
+	rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev);
+
+	/* Unregister the interrupt handler for each vectors */
+	/* MBOX interrupt for VF(0...63) <-> PF */
+	otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
+			    RVU_PF_INT_VEC_VFPF_MBOX0);
+
+	/* MBOX interrupt for VF(64...128) <-> PF */
+	otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
+			    RVU_PF_INT_VEC_VFPF_MBOX1);
+
 	/* MBOX interrupt AF <-> PF */
 	otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
 			    RVU_PF_INT_VEC_AFPF_MBOX);
-- 
2.21.0