From mboxrd@z Thu Jan  1 00:00:00 1970
To: Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru
CC: Olivier Matz
Date: Mon, 17 Jun 2019 21:25:32 +0530
Message-ID: <20190617155537.36144-23-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com>
References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v3 22/27] mempool/octeontx2: add mempool free op

From: Jerin Jacob

The DPDK mempool free operation frees the HW AURA and POOL reserved in
the alloc operation. In addition, it frees all the memory resources
allocated in the mempool alloc operation.
Cc: Olivier Matz

Signed-off-by: Jerin Jacob
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 104 +++++++++++++++++++
 1 file changed, 104 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 0e7b7a77c..94570319a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -47,6 +47,62 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
 	return NPA_LF_ERR_AURA_POOL_INIT;
 }
 
+static int
+npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
+		      uint32_t aura_id,
+		      uint64_t aura_handle)
+{
+	struct npa_aq_enq_req *aura_req, *pool_req;
+	struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
+	struct otx2_mbox_dev *mdev = &mbox->dev[0];
+	struct ndc_sync_op *ndc_req;
+	int rc, off;
+
+	/* Procedure for disabling an aura/pool */
+	rte_delay_us(10);
+	npa_lf_aura_op_alloc(aura_handle, 0);
+
+	pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	pool_req->aura_id = aura_id;
+	pool_req->ctype = NPA_AQ_CTYPE_POOL;
+	pool_req->op = NPA_AQ_INSTOP_WRITE;
+	pool_req->pool.ena = 0;
+	pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
+
+	aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	aura_req->aura_id = aura_id;
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+	aura_req->aura.ena = 0;
+	aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
+
+	otx2_mbox_msg_send(mbox, 0);
+	rc = otx2_mbox_wait_for_rsp(mbox, 0);
+	if (rc < 0)
+		return rc;
+
+	off = mbox->rx_start +
+		RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+	pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
+	aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
+		return NPA_LF_ERR_AURA_POOL_FINI;
+
+	/* Sync NDC-NPA for LF */
+	ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+	ndc_req->npa_lf_sync = 1;
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
+		return NPA_LF_ERR_AURA_POOL_FINI;
+	}
+	return 0;
+}
+
 static inline char*
 npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
 {
@@ -65,6 +121,18 @@ npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
 			   RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
 }
 
+static inline int
+npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
+	if (mz == NULL)
+		return -EINVAL;
+
+	return rte_memzone_free(mz);
+}
+
 static inline int
 bitmap_ctzll(uint64_t slab)
 {
@@ -179,6 +247,24 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
 	return rc;
 }
 
+static int
+npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+	int aura_id, pool_id, rc;
+
+	if (!lf || !aura_handle)
+		return NPA_LF_ERR_PARAM;
+
+	aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
+	rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
+	rc |= npa_lf_stack_dma_free(lf, name, pool_id);
+
+	rte_bitmap_set(lf->npa_bmp, aura_id);
+
+	return rc;
+}
+
 static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
@@ -238,9 +324,27 @@ otx2_npa_alloc(struct rte_mempool *mp)
 	return rc;
 }
 
+static void
+otx2_npa_free(struct rte_mempool *mp)
+{
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	int rc = 0;
+
+	otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
+	if (lf != NULL)
+		rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
+
+	if (rc)
+		otx2_err("Failed to free pool or aura rc=%d", rc);
+
+	/* Release the reference of npalf */
+	otx2_npa_lf_fini();
+}
+
 static struct rte_mempool_ops otx2_npa_ops = {
 	.name = "octeontx2_npa",
 	.alloc = otx2_npa_alloc,
+	.free = otx2_npa_free,
 };
 
 MEMPOOL_REGISTER_OPS(otx2_npa_ops);
-- 
2.21.0
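
A minimal usage sketch (not part of the patch itself): it shows the
application-side path that ends up in the new .free op. It assumes a DPDK
build that includes the octeontx2 mempool driver running on an OCTEON TX2
platform; the pool name, object count and object size below are arbitrary
placeholders.

/* Sketch: create a mempool bound to the "octeontx2_npa" ops, then free it;
 * rte_mempool_free() dispatches to the .free callback added by this patch. */
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_mempool.h>

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Create an empty pool first, then bind the octeontx2 NPA ops so
	 * that populate/free go through otx2_npa_alloc()/otx2_npa_free(). */
	mp = rte_mempool_create_empty("otx2_sketch_pool", 4096, 128,
				      0, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return -rte_errno;

	if (rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return -1;
	}

	/* The .free op releases the HW AURA/POOL pair and the stack
	 * memzone reserved by the alloc op. */
	rte_mempool_free(mp);

	return rte_eal_cleanup();
}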