From: Jerin Jacob
To: Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru
Cc: Olivier Matz
Subject: [dpdk-dev] [PATCH v4 22/27] mempool/octeontx2: add mempool free op
Date: Sat, 22 Jun 2019 18:54:12 +0530
Message-ID: <20190622132417.32694-23-jerinj@marvell.com>
In-Reply-To: <20190622132417.32694-1-jerinj@marvell.com>
References: <20190617155537.36144-1-jerinj@marvell.com> <20190622132417.32694-1-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0

From: Jerin Jacob

The DPDK mempool free operation frees the HW AURA and POOL reserved in
the alloc operation. In addition to that, it frees all the memory
resources allocated in the mempool alloc operation.

Cc: Olivier Matz
Signed-off-by: Jerin Jacob
---
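For context, a minimal usage sketch (not part of the patch): calling
rte_mempool_free() on a mempool bound to the "octeontx2_npa" ops is what ends
up invoking the otx2_npa_free() callback added below. The pool name, object
count/size and the bare-bones error handling are illustrative assumptions, and
this presumes an OCTEON TX2 platform with this mempool driver built in:

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* "demo_pool": 8192 objects of 2048 B, 256-deep per-lcore cache
	 * (illustrative values only).
	 */
	mp = rte_mempool_create_empty("demo_pool", 8192, 2048, 256, 0,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return -1;

	/* Bind the pool to the HW-backed ops registered by this driver. */
	if (rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return -1;
	}

	/* ... get/put objects with rte_mempool_get()/rte_mempool_put() ... */

	/*
	 * Invokes the .free op added by this patch: otx2_npa_free() disables
	 * the AURA/POOL pair, frees the stack memzone and drops the NPA LF
	 * reference before the mempool's own memory is released.
	 */
	rte_mempool_free(mp);

	return rte_eal_cleanup();
}

rte_mempool_free() calls the registered .free op before releasing the
mempool's own memory, which is why the driver callback only has to tear down
the hardware AURA/POOL pair, the stack memzone and the NPA LF reference.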
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 104 +++++++++++++++++++
 1 file changed, 104 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 0e7b7a77c..94570319a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -47,6 +47,62 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
 	return NPA_LF_ERR_AURA_POOL_INIT;
 }
 
+static int
+npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
+		      uint32_t aura_id,
+		      uint64_t aura_handle)
+{
+	struct npa_aq_enq_req *aura_req, *pool_req;
+	struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
+	struct otx2_mbox_dev *mdev = &mbox->dev[0];
+	struct ndc_sync_op *ndc_req;
+	int rc, off;
+
+	/* Procedure for disabling an aura/pool */
+	rte_delay_us(10);
+	npa_lf_aura_op_alloc(aura_handle, 0);
+
+	pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	pool_req->aura_id = aura_id;
+	pool_req->ctype = NPA_AQ_CTYPE_POOL;
+	pool_req->op = NPA_AQ_INSTOP_WRITE;
+	pool_req->pool.ena = 0;
+	pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
+
+	aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	aura_req->aura_id = aura_id;
+	aura_req->ctype = NPA_AQ_CTYPE_AURA;
+	aura_req->op = NPA_AQ_INSTOP_WRITE;
+	aura_req->aura.ena = 0;
+	aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
+
+	otx2_mbox_msg_send(mbox, 0);
+	rc = otx2_mbox_wait_for_rsp(mbox, 0);
+	if (rc < 0)
+		return rc;
+
+	off = mbox->rx_start +
+		RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+	pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
+	aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+	if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
+		return NPA_LF_ERR_AURA_POOL_FINI;
+
+	/* Sync NDC-NPA for LF */
+	ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+	ndc_req->npa_lf_sync = 1;
+
+	rc = otx2_mbox_process(mbox);
+	if (rc) {
+		otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
+		return NPA_LF_ERR_AURA_POOL_FINI;
+	}
+	return 0;
+}
+
 static inline char*
 npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
 {
@@ -65,6 +121,18 @@ npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
 			       RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
 }
 
+static inline int
+npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
+	if (mz == NULL)
+		return -EINVAL;
+
+	return rte_memzone_free(mz);
+}
+
 static inline int
 bitmap_ctzll(uint64_t slab)
 {
@@ -179,6 +247,24 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
 	return rc;
 }
 
+static int
+npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
+{
+	char name[RTE_MEMZONE_NAMESIZE];
+	int aura_id, pool_id, rc;
+
+	if (!lf || !aura_handle)
+		return NPA_LF_ERR_PARAM;
+
+	aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
+	rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
+	rc |= npa_lf_stack_dma_free(lf, name, pool_id);
+
+	rte_bitmap_set(lf->npa_bmp, aura_id);
+
+	return rc;
+}
+
 static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
@@ -238,9 +324,27 @@ otx2_npa_alloc(struct rte_mempool *mp)
 	return rc;
 }
 
+static void
+otx2_npa_free(struct rte_mempool *mp)
+{
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	int rc = 0;
+
+	otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
+	if (lf != NULL)
+		rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
+
+	if (rc)
+		otx2_err("Failed to free pool or aura rc=%d", rc);
+
+	/* Release the reference of npalf */
+	otx2_npa_lf_fini();
+}
+
 static struct rte_mempool_ops otx2_npa_ops = {
 	.name = "octeontx2_npa",
 	.alloc = otx2_npa_alloc,
+	.free = otx2_npa_free,
 };
 
 MEMPOOL_REGISTER_OPS(otx2_npa_ops);
-- 
2.21.0