From mboxrd@z Thu Jan 1 00:00:00 1970
From: <jerinj@marvell.com>
To: <dev@dpdk.org>
Cc: Jerin Jacob, Kiran Kumar K
Date: Thu, 23 May 2019 13:43:30 +0530
Message-ID: <20190523081339.56348-19-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190523081339.56348-1-jerinj@marvell.com>
References: <20190523081339.56348-1-jerinj@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v1 18/27] mempool/octeontx2: add NPA HW operations

From: Jerin Jacob <jerinj@marvell.com>

Implement the low-level NPA HW operations: aura handle generation,
pointer alloc and free, aura count and limit get/set, available
pointer count, and pool pointer range setup.

Signed-off-by: Jerin Jacob
Signed-off-by: Kiran Kumar K
---
 drivers/mempool/octeontx2/otx2_mempool.h | 146 +++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
index e1c255c60..871b45870 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ b/drivers/mempool/octeontx2/otx2_mempool.h
@@ -48,6 +48,152 @@ struct otx2_npa_lf {
 
 #define AURA_ID_MASK		(BIT_ULL(16) - 1)
 
+/*
+ * Generate 64bit handle to have optimized alloc and free aura operation.
+ * 0 - AURA_ID_MASK for storing the aura_id.
+ * AURA_ID_MASK+1 - (2^64 - 1) for storing the lf base address.
+ * This scheme is valid when OS can give AURA_ID_MASK
+ * aligned address for lf base address.
+ */
+static inline uint64_t
+npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
+{
+	uint64_t val;
+
+	val = aura_id & AURA_ID_MASK;
+	return (uint64_t)addr | val;
+}
+
+static inline uint64_t
+npa_lf_aura_handle_to_aura(uint64_t aura_handle)
+{
+	return aura_handle & AURA_ID_MASK;
+}
+
+static inline uintptr_t
+npa_lf_aura_handle_to_base(uint64_t aura_handle)
+{
+	return (uintptr_t)(aura_handle & ~AURA_ID_MASK);
+}
+
+static inline uint64_t
+npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop)
+{
+	uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle);
+
+	if (drop)
+		wdata |= BIT_ULL(63); /* DROP */
+
+	return otx2_atomic64_add_nosync(wdata,
+		(int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
+		NPA_LF_AURA_OP_ALLOCX(0)));
+}
+
+static inline void
+npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
+{
+	uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
+
+	if (fabs)
+		reg |= BIT_ULL(63); /* FABS */
+
+	otx2_store_pair(iova, reg,
+	    npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0);
+}
+
+static inline uint64_t
+npa_lf_aura_op_cnt_get(uint64_t aura_handle)
+{
+	uint64_t wdata;
+	uint64_t reg;
+
+	wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
+
+	reg = otx2_atomic64_add_nosync(wdata,
+			(int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
+			 NPA_LF_AURA_OP_CNT));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */)
+		return 0;
+	else
+		return reg & 0xFFFFFFFFF;
+}
+
+static inline void
+npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
+{
+	uint64_t reg = count & (BIT_ULL(36) - 1);
+
+	if (sign)
+		reg |= BIT_ULL(43); /* CNT_ADD */
+
+	reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
+
+	otx2_write64(reg,
+		npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT);
+}
+
+static inline uint64_t
+npa_lf_aura_op_limit_get(uint64_t aura_handle)
+{
+	uint64_t wdata;
+	uint64_t reg;
+
+	wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
+
+	reg = otx2_atomic64_add_nosync(wdata,
+			(int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
+			 NPA_LF_AURA_OP_LIMIT));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */)
+		return 0;
+	else
+		return reg & 0xFFFFFFFFF;
+}
+
+static inline void
+npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
+{
+	uint64_t reg = limit & (BIT_ULL(36) - 1);
+
+	reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
+
+	otx2_write64(reg,
+		npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT);
+}
+
+static inline uint64_t
+npa_lf_aura_op_available(uint64_t aura_handle)
+{
+	uint64_t wdata;
+	uint64_t reg;
+
+	wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
+
+	reg = otx2_atomic64_add_nosync(wdata,
+		    (int64_t *)(npa_lf_aura_handle_to_base(
+		     aura_handle) + NPA_LF_POOL_OP_AVAILABLE));
+
+	if (reg & BIT_ULL(42) /* OP_ERR */)
+		return 0;
+	else
+		return reg & 0xFFFFFFFFF;
+}
+
+static inline void
+npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
+			 uint64_t end_iova)
+{
+	uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
+
+	otx2_store_pair(start_iova, reg,
+		npa_lf_aura_handle_to_base(aura_handle) +
+		NPA_LF_POOL_OP_PTR_START0);
+	otx2_store_pair(end_iova, reg,
+		npa_lf_aura_handle_to_base(aura_handle) +
+		NPA_LF_POOL_OP_PTR_END0);
+}
+
 /* NPA LF */
 int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev);
 int otx2_npa_lf_fini(void);
-- 
2.21.0
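
As a rough illustration of how these inline helpers compose, here is a minimal
usage sketch. It is not part of the patch: it assumes the NPA LF has already
been initialized through otx2_npa_lf_init(), that lf_base is an
AURA_ID_MASK-aligned LF base address, and that an aura/pool pair identified by
aura_id was already created through the mailbox. The function
npa_lf_aura_example() and its parameters are hypothetical names chosen for the
example only.

/* Hypothetical usage sketch of the NPA aura helpers added above. */
#include <inttypes.h>
#include <stdio.h>

#include "otx2_mempool.h"

static void
npa_lf_aura_example(uintptr_t lf_base, uint32_t aura_id,
		    uint64_t buf_iova, uint64_t range_len)
{
	uint64_t aura_handle, obj;

	/* Low 16 bits carry aura_id, upper bits the aligned LF base address. */
	aura_handle = npa_lf_aura_handle_gen(aura_id, lf_base);

	/* Program the IOVA range the pool's buffers come from. */
	npa_lf_aura_op_range_set(aura_handle, buf_iova, buf_iova + range_len);

	/* Return one buffer to the aura (fabs = 0). */
	npa_lf_aura_op_free(aura_handle, 0, buf_iova);

	/* Allocate a buffer back from the aura (drop = 0). */
	obj = npa_lf_aura_op_alloc(aura_handle, 0);

	printf("allocated IOVA 0x%" PRIx64 ", %" PRIu64 " pointers available\n",
	       obj, npa_lf_aura_op_available(aura_handle));
}

In the driver itself these helpers are presumably exercised by the mempool
enqueue/dequeue and pool setup operations added later in this series.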