From mboxrd@z Thu Jan 1 00:00:00 1970
From: <jerinj@marvell.com>
To: <dev@dpdk.org>
CC: Jerin Jacob, Harman Kalra
Date: Thu, 23 May 2019 13:43:38 +0530
Message-ID: <20190523081339.56348-27-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190523081339.56348-1-jerinj@marvell.com>
References: <20190523081339.56348-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 26/27] mempool/octeontx2: add devargs for max pool selection
List-Id: DPDK patches and discussions

From: Jerin Jacob <jerinj@marvell.com>

The maximum number of mempools per application needs to be configured
on HW during mempool driver initialization. The HW can support up to 1M
mempools. Since each mempool consumes a set of HW resources, the
max_pools devargs parameter is introduced to configure the number of
mempools required by the application.

For example:

	-w 0002:02:00.0,max_pools=512

With the above configuration, the driver sets up only 512 mempools for
the given application, saving HW resources.
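[Editor's note: a standalone sketch, not part of the patch, modelling the
encoding described above. An aura size value N represents 1 << (N + 6)
pools, so 128 pools encode as 1 (NPA_AURA_SZ_128 in the driver) and 1M
pools encode as 14 (NPA_AURA_SZ_1M). The helper name pools_to_aura_sz()
is hypothetical; the driver itself does this in parse_max_pools() below
using rte_log2_u32().]

#include <stdint.h>
#include <stdio.h>

/* Hypothetical standalone model of the max_pools -> aura size mapping:
 * clamp the request to the supported [128, 1M] range, then take
 * log2(val) - 6, rounding val up to the next power of two.
 */
static uint8_t
pools_to_aura_sz(uint32_t requested)
{
	uint32_t val = requested;
	uint8_t sz = 0;

	if (val < 128)		/* clamp to the 128-pool floor */
		val = 128;
	if (val > (1u << 20))	/* clamp to the 1M-pool ceiling */
		val = 1u << 20;

	/* smallest sz with (1u << (sz + 6)) >= val */
	while ((1u << (sz + 6)) < val)
		sz++;

	return sz;
}

int
main(void)
{
	printf("max_pools=512     -> aura_sz=%u\n",
	       pools_to_aura_sz(512));		/* prints 3 */
	printf("max_pools=1048576 -> aura_sz=%u\n",
	       pools_to_aura_sz(1u << 20));	/* prints 14 */
	return 0;
}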
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Harman Kalra
---
 drivers/mempool/octeontx2/otx2_mempool.c | 41 +++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 1bcb86cf4..ff7fcac85 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -7,6 +7,7 @@
 #include <rte_atomic.h>
 #include <rte_bus_pci.h>
 #include <rte_io.h>
+#include <rte_kvargs.h>
 #include <rte_malloc.h>
 #include <rte_mbuf_pool_ops.h>
 #include <rte_pci.h>
@@ -142,6 +143,42 @@ otx2_aura_size_to_u32(uint8_t val)
 	return 1 << (val + 6);
 }
 
+static int
+parse_max_pools(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+	if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
+		val = 128;
+	if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
+		val = BIT_ULL(20);
+
+	*(uint8_t *)extra_args = rte_log2_u32(val) - 6;
+	return 0;
+}
+
+#define OTX2_MAX_POOLS "max_pools"
+
+static uint8_t
+otx2_parse_aura_size(struct rte_devargs *devargs)
+{
+	uint8_t aura_sz = NPA_AURA_SZ_128;
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		goto exit;
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+	rte_kvargs_free(kvlist);
+exit:
+	return aura_sz;
+}
+
 static inline int
 npa_lf_attach(struct otx2_mbox *mbox)
 {
@@ -234,7 +271,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
 	if (rc)
 		goto npa_detach;
 
-	aura_sz = NPA_AURA_SZ_128;
+	aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
 	nr_pools = otx2_aura_size_to_u32(aura_sz);
 
 	lf = &dev->npalf;
@@ -397,3 +434,5 @@ static struct rte_pci_driver pci_npa = {
 RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
 RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
 RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
+			      OTX2_MAX_POOLS "=<128-1048576>");
-- 
2.21.0
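
[Editor's note: for context, a minimal usage sketch, not part of the
patch, showing how an application would hand the new devarg to the
driver through the EAL. The PCI address and pool count are
placeholders; only standard EAL calls are used.]

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_debug.h>

int
main(int argc, char **argv)
{
	/* Hand-built EAL argv: -w attaches the NPA device (BDF is a
	 * placeholder) and caps the HW mempool count at 512 instead of
	 * the driver default of 128 (NPA_AURA_SZ_128).
	 */
	char *eal_args[] = {
		argv[0],
		"-w", "0002:02:00.0,max_pools=512",
	};

	(void)argc;
	if (rte_eal_init(3, eal_args) < 0)
		rte_panic("EAL init failed: %s\n", rte_strerror(rte_errno));

	/* ... create mempools as usual; the NPA LF now exposes up to
	 * 512 auras/pools to this application. ...
	 */
	return rte_eal_cleanup();
}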