From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ashwin Sekhar T K
Date: Thu, 8 Apr 2021 15:20:40 +0530
Message-ID: <20210408095049.3100322-3-asekhar@marvell.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20210408095049.3100322-1-asekhar@marvell.com>
References: <20210305162149.2196166-1-asekhar@marvell.com>
 <20210408095049.3100322-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH v4 02/11] mempool/cnxk: add device probe/remove

Add the implementation for CNXk mempool device probe and remove.

Signed-off-by: Pavan Nikhilesh
Signed-off-by: Ashwin Sekhar T K
---
 doc/guides/mempool/cnxk.rst         |  23 +++++
 drivers/mempool/cnxk/cnxk_mempool.c | 132 +++++++++++++++++++++++++++-
 2 files changed, 151 insertions(+), 4 deletions(-)

diff --git a/doc/guides/mempool/cnxk.rst b/doc/guides/mempool/cnxk.rst
index e72a77c361..907c19c841 100644
--- a/doc/guides/mempool/cnxk.rst
+++ b/doc/guides/mempool/cnxk.rst
@@ -30,6 +30,29 @@
 Pre-Installation Configuration
 ------------------------------
 
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum number of mempools per application`` (default ``128``)
+
+  The maximum number of mempools per application needs to be configured on
+  HW during mempool driver initialization.
+  HW can support up to 1M mempools. Since each mempool costs a set of HW
+  resources, the ``max_pools`` ``devargs`` parameter is introduced to
+  configure the number of mempools required by the application.
+  For example::
+
+     -a 0002:02:00.0,max_pools=512
+
+  With the above configuration, the driver will set up only 512 mempools for
+  the given application to save HW resources.
+
+.. note::
+
+   Since this configuration is per application, the end user needs to
+   provide the ``max_pools`` parameter to the first PCIe device probed by
+   the given application.
+
 Debugging Options
 ~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index 947078c052..dd4d74ca05 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -15,21 +15,143 @@
 
 #include "roc_api.h"
 
+#define CNXK_NPA_DEV_NAME	 RTE_STR(cnxk_npa_dev_)
+#define CNXK_NPA_DEV_NAME_LEN	 (sizeof(CNXK_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
+#define CNXK_NPA_MAX_POOLS_PARAM "max_pools"
+
+static inline uint32_t
+npa_aura_size_to_u32(uint8_t val)
+{
+	if (val == NPA_AURA_SZ_0)
+		return 128;
+	if (val >= NPA_AURA_SZ_MAX)
+		return BIT_ULL(20);
+
+	return 1 << (val + 6);
+}
+
 static int
-npa_remove(struct rte_pci_device *pci_dev)
+parse_max_pools(const char *key, const char *value, void *extra_args)
+{
+	uint32_t val;
+
+	RTE_SET_USED(key);
+
+	val = atoi(value);
+	if (val < npa_aura_size_to_u32(NPA_AURA_SZ_128))
+		val = 128;
+	if (val > npa_aura_size_to_u32(NPA_AURA_SZ_1M))
+		val = BIT_ULL(20);
+
+	*(uint8_t *)extra_args = rte_log2_u32(val) - 6;
+	return 0;
+}
+
+static inline uint8_t
+parse_aura_size(struct rte_devargs *devargs)
+{
+	uint8_t aura_sz = NPA_AURA_SZ_128;
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		goto exit;
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, CNXK_NPA_MAX_POOLS_PARAM, &parse_max_pools,
+			   &aura_sz);
+	rte_kvargs_free(kvlist);
+exit:
+	return aura_sz;
+}
+
+static inline char *
+npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
+{
+	snprintf(name, CNXK_NPA_DEV_NAME_LEN, CNXK_NPA_DEV_NAME PCI_PRI_FMT,
+		 pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	return name;
+}
+
+static int
+npa_init(struct rte_pci_device *pci_dev)
 {
-	RTE_SET_USED(pci_dev);
+	char name[CNXK_NPA_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	struct roc_npa *dev;
+	int rc = -ENOMEM;
+
+	mz = rte_memzone_reserve_aligned(npa_dev_to_name(pci_dev, name),
+					 sizeof(*dev), SOCKET_ID_ANY, 0,
+					 RTE_CACHE_LINE_SIZE);
+	if (mz == NULL)
+		goto error;
+
+	dev = mz->addr;
+	dev->pci_dev = pci_dev;
+
+	roc_idev_npa_maxpools_set(parse_aura_size(pci_dev->device.devargs));
+	rc = roc_npa_dev_init(dev);
+	if (rc)
+		goto mz_free;
+
+	return 0;
+
+mz_free:
+	rte_memzone_free(mz);
+error:
+	plt_err("failed to initialize npa device rc=%d", rc);
+	return rc;
+}
+
+static int
+npa_fini(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NPA_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	int rc;
+
+	mz = rte_memzone_lookup(npa_dev_to_name(pci_dev, name));
+	if (mz == NULL)
+		return -EINVAL;
+
+	rc = roc_npa_dev_fini(mz->addr);
+	if (rc) {
+		if (rc != -EAGAIN)
+			plt_err("Failed to remove npa dev, rc=%d", rc);
+		return rc;
+	}
+	rte_memzone_free(mz);
 
 	return 0;
 }
 
+static int
+npa_remove(struct rte_pci_device *pci_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return npa_fini(pci_dev);
+}
+
 static int
 npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
+	int rc;
+
 	RTE_SET_USED(pci_drv);
-	RTE_SET_USED(pci_dev);
 
-	return 0;
+	rc = roc_plt_init();
+	if (rc < 0)
+		return rc;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return npa_init(pci_dev);
 }
 
 static const struct rte_pci_id npa_pci_map[] = {
@@ -76,3 +198,5 @@ static struct rte_pci_driver npa_pci = {
 
 RTE_PMD_REGISTER_PCI(mempool_cnxk, npa_pci);
 RTE_PMD_REGISTER_PCI_TABLE(mempool_cnxk, npa_pci_map);
 RTE_PMD_REGISTER_KMOD_DEP(mempool_cnxk, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(mempool_cnxk,
+			      CNXK_NPA_MAX_POOLS_PARAM "=<128-1048576>");
-- 
2.31.0