From mboxrd@z Thu Jan 1 00:00:00 1970
From: 
To: , Bruce Richardson , Anatoly Burakov 
CC: , Pavan Nikhilesh 
Subject: [dpdk-dev] [PATCH v4 03/14] eventdev: allocate max space for internal arrays
Date: Sat, 16 Oct 2021 00:32:10 +0530
Message-ID: <20211015190221.2160-3-pbhagavatula@marvell.com>
In-Reply-To: <20211015190221.2160-1-pbhagavatula@marvell.com>
References: <20211006065012.16508-1-pbhagavatula@marvell.com>
 <20211015190221.2160-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Allocate maximum space for the internal port, port config, queue config
and link map arrays. Introduce a new macro, RTE_EVENT_MAX_PORTS_PER_DEV,
and set it to the maximum possible value. This simplifies the port and
queue reconfiguration paths and also lets inline functions take a
pointer to internal port data without first checking the current number
of configured queues.
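[Editor's illustration, not part of the patch: the shape of the change
in miniature. The names below (dev_data, queue_conf, queue_config,
MAX_QUEUES) are simplified stand-ins for the real rte_eventdev
structures and macros. With worst-case storage embedded in the device
data, reconfiguration no longer calls rte_zmalloc_socket() or
rte_realloc(); it only zeroes or releases the affected range.]

	#include <stdint.h>
	#include <string.h>

	#define MAX_QUEUES 255	/* stand-in for RTE_EVENT_MAX_QUEUES_PER_DEV */

	struct queue_conf {
		uint32_t flags;
	};

	struct dev_data {
		uint8_t nb_queues;
		/* Before: struct queue_conf *queues_cfg, grown with realloc.
		 * After: worst-case storage lives inside the struct. */
		struct queue_conf queues_cfg[MAX_QUEUES];
	};

	/* Reconfiguring never allocates; newly added slots are zeroed. */
	static void queue_config(struct dev_data *d, uint8_t nb_queues)
	{
		if (nb_queues > d->nb_queues)
			memset(&d->queues_cfg[d->nb_queues], 0,
			       sizeof(d->queues_cfg[0]) *
			       (nb_queues - d->nb_queues));
		d->nb_queues = nb_queues;
	}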
Signed-off-by: Pavan Nikhilesh
---
 config/rte_config.h              |   1 +
 lib/eventdev/rte_eventdev.c      | 154 +++++++------------------------
 lib/eventdev/rte_eventdev_core.h |   9 +-
 3 files changed, 38 insertions(+), 126 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 590903c07d..e0ead8b251 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -72,6 +72,7 @@
 
 /* eventdev defines */
 #define RTE_EVENT_MAX_DEVS 16
+#define RTE_EVENT_MAX_PORTS_PER_DEV 255
 #define RTE_EVENT_MAX_QUEUES_PER_DEV 255
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024

diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index e347d6dfd5..bfcfa31cd1 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -209,7 +209,7 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
 }
 
 static inline int
-rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
+event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
 	uint8_t old_nb_queues = dev->data->nb_queues;
 	struct rte_event_queue_conf *queues_cfg;
@@ -218,37 +218,13 @@ rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 	RTE_EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
 			   dev->data->dev_id);
 
-	/* First time configuration */
-	if (dev->data->queues_cfg == NULL && nb_queues != 0) {
-		/* Allocate memory to store queue configuration */
-		dev->data->queues_cfg = rte_zmalloc_socket(
-				"eventdev->data->queues_cfg",
-				sizeof(dev->data->queues_cfg[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
-		if (dev->data->queues_cfg == NULL) {
-			dev->data->nb_queues = 0;
-			RTE_EDEV_LOG_ERR("failed to get mem for queue cfg,"
-					"nb_queues %u", nb_queues);
-			return -(ENOMEM);
-		}
-	/* Re-configure */
-	} else if (dev->data->queues_cfg != NULL && nb_queues != 0) {
+	if (nb_queues != 0) {
+		queues_cfg = dev->data->queues_cfg;
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release,
 					-ENOTSUP);
 
 		for (i = nb_queues; i < old_nb_queues; i++)
 			(*dev->dev_ops->queue_release)(dev, i);
 
-		/* Re allocate memory to store queue configuration */
-		queues_cfg = dev->data->queues_cfg;
-		queues_cfg = rte_realloc(queues_cfg,
-				sizeof(queues_cfg[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (queues_cfg == NULL) {
-			RTE_EDEV_LOG_ERR("failed to realloc queue cfg memory,"
-					" nb_queues %u", nb_queues);
-			return -(ENOMEM);
-		}
-		dev->data->queues_cfg = queues_cfg;
-
 		if (nb_queues > old_nb_queues) {
 			uint8_t new_qs = nb_queues - old_nb_queues;
@@ -256,7 +232,7 @@ rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 			memset(queues_cfg + old_nb_queues, 0,
 			       sizeof(queues_cfg[0]) * new_qs);
 		}
-	} else if (dev->data->queues_cfg != NULL && nb_queues == 0) {
+	} else {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release,
 					-ENOTSUP);
 
 		for (i = nb_queues; i < old_nb_queues; i++)
@@ -270,7 +246,7 @@ rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 #define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
 
 static inline int
-rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
+event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 {
 	uint8_t old_nb_ports = dev->data->nb_ports;
 	void **ports;
@@ -281,46 +257,7 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
 			   dev->data->dev_id);
 
-	/* First time configuration */
-	if (dev->data->ports == NULL && nb_ports != 0) {
-		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
-				sizeof(dev->data->ports[0]) * nb_ports,
-				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
-		if (dev->data->ports == NULL) {
-			dev->data->nb_ports = 0;
-			RTE_EDEV_LOG_ERR("failed to get mem for port meta data,"
-					"nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-
-		/* Allocate memory to store port configurations */
-		dev->data->ports_cfg =
-			rte_zmalloc_socket("eventdev->ports_cfg",
-				sizeof(dev->data->ports_cfg[0]) * nb_ports,
-				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
-		if (dev->data->ports_cfg == NULL) {
-			dev->data->nb_ports = 0;
-			RTE_EDEV_LOG_ERR("failed to get mem for port cfg,"
-					"nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-
-		/* Allocate memory to store queue to port link connection */
-		dev->data->links_map =
-			rte_zmalloc_socket("eventdev->links_map",
-				sizeof(dev->data->links_map[0]) * nb_ports *
-				RTE_EVENT_MAX_QUEUES_PER_DEV,
-				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
-		if (dev->data->links_map == NULL) {
-			dev->data->nb_ports = 0;
-			RTE_EDEV_LOG_ERR("failed to get mem for port_map area,"
-					"nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-		for (i = 0; i < nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV; i++)
-			dev->data->links_map[i] =
-				EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
-	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
+	if (nb_ports != 0) { /* re-config */
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
 					-ENOTSUP);
 
 		ports = dev->data->ports;
@@ -330,37 +267,6 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 		for (i = nb_ports; i < old_nb_ports; i++)
 			(*dev->dev_ops->port_release)(ports[i]);
 
-		/* Realloc memory for ports */
-		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
-				RTE_CACHE_LINE_SIZE);
-		if (ports == NULL) {
-			RTE_EDEV_LOG_ERR("failed to realloc port meta data,"
-					" nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-
-		/* Realloc memory for ports_cfg */
-		ports_cfg = rte_realloc(ports_cfg,
-				sizeof(ports_cfg[0]) * nb_ports,
-				RTE_CACHE_LINE_SIZE);
-		if (ports_cfg == NULL) {
-			RTE_EDEV_LOG_ERR("failed to realloc port cfg mem,"
-					" nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-
-		/* Realloc memory to store queue to port link connection */
-		links_map = rte_realloc(links_map,
-				sizeof(dev->data->links_map[0]) * nb_ports *
-				RTE_EVENT_MAX_QUEUES_PER_DEV,
-				RTE_CACHE_LINE_SIZE);
-		if (links_map == NULL) {
-			dev->data->nb_ports = 0;
-			RTE_EDEV_LOG_ERR("failed to realloc mem for port_map,"
-					"nb_ports %u", nb_ports);
-			return -(ENOMEM);
-		}
-
 		if (nb_ports > old_nb_ports) {
 			uint8_t new_ps = nb_ports - old_nb_ports;
 			unsigned int old_links_map_end =
@@ -376,16 +282,14 @@ rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
 			links_map[i] =
 				EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
 		}
-
-		dev->data->ports = ports;
-		dev->data->ports_cfg = ports_cfg;
-		dev->data->links_map = links_map;
-	} else if (dev->data->ports != NULL && nb_ports == 0) {
+	} else {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
 					-ENOTSUP);
 
 		ports = dev->data->ports;
-		for (i = nb_ports; i < old_nb_ports; i++)
+		for (i = nb_ports; i < old_nb_ports; i++) {
 			(*dev->dev_ops->port_release)(ports[i]);
+			ports[i] = NULL;
+		}
 	}
 
 	dev->data->nb_ports = nb_ports;
@@ -550,19 +454,19 @@ rte_event_dev_configure(uint8_t dev_id,
 	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
 
 	/* Setup new number of queues and reconfigure device. */
-	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
+	diag = event_dev_queue_config(dev, dev_conf->nb_event_queues);
 	if (diag != 0) {
-		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
-				dev_id, diag);
+		RTE_EDEV_LOG_ERR("dev%d event_dev_queue_config = %d", dev_id,
+				 diag);
 		return diag;
 	}
 
 	/* Setup new number of ports and reconfigure device. */
-	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
+	diag = event_dev_port_config(dev, dev_conf->nb_event_ports);
 	if (diag != 0) {
-		rte_event_dev_queue_config(dev, 0);
-		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
-				dev_id, diag);
+		event_dev_queue_config(dev, 0);
+		RTE_EDEV_LOG_ERR("dev%d event_dev_port_config = %d", dev_id,
+				 diag);
 		return diag;
 	}
 
@@ -570,8 +474,8 @@
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
-		rte_event_dev_queue_config(dev, 0);
-		rte_event_dev_port_config(dev, 0);
+		event_dev_queue_config(dev, 0);
+		event_dev_port_config(dev, 0);
 	}
 
 	dev->data->event_dev_cap = info.event_dev_cap;
@@ -1403,8 +1307,8 @@ rte_event_dev_close(uint8_t dev_id)
 }
 
 static inline int
-rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
-		int socket_id)
+eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
+		    int socket_id)
 {
 	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
 	const struct rte_memzone *mz;
@@ -1426,14 +1330,20 @@ rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
 		return -ENOMEM;
 
 	*data = mz->addr;
-	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		memset(*data, 0, sizeof(struct rte_eventdev_data));
+		for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV *
+					RTE_EVENT_MAX_QUEUES_PER_DEV;
+		     n++)
+			(*data)->links_map[n] =
+				EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+	}
 
 	return 0;
 }
 
 static inline uint8_t
-rte_eventdev_find_free_device_index(void)
+eventdev_find_free_device_index(void)
 {
 	uint8_t dev_id;
 
@@ -1475,7 +1385,7 @@ rte_event_pmd_allocate(const char *name, int socket_id)
 		return NULL;
 	}
 
-	dev_id = rte_eventdev_find_free_device_index();
+	dev_id = eventdev_find_free_device_index();
 	if (dev_id == RTE_EVENT_MAX_DEVS) {
 		RTE_EDEV_LOG_ERR("Reached maximum number of event devices");
 		return NULL;
@@ -1490,8 +1400,8 @@ rte_event_pmd_allocate(const char *name, int socket_id)
 	if (eventdev->data == NULL) {
 		struct rte_eventdev_data *eventdev_data = NULL;
 
-		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
-				socket_id);
+		int retval =
+			eventdev_data_alloc(dev_id, &eventdev_data, socket_id);
 
 		if (retval < 0 || eventdev_data == NULL)
 			return NULL;

diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index b97cdf84fe..115b97e431 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -58,13 +58,14 @@ struct rte_eventdev_data {
 	/**< Number of event queues. */
 	uint8_t nb_ports;
 	/**< Number of event ports. */
-	void **ports;
+	void *ports[RTE_EVENT_MAX_PORTS_PER_DEV];
 	/**< Array of pointers to ports. */
-	struct rte_event_port_conf *ports_cfg;
+	struct rte_event_port_conf ports_cfg[RTE_EVENT_MAX_PORTS_PER_DEV];
 	/**< Array of port configuration structures. */
-	struct rte_event_queue_conf *queues_cfg;
+	struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/**< Array of queue configuration structures. */
-	uint16_t *links_map;
+	uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV *
+			   RTE_EVENT_MAX_QUEUES_PER_DEV];
 	/**< Memory to store queues to port connections. */
 	void *dev_private;
 	/**< PMD-specific private data */
-- 
2.17.1
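[Editor's note on the "inline functions" claim in the commit message:
with ports[] at a fixed size, and released slots NULL'ed in
event_dev_port_config() above, a fast-path helper can index the array
directly. A hedged sketch with hypothetical names (dev_data,
port_handle), not code from this series:]

	#include <stddef.h>
	#include <stdint.h>

	#define MAX_PORTS 255	/* stand-in for RTE_EVENT_MAX_PORTS_PER_DEV */

	struct dev_data {
		uint8_t nb_ports;
		void *ports[MAX_PORTS];	/* unconfigured slots stay NULL */
	};

	/* Hypothetical fast-path helper: any port_id < MAX_PORTS is in
	 * bounds without consulting nb_ports; a never-configured port is
	 * simply a NULL entry rather than an out-of-range pointer. */
	static inline void *port_handle(struct dev_data *d, uint8_t port_id)
	{
		return port_id < MAX_PORTS ? d->ports[port_id] : NULL;
	}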