From: Srikanth Yalavarthi
To: Jerin Jacob, Srikanth Yalavarthi
Subject: [PATCH 05/11] event/ml: add adapter queue pair add and delete
Date: Sun, 7 Jan 2024 07:34:44 -0800
Message-ID: <20240107153454.3909-6-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Added ML adapter queue pair add and delete functions.

Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/eventdev_pmd.h         |  54 ++++++++
 lib/eventdev/rte_event_ml_adapter.c | 193 ++++++++++++++++++++++++++++
 2 files changed, 247 insertions(+)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 94d505753dc..48e970a5097 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1549,6 +1549,56 @@ struct rte_ml_dev;
 typedef int (*eventdev_ml_adapter_caps_get_t)(const struct rte_eventdev *dev,
					       const struct rte_ml_dev *mldev, uint32_t *caps);
 
+/**
+ * This API may change without prior notice
+ *
+ * Add ML queue pair to event device. This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(, mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param mldev
+ *   MLDEV pointer
+ *
+ * @param queue_pair_id
+ *   MLDEV queue pair identifier.
+ *
+ * @param event
+ *   Event information required for binding mldev queue pair to event queue.
+ *   This structure will have a valid value only for those HW PMDs supporting
+ *   the @see RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND capability.
+ *
+ * @return
+ *   - 0: Success, mldev queue pair added successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_ml_adapter_queue_pair_add_t)(const struct rte_eventdev *dev,
+						    const struct rte_ml_dev *mldev,
+						    int32_t queue_pair_id,
+						    const struct rte_event *event);
+
+/**
+ * This API may change without prior notice
+ *
+ * Delete ML queue pair from event device. This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(, mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param mldev
+ *   MLDEV pointer
+ *
+ * @param queue_pair_id
+ *   MLDEV queue pair identifier.
+ *
+ * @return
+ *   - 0: Success, mldev queue pair deleted successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_ml_adapter_queue_pair_del_t)(const struct rte_eventdev *dev,
+						    const struct rte_ml_dev *mldev,
+						    int32_t queue_pair_id);
+
 /** Event device operations function pointer table */
 struct eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -1690,6 +1740,10 @@ struct eventdev_ops {
 
 	eventdev_ml_adapter_caps_get_t ml_adapter_caps_get;
 	/**< Get ML adapter capabilities */
+	eventdev_ml_adapter_queue_pair_add_t ml_adapter_queue_pair_add;
+	/**< Add queue pair to ML adapter */
+	eventdev_ml_adapter_queue_pair_del_t ml_adapter_queue_pair_del;
+	/**< Delete queue pair from ML adapter */
 
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */
diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index 93ba58b3e9e..9d441c5d967 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -33,10 +33,27 @@ struct ml_ops_circular_buffer {
 	struct rte_ml_op **op_buffer;
 } __rte_cache_aligned;
 
+/* Queue pair information */
+struct ml_queue_pair_info {
+	/* Set to indicate queue pair is enabled */
+	bool qp_enabled;
+
+	/* Circular buffer for batching ML ops to mldev */
+	struct ml_ops_circular_buffer mlbuf;
+} __rte_cache_aligned;
+
 /* ML device information */
 struct ml_device_info {
 	/* Pointer to mldev */
 	struct rte_ml_dev *dev;
+
+	/* Pointer to queue pair info */
+	struct ml_queue_pair_info *qpairs;
+
+	/* If num_qpairs > 0, the start callback will
+	 * be invoked if not already invoked
+	 */
+	uint16_t num_qpairs;
 } __rte_cache_aligned;
 
 struct event_ml_adapter {
@@ -72,6 +89,9 @@ struct event_ml_adapter {
 
 	/* Set if default_cb is being used */
 	int default_cb_arg;
+
+	/* No. of queue pairs configured */
+	uint16_t nb_qps;
 } __rte_cache_aligned;
 
 static struct event_ml_adapter **event_ml_adapter;
@@ -340,3 +360,176 @@ rte_event_ml_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
 
 	return 0;
 }
+
+static void
+emla_update_qp_info(struct event_ml_adapter *adapter, struct ml_device_info *dev_info,
+		    int32_t queue_pair_id, uint8_t add)
+{
+	struct ml_queue_pair_info *qp_info;
+	int enabled;
+	uint16_t i;
+
+	if (dev_info->qpairs == NULL)
+		return;
+
+	if (queue_pair_id == -1) {
+		for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++)
+			emla_update_qp_info(adapter, dev_info, i, add);
+	} else {
+		qp_info = &dev_info->qpairs[queue_pair_id];
+		enabled = qp_info->qp_enabled;
+		if (add) {
+			adapter->nb_qps += !enabled;
+			dev_info->num_qpairs += !enabled;
+		} else {
+			adapter->nb_qps -= enabled;
+			dev_info->num_qpairs -= enabled;
+		}
+		qp_info->qp_enabled = !!add;
+	}
+}
+
+int
+rte_event_ml_adapter_queue_pair_add(uint8_t id, int16_t mldev_id, int32_t queue_pair_id,
+				    const struct rte_event *event)
+{
+	struct event_ml_adapter *adapter;
+	struct ml_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint32_t cap;
+	int ret;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	if (!rte_ml_dev_is_valid_dev(mldev_id)) {
+		RTE_EDEV_LOG_ERR("Invalid mldev_id = %" PRId16, mldev_id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	ret = rte_event_ml_adapter_caps_get(adapter->eventdev_id, mldev_id, &cap);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u mldev %u", id, mldev_id);
+		return ret;
+	}
+
+	if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) && (event == NULL)) {
+		RTE_EDEV_LOG_ERR("Event cannot be NULL for mldev_id = %u", mldev_id);
+		return -EINVAL;
+	}
+
+	dev_info = &adapter->mldevs[mldev_id];
+	if (queue_pair_id != -1 && (uint16_t)queue_pair_id >= dev_info->dev->data->nb_queue_pairs) {
+		RTE_EDEV_LOG_ERR("Invalid queue_pair_id %u", (uint16_t)queue_pair_id);
+		return -EINVAL;
+	}
+
+	/* In case HW cap is RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no
+	 * need of service core as HW supports event forward capability.
+	 */
+	if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) &&
+	     adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW) ||
+	    ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
+	     adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW)) {
+		if (*dev->dev_ops->ml_adapter_queue_pair_add == NULL)
+			return -ENOTSUP;
+
+		if (dev_info->qpairs == NULL) {
+			dev_info->qpairs =
+				rte_zmalloc_socket(adapter->mem_name,
+						   dev_info->dev->data->nb_queue_pairs *
+							sizeof(struct ml_queue_pair_info),
+						   0, adapter->socket_id);
+			if (dev_info->qpairs == NULL)
+				return -ENOMEM;
+		}
+
+		ret = (*dev->dev_ops->ml_adapter_queue_pair_add)(dev, dev_info->dev, queue_pair_id,
+								 event);
+		if (ret == 0)
+			emla_update_qp_info(adapter, &adapter->mldevs[mldev_id], queue_pair_id, 1);
+	}
+
+	return ret;
+}
+
+int
+rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_pair_id)
+{
+	struct event_ml_adapter *adapter;
+	struct ml_device_info *dev_info;
+	struct rte_eventdev *dev;
+	int ret;
+	uint32_t cap;
+	uint16_t i;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	if (!rte_ml_dev_is_valid_dev(mldev_id)) {
+		RTE_EDEV_LOG_ERR("Invalid mldev_id = %" PRId16, mldev_id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	ret = rte_event_ml_adapter_caps_get(adapter->eventdev_id, mldev_id, &cap);
+	if (ret)
+		return ret;
+
+	dev_info = &adapter->mldevs[mldev_id];
+
+	if (queue_pair_id != -1 && (uint16_t)queue_pair_id >= dev_info->dev->data->nb_queue_pairs) {
+		RTE_EDEV_LOG_ERR("Invalid queue_pair_id %" PRIu16, (uint16_t)queue_pair_id);
+		return -EINVAL;
+	}
+
+	if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
+	     adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW)) {
+		if (*dev->dev_ops->ml_adapter_queue_pair_del == NULL)
+			return -ENOTSUP;
+
+		ret = (*dev->dev_ops->ml_adapter_queue_pair_del)(dev, dev_info->dev, queue_pair_id);
+		if (ret == 0) {
+			emla_update_qp_info(adapter, &adapter->mldevs[mldev_id], queue_pair_id, 0);
+			if (dev_info->num_qpairs == 0) {
+				rte_free(dev_info->qpairs);
+				dev_info->qpairs = NULL;
+			}
+		}
+	} else {
+		if (adapter->nb_qps == 0)
+			return 0;
+
+		rte_spinlock_lock(&adapter->lock);
+		if (queue_pair_id == -1) {
+			for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++)
+				emla_update_qp_info(adapter, dev_info, i, 0);
+		} else {
+			emla_update_qp_info(adapter, dev_info, (uint16_t)queue_pair_id, 0);
+		}
+
+		if (dev_info->num_qpairs == 0) {
+			rte_free(dev_info->qpairs);
+			dev_info->qpairs = NULL;
+		}
+
+		rte_spinlock_unlock(&adapter->lock);
+	}
+
+	return ret;
+}
-- 
2.42.0