* [PATCH 0/4] Implementation of CNXK ML event adapter driver
@ 2024-01-07 15:40 Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 1/4] event/cnxk: add ML adapter capabilities get Srikanth Yalavarthi
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Srikanth Yalavarthi @ 2024-01-07 15:40 UTC (permalink / raw)
Cc: dev, aprabhu, syalavarthi, sshankarnara, ptakkar, jerinj
This series of patches implements the event ML adapter for Marvell's
OCTEON platform.
Srikanth Yalavarthi (4):
event/cnxk: add ML adapter capabilities get
event/cnxk: implement queue pair add and delete
ml/cnxk: add adapter enqueue function
ml/cnxk: add adapter dequeue function
drivers/event/cnxk/cn10k_eventdev.c | 121 +++++++++++++++++++++++
drivers/event/cnxk/cn10k_worker.h | 3 +
drivers/event/cnxk/cnxk_eventdev.h | 4 +
drivers/event/cnxk/meson.build | 2 +-
drivers/ml/cnxk/cn10k_ml_event_dp.h | 18 ++++
drivers/ml/cnxk/cn10k_ml_ops.c | 146 +++++++++++++++++++++++++++-
drivers/ml/cnxk/cn10k_ml_ops.h | 3 +
drivers/ml/cnxk/cnxk_ml_ops.h | 20 ++++
drivers/ml/cnxk/meson.build | 2 +-
drivers/ml/cnxk/version.map | 8 ++
10 files changed, 320 insertions(+), 7 deletions(-)
create mode 100644 drivers/ml/cnxk/cn10k_ml_event_dp.h
create mode 100644 drivers/ml/cnxk/version.map
--
2.42.0
* [PATCH 1/4] event/cnxk: add ML adapter capabilities get
2024-01-07 15:40 [PATCH 0/4] Implementation of CNXK ML event adapter driver Srikanth Yalavarthi
@ 2024-01-07 15:40 ` Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 2/4] event/cnxk: implement queue pair add and delete Srikanth Yalavarthi
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Srikanth Yalavarthi @ 2024-01-07 15:40 UTC (permalink / raw)
To: Pavan Nikhilesh, Shijith Thotton, Srikanth Yalavarthi
Cc: dev, aprabhu, sshankarnara, ptakkar, jerinj
Implemented driver function to get ML adapter capabilities.
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
Depends-on: series-30752 ("Introduce Event ML Adapter")
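For reference, a minimal application-side sketch of consuming this capability.
The rte_event_ml_adapter_caps_get() wrapper, mode names and capability flag are
assumed from the dependent Event ML Adapter series; evdev_id and mldev_id are
placeholder device IDs (on cn10k the call resolves to the new
cn10k_ml_adapter_caps_get() below):

	uint32_t caps = 0;
	enum rte_event_ml_adapter_mode mode = RTE_EVENT_ML_ADAPTER_OP_NEW;

	if (rte_event_ml_adapter_caps_get(evdev_id, mldev_id, &caps) == 0 &&
	    (caps & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) {
		/* The PMD forwards ML ops through an internal event port, so
		 * OP_FORWARD mode needs no software service core for enqueue.
		 */
		mode = RTE_EVENT_ML_ADAPTER_OP_FORWARD;
	}
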
drivers/event/cnxk/cn10k_eventdev.c | 15 +++++++++++++++
drivers/event/cnxk/meson.build | 2 +-
drivers/ml/cnxk/cn10k_ml_ops.h | 2 ++
3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index bb0c9105535..09eff569052 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -6,6 +6,7 @@
#include "cn10k_worker.h"
#include "cn10k_ethdev.h"
#include "cn10k_cryptodev_ops.h"
+#include "cnxk_ml_ops.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
@@ -1020,6 +1021,18 @@ cn10k_crypto_adapter_vec_limits(const struct rte_eventdev *event_dev,
return 0;
}
+static int
+cn10k_ml_adapter_caps_get(const struct rte_eventdev *event_dev, const struct rte_ml_dev *mldev,
+ uint32_t *caps)
+{
+ CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k", EINVAL);
+ CNXK_VALID_DEV_OR_ERR_RET(mldev->device, "ml_cn10k", EINVAL);
+
+ *caps = RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
+
+ return 0;
+}
+
static struct eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -1061,6 +1074,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.crypto_adapter_queue_pair_del = cn10k_crypto_adapter_qp_del,
.crypto_adapter_vector_limits_get = cn10k_crypto_adapter_vec_limits,
+ .ml_adapter_caps_get = cn10k_ml_adapter_caps_get,
+
.xstats_get = cnxk_sso_xstats_get,
.xstats_reset = cnxk_sso_xstats_reset,
.xstats_get_names = cnxk_sso_xstats_get_names,
diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build
index 13281d687f7..e09ad97b660 100644
--- a/drivers/event/cnxk/meson.build
+++ b/drivers/event/cnxk/meson.build
@@ -316,7 +316,7 @@ foreach flag: extra_flags
endforeach
headers = files('rte_pmd_cnxk_eventdev.h')
-deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk']
+deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk', 'ml_cnxk']
require_iova_in_mbuf = false
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index eb3e1c139c7..d225ed2098e 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -10,6 +10,8 @@
#include <roc_api.h>
+#include "cnxk_ml_xstats.h"
+
struct cnxk_ml_dev;
struct cnxk_ml_qp;
struct cnxk_ml_model;
--
2.42.0
* [PATCH 2/4] event/cnxk: implement queue pair add and delete
2024-01-07 15:40 [PATCH 0/4] Implementation of CNXK ML event adapter driver Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 1/4] event/cnxk: add ML adapter capabilities get Srikanth Yalavarthi
@ 2024-01-07 15:40 ` Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 3/4] ml/cnxk: add adapter enqueue function Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 4/4] ml/cnxk: add adapter dequeue function Srikanth Yalavarthi
3 siblings, 0 replies; 6+ messages in thread
From: Srikanth Yalavarthi @ 2024-01-07 15:40 UTC (permalink / raw)
To: Pavan Nikhilesh, Shijith Thotton, Srikanth Yalavarthi
Cc: dev, aprabhu, sshankarnara, ptakkar, jerinj
Added structures for the ML event adapter. Implemented the ML event
adapter queue-pair add and delete functions.
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
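As a usage note, a rough sketch of the adapter setup that exercises these
queue-pair ops. The rte_event_ml_adapter_create() and
rte_event_ml_adapter_queue_pair_add() names and signatures are assumed from
the dependent Event ML Adapter series:

static int
app_ml_adapter_setup(uint8_t adapter_id, uint8_t evdev_id, int16_t mldev_id,
		     struct rte_event_port_conf *port_conf)
{
	int ret;

	ret = rte_event_ml_adapter_create(adapter_id, evdev_id, port_conf,
					  RTE_EVENT_ML_ADAPTER_OP_FORWARD);
	if (ret != 0)
		return ret;

	/* queue_pair_id of -1 adds every queue pair of the ML device; the
	 * event argument is ignored by this driver (internal port), so NULL
	 * is fine. For each queue pair added, the PMD below allocates a
	 * request mempool and grows the SSO XAE count by the mempool size.
	 */
	return rte_event_ml_adapter_queue_pair_add(adapter_id, mldev_id, -1, NULL);
}
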
drivers/event/cnxk/cn10k_eventdev.c | 103 ++++++++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev.h | 4 ++
drivers/ml/cnxk/cnxk_ml_ops.h | 12 ++++
3 files changed, 119 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 09eff569052..201972cec9e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -1033,6 +1033,107 @@ cn10k_ml_adapter_caps_get(const struct rte_eventdev *event_dev, const struct rte
return 0;
}
+static int
+ml_adapter_qp_free(struct cnxk_ml_qp *qp)
+{
+ rte_mempool_free(qp->mla.req_mp);
+ qp->mla.enabled = false;
+
+ return 0;
+}
+
+static int
+ml_adapter_qp_setup(const struct rte_ml_dev *mldev, struct cnxk_ml_qp *qp)
+{
+ char name[RTE_MEMPOOL_NAMESIZE];
+ uint32_t cache_size, nb_req;
+ unsigned int req_size;
+
+ snprintf(name, RTE_MEMPOOL_NAMESIZE, "cnxk_mla_req_%u_%u", mldev->data->dev_id, qp->id);
+ req_size = sizeof(struct cn10k_ml_req);
+ cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE;
+ nb_req = cache_size * rte_lcore_count();
+ qp->mla.req_mp = rte_mempool_create(name, nb_req, req_size, cache_size, 0, NULL, NULL, NULL,
+ NULL, rte_socket_id(), 0);
+ if (qp->mla.req_mp == NULL)
+ return -ENOMEM;
+
+ qp->mla.enabled = true;
+
+ return 0;
+}
+
+static int
+cn10k_ml_adapter_qp_del(const struct rte_eventdev *event_dev, const struct rte_ml_dev *mldev,
+ int32_t queue_pair_id)
+{
+ struct cnxk_ml_qp *qp;
+
+ CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k", EINVAL);
+ CNXK_VALID_DEV_OR_ERR_RET(mldev->device, "ml_cn10k", EINVAL);
+
+ if (queue_pair_id == -1) {
+ uint16_t qp_id;
+
+ for (qp_id = 0; qp_id < mldev->data->nb_queue_pairs; qp_id++) {
+ qp = mldev->data->queue_pairs[qp_id];
+ if (qp->mla.enabled)
+ ml_adapter_qp_free(qp);
+ }
+ } else {
+ qp = mldev->data->queue_pairs[queue_pair_id];
+ if (qp->mla.enabled)
+ ml_adapter_qp_free(qp);
+ }
+
+ return 0;
+}
+
+static int
+cn10k_ml_adapter_qp_add(const struct rte_eventdev *event_dev, const struct rte_ml_dev *mldev,
+ int32_t queue_pair_id, const struct rte_event *event)
+{
+ struct cnxk_sso_evdev *sso_evdev = cnxk_sso_pmd_priv(event_dev);
+ uint32_t adptr_xae_cnt = 0;
+ struct cnxk_ml_qp *qp;
+ int ret = 0;
+
+ PLT_SET_USED(event);
+
+ CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k", EINVAL);
+ CNXK_VALID_DEV_OR_ERR_RET(mldev->device, "ml_cn10k", EINVAL);
+
+ sso_evdev->is_mla_internal_port = 1;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ if (queue_pair_id == -1) {
+ uint16_t qp_id;
+
+ for (qp_id = 0; qp_id < mldev->data->nb_queue_pairs; qp_id++) {
+ qp = mldev->data->queue_pairs[qp_id];
+ ret = ml_adapter_qp_setup(mldev, qp);
+ if (ret != 0) {
+ cn10k_ml_adapter_qp_del(event_dev, mldev, -1);
+ return ret;
+ }
+ adptr_xae_cnt += qp->mla.req_mp->size;
+ }
+ } else {
+ qp = mldev->data->queue_pairs[queue_pair_id];
+ ret = ml_adapter_qp_setup(mldev, qp);
+ if (ret != 0)
+ return ret;
+
+ adptr_xae_cnt = qp->mla.req_mp->size;
+ }
+
+ /* Update ML adapter XAE count */
+ sso_evdev->adptr_xae_cnt += adptr_xae_cnt;
+ cnxk_sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)event_dev);
+
+ return ret;
+}
+
static struct eventdev_ops cn10k_sso_dev_ops = {
.dev_infos_get = cn10k_sso_info_get,
.dev_configure = cn10k_sso_dev_configure,
@@ -1075,6 +1176,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
.crypto_adapter_vector_limits_get = cn10k_crypto_adapter_vec_limits,
.ml_adapter_caps_get = cn10k_ml_adapter_caps_get,
+ .ml_adapter_queue_pair_add = cn10k_ml_adapter_qp_add,
+ .ml_adapter_queue_pair_del = cn10k_ml_adapter_qp_del,
.xstats_get = cnxk_sso_xstats_get,
.xstats_reset = cnxk_sso_xstats_reset,
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index d42d1afa1a1..bc51e952c9a 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -124,6 +124,10 @@ struct cnxk_sso_evdev {
uint32_t gw_mode;
uint16_t stash_cnt;
struct cnxk_sso_stash *stash_parse_data;
+ /* Crypto adapter */
+ uint8_t is_ca_internal_port;
+ /* ML adapter */
+ uint8_t is_mla_internal_port;
} __rte_cache_aligned;
/* Event port a.k.a GWS */
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h
index 7b49793a574..81f91df2a80 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.h
+++ b/drivers/ml/cnxk/cnxk_ml_ops.h
@@ -5,6 +5,7 @@
#ifndef _CNXK_ML_OPS_H_
#define _CNXK_ML_OPS_H_
+#include <rte_mempool.h>
#include <rte_mldev.h>
#include <rte_mldev_core.h>
@@ -56,6 +57,14 @@ struct cnxk_ml_queue {
uint64_t wait_cycles;
};
+struct cnxk_ml_adapter_info {
+ /**< Set if queue pair is added to ML adapter */
+ bool enabled;
+
+ /**< ML in-flight request mempool */
+ struct rte_mempool *req_mp;
+};
+
/* Queue-pair structure */
struct cnxk_ml_qp {
/* ID */
@@ -69,6 +78,9 @@ struct cnxk_ml_qp {
/* Statistics per queue-pair */
struct rte_ml_dev_stats stats;
+
+ /**< ML adapter related info */
+ struct cnxk_ml_adapter_info mla;
};
extern struct rte_ml_dev_ops cnxk_ml_ops;
--
2.42.0
* [PATCH 3/4] ml/cnxk: add adapter enqueue function
2024-01-07 15:40 [PATCH 0/4] Implementation of CNXK ML event adapter driver Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 1/4] event/cnxk: add ML adapter capabilities get Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 2/4] event/cnxk: implement queue pair add and delete Srikanth Yalavarthi
@ 2024-01-07 15:40 ` Srikanth Yalavarthi
2024-01-07 15:40 ` [PATCH 4/4] ml/cnxk: add adapter dequeue function Srikanth Yalavarthi
3 siblings, 0 replies; 6+ messages in thread
From: Srikanth Yalavarthi @ 2024-01-07 15:40 UTC (permalink / raw)
To: Pavan Nikhilesh, Shijith Thotton, Srikanth Yalavarthi
Cc: dev, aprabhu, sshankarnara, ptakkar, jerinj
Implemented the ML adapter enqueue function. Renamed the internal
fast-path JD preparation function for poll mode and added a JD
preparation function for event mode. Updated meson build
dependencies for the ml/cnxk driver.
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
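As a usage note, a rough sketch of the application side of this enqueue path
in OP_FORWARD mode. The union rte_event_ml_metadata layout and the
rte_event_ml_adapter_enqueue() wrapper are assumed from the dependent Event ML
Adapter series; op is a fully prepared rte_ml_op:

static uint16_t
app_ml_op_forward(uint8_t evdev_id, uint8_t port_id, int16_t mldev_id,
		  uint16_t qp_id, uint8_t resp_queue_id, struct rte_ml_op *op)
{
	union rte_event_ml_metadata *md;
	struct rte_event ev;

	/* Request/response metadata read back by cn10k_ml_meta_info_extract(). */
	md = (union rte_event_ml_metadata *)((uint8_t *)op + op->private_data_offset);
	md->request_info.mldev_id = mldev_id;
	md->request_info.queue_pair_id = qp_id;
	md->response_info.event = 0;
	md->response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
	md->response_info.queue_id = resp_queue_id;
	md->response_info.flow_id = 0;

	ev.event = 0;
	ev.event_ptr = op;
	ev.event_type = RTE_EVENT_TYPE_MLDEV;
	ev.op = RTE_EVENT_OP_FORWARD;

	/* Lands in cn10k_ml_adapter_enqueue() via event_dev->mla_enqueue. */
	return rte_event_ml_adapter_enqueue(evdev_id, port_id, &ev, 1);
}
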
drivers/event/cnxk/cn10k_eventdev.c | 3 +
drivers/ml/cnxk/cn10k_ml_event_dp.h | 16 ++++
drivers/ml/cnxk/cn10k_ml_ops.c | 129 ++++++++++++++++++++++++++--
drivers/ml/cnxk/cn10k_ml_ops.h | 1 +
drivers/ml/cnxk/cnxk_ml_ops.h | 8 ++
drivers/ml/cnxk/meson.build | 2 +-
drivers/ml/cnxk/version.map | 7 ++
7 files changed, 160 insertions(+), 6 deletions(-)
create mode 100644 drivers/ml/cnxk/cn10k_ml_event_dp.h
create mode 100644 drivers/ml/cnxk/version.map
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 201972cec9e..3b5dce23fe9 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -6,6 +6,7 @@
#include "cn10k_worker.h"
#include "cn10k_ethdev.h"
#include "cn10k_cryptodev_ops.h"
+#include "cn10k_ml_event_dp.h"
#include "cnxk_ml_ops.h"
#include "cnxk_eventdev.h"
#include "cnxk_worker.h"
@@ -478,6 +479,8 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
else
event_dev->ca_enqueue = cn10k_cpt_sg_ver1_crypto_adapter_enqueue;
+ event_dev->mla_enqueue = cn10k_ml_adapter_enqueue;
+
if (dev->tx_offloads & NIX_TX_MULTI_SEG_F)
CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq_seg);
else
diff --git a/drivers/ml/cnxk/cn10k_ml_event_dp.h b/drivers/ml/cnxk/cn10k_ml_event_dp.h
new file mode 100644
index 00000000000..bf7fc57bceb
--- /dev/null
+++ b/drivers/ml/cnxk/cn10k_ml_event_dp.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _CN10K_ML_EVENT_DP_H_
+#define _CN10K_ML_EVENT_DP_H_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+
+__rte_internal
+__rte_hot uint16_t cn10k_ml_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events);
+
+#endif /* _CN10K_ML_EVENT_DP_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 834e55e88e9..4bc17eaa8c4 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -2,11 +2,13 @@
* Copyright (c) 2022 Marvell.
*/
+#include <rte_event_ml_adapter.h>
#include <rte_mldev.h>
#include <rte_mldev_pmd.h>
#include <mldev_utils.h>
+#include "cn10k_ml_event_dp.h"
#include "cnxk_ml_dev.h"
#include "cnxk_ml_model.h"
#include "cnxk_ml_ops.h"
@@ -144,8 +146,8 @@ cn10k_ml_prep_sp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_l
}
static __rte_always_inline void
-cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_req *req,
- uint16_t index, void *input, void *output, uint16_t nb_batches)
+cn10k_ml_prep_fp_job_descriptor_poll(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_req *req,
+ uint16_t index, void *input, void *output, uint16_t nb_batches)
{
struct cn10k_ml_dev *cn10k_mldev;
@@ -166,6 +168,33 @@ cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_r
req->cn10k_req.jd.model_run.num_batches = nb_batches;
}
+static __rte_always_inline void
+cn10k_ml_prep_fp_job_descriptor_event(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_req *req,
+				       uint16_t index, void *input, void *output,
+				       uint16_t nb_batches, uint64_t *compl_W0)
+{
+ struct cn10k_ml_dev *cn10k_mldev;
+
+ cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+
+ req->cn10k_req.jd.hdr.jce.w0.u64 = *compl_W0;
+ req->cn10k_req.jd.hdr.jce.w1.s.wqp = PLT_U64_CAST(req);
+ req->cn10k_req.jd.hdr.model_id = index;
+ req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_MODEL_RUN;
+ req->cn10k_req.jd.hdr.fp_flags = ML_FLAGS_SSO_COMPL;
+ req->cn10k_req.jd.hdr.sp_flags = 0x0;
+ req->cn10k_req.jd.hdr.result =
+ roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result);
+ req->cn10k_req.jd.model_run.input_ddr_addr =
+ PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, input));
+ req->cn10k_req.jd.model_run.output_ddr_addr =
+ PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, output));
+ req->cn10k_req.jd.model_run.num_batches = nb_batches;
+}
+
static void
cn10k_ml_xstats_layer_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id,
uint16_t layer_id)
@@ -1305,13 +1334,16 @@ cn10k_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, ui
model = cnxk_mldev->mldev->data->models[op->model_id];
model->set_poll_addr(req);
- cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, model->layer[layer_id].index,
- op->input[0]->addr, op->output[0]->addr, op->nb_batches);
+ cn10k_ml_prep_fp_job_descriptor_poll(cnxk_mldev, req, model->layer[layer_id].index,
+ op->input[0]->addr, op->output[0]->addr,
+ op->nb_batches);
memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result));
error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code;
error_code->s.etype = ML_CNXK_ETYPE_UNKNOWN;
req->cn10k_req.result.user_ptr = op->user_ptr;
+ req->cnxk_mldev = cnxk_mldev;
+ req->qp_id = qp->id;
cnxk_ml_set_poll_ptr(req);
if (unlikely(!cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd)))
@@ -1383,7 +1415,7 @@ cn10k_ml_inference_sync(void *device, uint16_t index, void *input, void *output,
op.impl_opaque = 0;
cn10k_ml_set_poll_addr(req);
- cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, index, input, output, nb_batches);
+ cn10k_ml_prep_fp_job_descriptor_poll(cnxk_mldev, req, index, input, output, nb_batches);
memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result));
error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code;
@@ -1541,3 +1573,90 @@ cn10k_ml_free(const char *name)
return plt_memzone_free(mz);
}
+
+static int
+cn10k_ml_meta_info_extract(struct rte_ml_op *op, struct cnxk_ml_qp **qp, uint64_t *W0,
+ struct rte_ml_dev **dev)
+{
+ union rte_event_ml_metadata *eml_mdata;
+ struct rte_event *rsp_info;
+ union ml_jce_w0 jce_w0;
+ uint8_t mldev_id;
+ uint16_t qp_id;
+
+ eml_mdata = (union rte_event_ml_metadata *)((uint8_t *)op + op->private_data_offset);
+ rsp_info = &eml_mdata->response_info;
+ mldev_id = eml_mdata->request_info.mldev_id;
+ qp_id = eml_mdata->request_info.queue_pair_id;
+
+ *dev = rte_ml_dev_pmd_get_dev(mldev_id);
+ *qp = (*dev)->data->queue_pairs[qp_id];
+
+ jce_w0.s.ttype = rsp_info->sched_type;
+ jce_w0.s.pf_func = roc_ml_sso_pf_func_get();
+ jce_w0.s.ggrp = rsp_info->queue_id;
+ jce_w0.s.tag =
+ (RTE_EVENT_TYPE_MLDEV << 28) | (rsp_info->sub_event_type << 20) | rsp_info->flow_id;
+ *W0 = jce_w0.u64;
+
+ return 0;
+}
+
+__rte_hot uint16_t
+cn10k_ml_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
+{
+ union cn10k_ml_error_code *error_code;
+ struct cn10k_ml_dev *cn10k_mldev;
+ struct cnxk_ml_dev *cnxk_mldev;
+ struct cnxk_ml_model *model;
+ struct cnxk_ml_req *req;
+ struct cnxk_ml_qp *qp;
+
+ struct rte_ml_dev *dev;
+ struct rte_ml_op *op;
+
+ uint16_t count;
+ uint64_t W0;
+ int ret, i;
+
+ PLT_SET_USED(ws);
+
+ count = 0;
+ for (i = 0; i < nb_events; i++) {
+ op = ev[i].event_ptr;
+ ret = cn10k_ml_meta_info_extract(op, &qp, &W0, &dev);
+ if (ret) {
+ rte_errno = EINVAL;
+ return count;
+ }
+
+ cnxk_mldev = dev->data->dev_private;
+ cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+ if (rte_mempool_get(qp->mla.req_mp, (void **)(&req)) != 0) {
+ rte_errno = ENOMEM;
+ return count;
+ }
+ req->cn10k_req.jcmd.w1.s.jobptr = PLT_U64_CAST(&req->cn10k_req.jd);
+
+ model = cnxk_mldev->mldev->data->models[op->model_id];
+ cn10k_ml_prep_fp_job_descriptor_event(cnxk_mldev, req, model->layer[0].index,
+ op->input[0]->addr, op->output[0]->addr,
+ op->nb_batches, &W0);
+ memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result));
+ error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code;
+ error_code->s.etype = ML_CNXK_ETYPE_UNKNOWN;
+ req->cn10k_req.result.user_ptr = op->user_ptr;
+ req->cnxk_mldev = cnxk_mldev;
+ req->qp_id = qp->id;
+ rte_wmb();
+
+ if (!cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd)) {
+ rte_mempool_put(qp->mla.req_mp, req);
+ break;
+ }
+
+ count++;
+ }
+
+ return count;
+}
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index d225ed2098e..bf3a9fdc26c 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -5,6 +5,7 @@
#ifndef _CN10K_ML_OPS_H_
#define _CN10K_ML_OPS_H_
+#include <rte_eventdev.h>
#include <rte_mldev.h>
#include <rte_mldev_pmd.h>
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h
index 81f91df2a80..745701185ea 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.h
+++ b/drivers/ml/cnxk/cnxk_ml_ops.h
@@ -19,6 +19,8 @@
#include "mvtvm_ml_stubs.h"
#endif
+struct cnxk_ml_dev;
+
/* Request structure */
struct cnxk_ml_req {
/* Device specific request */
@@ -40,6 +42,12 @@ struct cnxk_ml_req {
/* Op */
struct rte_ml_op *op;
+
+ /* Device handle */
+ struct cnxk_ml_dev *cnxk_mldev;
+
+ /* Queue-pair ID */
+ uint16_t qp_id;
} __rte_aligned(ROC_ALIGN);
/* Request queue */
diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build
index 0680a0faa5c..a37250babf4 100644
--- a/drivers/ml/cnxk/meson.build
+++ b/drivers/ml/cnxk/meson.build
@@ -55,7 +55,7 @@ sources = files(
'cnxk_ml_utils.c',
)
-deps += ['mldev', 'common_cnxk', 'kvargs', 'hash']
+deps += ['mldev', 'common_cnxk', 'kvargs', 'hash', 'eventdev']
if enable_mvtvm
diff --git a/drivers/ml/cnxk/version.map b/drivers/ml/cnxk/version.map
new file mode 100644
index 00000000000..c2cacaf8c65
--- /dev/null
+++ b/drivers/ml/cnxk/version.map
@@ -0,0 +1,7 @@
+INTERNAL {
+ global:
+
+ cn10k_ml_adapter_enqueue;
+
+ local: *;
+};
--
2.42.0
* [PATCH 4/4] ml/cnxk: add adapter dequeue function
2024-01-07 15:40 [PATCH 0/4] Implementation of CNXK ML event adapter driver Srikanth Yalavarthi
` (2 preceding siblings ...)
2024-01-07 15:40 ` [PATCH 3/4] ml/cnxk: add adapter enqueue function Srikanth Yalavarthi
@ 2024-01-07 15:40 ` Srikanth Yalavarthi
2024-02-02 9:00 ` Jerin Jacob
3 siblings, 1 reply; 6+ messages in thread
From: Srikanth Yalavarthi @ 2024-01-07 15:40 UTC (permalink / raw)
To: Pavan Nikhilesh, Shijith Thotton, Srikanth Yalavarthi
Cc: dev, aprabhu, sshankarnara, ptakkar, jerinj
Implemented ML adapter dequeue function.
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
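As a usage note, a rough sketch of the worker-side handling this dequeue path
enables. The rte_ml_op field names (status, mempool) are from rte_mldev;
consume_output() is a placeholder application callback:

static inline void
app_worker_handle_event(uint8_t evdev_id, uint8_t port_id)
{
	struct rte_ml_op *op;
	struct rte_event ev;

	if (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0) == 0)
		return;

	if (ev.event_type == RTE_EVENT_TYPE_MLDEV) {
		/* cn10k_ml_adapter_dequeue() has already updated the result
		 * and converted the completion back into the original op.
		 */
		op = ev.event_ptr;
		if (op->status == RTE_ML_OP_STATUS_SUCCESS)
			consume_output(op);
		rte_mempool_put(op->mempool, op);
	}
}
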
drivers/event/cnxk/cn10k_worker.h | 3 +++
drivers/ml/cnxk/cn10k_ml_event_dp.h | 2 ++
drivers/ml/cnxk/cn10k_ml_ops.c | 17 +++++++++++++++++
drivers/ml/cnxk/version.map | 1 +
4 files changed, 23 insertions(+)
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index 8aa916fa129..1a0ca7f9493 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -7,6 +7,7 @@
#include <rte_eventdev.h>
#include "cn10k_cryptodev_event_dp.h"
+#include "cn10k_ml_event_dp.h"
#include "cn10k_rx.h"
#include "cnxk_worker.h"
#include "cn10k_eventdev.h"
@@ -236,6 +237,8 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64,
/* Mark vector mempool object as get */
RTE_MEMPOOL_CHECK_COOKIES(rte_mempool_from_obj((void *)u64[1]),
(void **)&u64[1], 1, 1);
+ } else if (CNXK_EVENT_TYPE_FROM_TAG(u64[0]) == RTE_EVENT_TYPE_MLDEV) {
+ u64[1] = cn10k_ml_adapter_dequeue(u64[1]);
}
}
diff --git a/drivers/ml/cnxk/cn10k_ml_event_dp.h b/drivers/ml/cnxk/cn10k_ml_event_dp.h
index bf7fc57bceb..0ff92091296 100644
--- a/drivers/ml/cnxk/cn10k_ml_event_dp.h
+++ b/drivers/ml/cnxk/cn10k_ml_event_dp.h
@@ -12,5 +12,7 @@
__rte_internal
__rte_hot uint16_t cn10k_ml_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events);
+__rte_internal
+__rte_hot uintptr_t cn10k_ml_adapter_dequeue(uintptr_t get_work1);
#endif /* _CN10K_ML_EVENT_DP_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 4bc17eaa8c4..c33a7a85987 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -1660,3 +1660,20 @@ cn10k_ml_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events)
return count;
}
+
+__rte_hot uintptr_t
+cn10k_ml_adapter_dequeue(uintptr_t get_work1)
+{
+	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_req *req;
+	struct cnxk_ml_qp *qp;
+	struct rte_ml_op *op;
+
+	req = (struct cnxk_ml_req *)(get_work1);
+	cnxk_mldev = req->cnxk_mldev;
+	qp = cnxk_mldev->mldev->data->queue_pairs[req->qp_id];
+
+	cn10k_ml_result_update(cnxk_mldev, req->qp_id, req);
+
+	/* Read op before the request is returned to the mempool. */
+	op = req->op;
+	rte_mempool_put(qp->mla.req_mp, req);
+
+	return (uintptr_t)op;
+}
diff --git a/drivers/ml/cnxk/version.map b/drivers/ml/cnxk/version.map
index c2cacaf8c65..97c2c149998 100644
--- a/drivers/ml/cnxk/version.map
+++ b/drivers/ml/cnxk/version.map
@@ -2,6 +2,7 @@ INTERNAL {
global:
cn10k_ml_adapter_enqueue;
+ cn10k_ml_adapter_dequeue;
local: *;
};
--
2.42.0
* Re: [PATCH 4/4] ml/cnxk: add adapter dequeue function
2024-01-07 15:40 ` [PATCH 4/4] ml/cnxk: add adapter dequeue function Srikanth Yalavarthi
@ 2024-02-02 9:00 ` Jerin Jacob
0 siblings, 0 replies; 6+ messages in thread
From: Jerin Jacob @ 2024-02-02 9:00 UTC (permalink / raw)
To: Srikanth Yalavarthi
Cc: Pavan Nikhilesh, Shijith Thotton, dev, aprabhu, sshankarnara,
ptakkar, jerinj
On Sun, Jan 7, 2024 at 11:39 PM Srikanth Yalavarthi
<syalavarthi@marvell.com> wrote:
>
> Implemented ML adapter dequeue function.
>
> Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
Update the release notes for this new feature in the PMD section.
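For example, an entry along these lines in the PMD section of the release
notes would cover it (exact file and wording are a suggestion only):

* **Updated Marvell cnxk ML driver.**

  * Added support for the event ML adapter in OP_FORWARD mode on CN10K.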