From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v5 29/34] ml/cnxk: enable reporting model runtime as xstats
Date: Tue, 17 Oct 2023 23:47:57 -0700
Message-ID: <20231018064806.24145-30-syalavarthi@marvell.com>
In-Reply-To: <20231018064806.24145-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com>
 <20231018064806.24145-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Added model xstats entries to compute runtime latency. Allocated
internal resources for TVM model xstats.
Signed-off-by: Srikanth Yalavarthi
---
 drivers/ml/cnxk/cn10k_ml_ops.c   |   9 +++
 drivers/ml/cnxk/cn10k_ml_ops.h   |   2 +
 drivers/ml/cnxk/cnxk_ml_ops.c    | 131 +++++++++++++++++++++++++++----
 drivers/ml/cnxk/cnxk_ml_ops.h    |   1 +
 drivers/ml/cnxk/cnxk_ml_xstats.h |   7 ++
 drivers/ml/cnxk/mvtvm_ml_model.h |  24 ++++++
 drivers/ml/cnxk/mvtvm_ml_ops.c   |  96 +++++++++++++++++++++-
 drivers/ml/cnxk/mvtvm_ml_ops.h   |   8 ++
 drivers/ml/cnxk/mvtvm_ml_stubs.c |  23 ++++++
 drivers/ml/cnxk/mvtvm_ml_stubs.h |   6 ++
 10 files changed, 289 insertions(+), 18 deletions(-)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 2d308802cf..0c67ce7b40 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -197,6 +197,15 @@ cn10k_ml_xstats_layer_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model
 	}
 }
 
+void
+cn10k_ml_xstat_model_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+			      uint16_t stat_id, uint16_t entry, char *suffix)
+{
+	snprintf(cnxk_mldev->xstats.entries[stat_id].map.name,
+		 sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s",
+		 model->glow.metadata.model.name, model_xstats[entry].name, suffix);
+}
+
 #define ML_AVG_FOREACH_QP(cnxk_mldev, layer, qp_id, str, value, count) \
 	do { \
 		value = 0; \
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index 3d18303ed3..045e2e6cd2 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -331,6 +331,8 @@ int cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name
 int cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name);
 
 /* xstats ops */
+void cn10k_ml_xstat_model_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+				   uint16_t stat_id, uint16_t entry, char *suffix);
 uint64_t cn10k_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer,
 				  enum cnxk_ml_xstats_type type);
 
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 66cda513db..fd2c46ac1f 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -138,7 +138,8 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev)
 
 	/* Allocate memory for xstats entries. Don't allocate during reconfigure */
 	nb_stats = RTE_DIM(device_xstats) +
-		   RTE_DIM(layer_xstats) * ML_CNXK_MAX_MODELS * ML_CNXK_MODEL_MAX_LAYERS;
+		   RTE_DIM(layer_xstats) * ML_CNXK_MAX_MODELS * ML_CNXK_MODEL_MAX_LAYERS +
+		   RTE_DIM(model_xstats) * ML_CNXK_MAX_MODELS;
 	if (cnxk_mldev->xstats.entries == NULL)
 		cnxk_mldev->xstats.entries = rte_zmalloc(
 			"cnxk_ml_xstats", sizeof(struct cnxk_ml_xstats_entry) * nb_stats,
@@ -169,6 +170,25 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev)
 
 	for (model = 0; model < ML_CNXK_MAX_MODELS; model++) {
 		cnxk_mldev->xstats.offset_for_model[model] = stat_id;
+		for (i = 0; i < RTE_DIM(model_xstats); i++) {
+			cnxk_mldev->xstats.entries[stat_id].map.id = stat_id;
+			cnxk_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL;
+			cnxk_mldev->xstats.entries[stat_id].group = CNXK_ML_XSTATS_GROUP_MODEL;
+			cnxk_mldev->xstats.entries[stat_id].type = model_xstats[i].type;
+			cnxk_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_MODEL;
+			cnxk_mldev->xstats.entries[stat_id].obj_idx = model;
+			cnxk_mldev->xstats.entries[stat_id].layer_id = -1;
+			cnxk_mldev->xstats.entries[stat_id].reset_allowed =
+				model_xstats[i].reset_allowed;
+
+			/* Name of xstat is updated during model load */
+			snprintf(cnxk_mldev->xstats.entries[stat_id].map.name,
+				 sizeof(cnxk_mldev->xstats.entries[stat_id].map.name),
+				 "Model-%u-%s", model, model_xstats[i].name);
+
+			stat_id++;
+		}
+
 		for (layer = 0; layer < ML_CNXK_MODEL_MAX_LAYERS; layer++) {
 			cnxk_mldev->xstats.offset_for_layer[model][layer] = stat_id;
 
@@ -195,7 +215,8 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev)
 			cnxk_mldev->xstats.count_per_layer[model][layer] = RTE_DIM(layer_xstats);
 		}
 
-		cnxk_mldev->xstats.count_per_model[model] = RTE_DIM(layer_xstats);
+		cnxk_mldev->xstats.count_per_model[model] =
+			RTE_DIM(layer_xstats) + ML_CNXK_MODEL_MAX_LAYERS * RTE_DIM(model_xstats);
 	}
 
 	cnxk_mldev->xstats.count_mode_model = stat_id - cnxk_mldev->xstats.count_mode_device;
@@ -204,6 +225,36 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev)
 	return 0;
 }
 
+void
+cnxk_ml_xstats_model_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id)
+{
+	struct cnxk_ml_model *model;
+	uint16_t rclk_freq;
+	uint16_t sclk_freq;
+	uint16_t stat_id;
+	char suffix[8];
+	uint16_t i;
+
+	model = cnxk_mldev->mldev->data->models[model_id];
+	stat_id = cnxk_mldev->xstats.offset_for_model[model_id];
+
+	roc_clk_freq_get(&rclk_freq, &sclk_freq);
+	if (sclk_freq == 0)
+		strcpy(suffix, "cycles");
+	else
+		strcpy(suffix, "ns");
+
+	/* Update xstat name based on layer name and sclk availability */
+	for (i = 0; i < RTE_DIM(model_xstats); i++) {
+		if (model->type == ML_CNXK_MODEL_TYPE_GLOW)
+			cn10k_ml_xstat_model_name_set(cnxk_mldev, model, stat_id, i, suffix);
+		else
+			mvtvm_ml_model_xstat_name_set(cnxk_mldev, model, stat_id, i, suffix);
+
+		stat_id++;
+	}
+}
+
 static void
 cnxk_ml_xstats_uninit(struct cnxk_ml_dev *cnxk_mldev)
 {
@@ -247,13 +298,22 @@ cnxk_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, int32_
 	if (model == NULL)
 		return 0;
 
-	if (layer_id >= 0)
+	if (layer_id >= 0) {
 		layer = &model->layer[layer_id];
-	else
-		return 0;
+		goto layer_xstats;
+	} else {
+		layer = NULL;
+		goto model_xstats;
+	}
 
+layer_xstats:
 	value = cn10k_ml_model_xstat_get(cnxk_mldev, layer, type);
+	goto exit_xstats;
 
+model_xstats:
+	value = mvtvm_ml_model_xstat_get(cnxk_mldev, model, type);
+
+exit_xstats:
 	roc_clk_freq_get(&rclk_freq, &sclk_freq);
 	if (sclk_freq != 0) /* return in ns */
 		value = (value * 1000ULL) / sclk_freq;
@@ -836,8 +896,9 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode
 {
 	struct cnxk_ml_xstats_entry *xs;
 	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
 	uint32_t xstats_mode_count;
-	uint16_t layer_id = 0;
+	uint16_t layer_id;
 	uint32_t idx = 0;
 	uint32_t i;
 
@@ -854,7 +915,17 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode
 	case RTE_ML_DEV_XSTATS_MODEL:
 		if (model_id >= ML_CNXK_MAX_MODELS)
 			break;
-		xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id];
+
+		model = cnxk_mldev->mldev->data->models[model_id];
+		for (layer_id = 0; layer_id < model->nb_layers; layer_id++) {
+			if (model->layer[layer_id].type == ML_CNXK_LAYER_TYPE_MRVL)
+				xstats_mode_count +=
+					cnxk_mldev->xstats.count_per_layer[model_id][layer_id];
+		}
+
+		if ((model->type == ML_CNXK_MODEL_TYPE_TVM) &&
+		    (model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL))
+			xstats_mode_count += RTE_DIM(model_xstats);
 		break;
 	default:
 		return -EINVAL;
@@ -868,9 +939,20 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode
 		if (xs->mode != mode)
 			continue;
 
-		if (mode == RTE_ML_DEV_XSTATS_MODEL &&
-		    (model_id != xs->obj_idx || layer_id != xs->layer_id))
-			continue;
+		if (mode == RTE_ML_DEV_XSTATS_MODEL) {
+			if (model_id != xs->obj_idx)
+				continue;
+
+			model = cnxk_mldev->mldev->data->models[model_id];
+			if ((model->type == ML_CNXK_MODEL_TYPE_GLOW ||
+			     model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) &&
+			    xs->group == CNXK_ML_XSTATS_GROUP_MODEL)
+				continue;
+
+			if (model->type == ML_CNXK_MODEL_TYPE_TVM &&
+			    model->layer[xs->layer_id].type == ML_CNXK_LAYER_TYPE_LLVM)
+				continue;
+		}
 
 		strncpy(xstats_map[idx].name, xs->map.name, RTE_ML_STR_MAX);
 		xstats_map[idx].id = xs->map.id;
@@ -931,9 +1013,10 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode,
 {
 	struct cnxk_ml_xstats_entry *xs;
 	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
 	uint32_t xstats_mode_count;
-	uint16_t layer_id = 0;
 	cnxk_ml_xstats_fn fn;
+	uint16_t layer_id;
 	uint64_t val;
 	uint32_t idx;
 	uint32_t i;
@@ -951,7 +1034,14 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode,
 	case RTE_ML_DEV_XSTATS_MODEL:
 		if (model_id >= ML_CNXK_MAX_MODELS)
 			return -EINVAL;
-		xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id];
+
+		model = cnxk_mldev->mldev->data->models[model_id];
+		for (layer_id = 0; layer_id < model->nb_layers; layer_id++)
+			xstats_mode_count += cnxk_mldev->xstats.count_per_layer[model_id][layer_id];
+
+		if ((model->type == ML_CNXK_MODEL_TYPE_TVM) &&
+		    (model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL))
+			xstats_mode_count += RTE_DIM(model_xstats);
 		break;
 	default:
 		return -EINVAL;
@@ -963,11 +1053,18 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode,
 		if (stat_ids[i] > cnxk_mldev->xstats.count || xs->mode != mode)
 			continue;
 
-		if (mode == RTE_ML_DEV_XSTATS_MODEL &&
-		    (model_id != xs->obj_idx || layer_id != xs->layer_id)) {
-			plt_err("Invalid stats_id[%d] = %d for model_id = %d\n", i, stat_ids[i],
-				model_id);
-			return -EINVAL;
+		if (mode == RTE_ML_DEV_XSTATS_MODEL) {
+			if (model_id != xs->obj_idx)
+				continue;
+
+			model = cnxk_mldev->mldev->data->models[xs->obj_idx];
+			if ((model->type == ML_CNXK_MODEL_TYPE_GLOW ||
+			     model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) &&
+			    xs->group == CNXK_ML_XSTATS_GROUP_MODEL)
+				continue;
+
+			if (xs->layer_id == -1 && xs->group == CNXK_ML_XSTATS_GROUP_LAYER)
+				continue;
 		}
 
 		switch (xs->fn_id) {
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h
index b22a2b0d95..ab32676b3e 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.h
+++ b/drivers/ml/cnxk/cnxk_ml_ops.h
@@ -70,6 +70,7 @@ extern struct rte_ml_dev_ops cnxk_ml_ops;
 
 int cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id);
 int cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id);
+void cnxk_ml_xstats_model_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id);
 
 __rte_hot uint16_t cnxk_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id,
 					 struct rte_ml_op **ops, uint16_t nb_ops);
diff --git a/drivers/ml/cnxk/cnxk_ml_xstats.h b/drivers/ml/cnxk/cnxk_ml_xstats.h
index 5e02bb876c..a2c9adfe4a 100644
--- a/drivers/ml/cnxk/cnxk_ml_xstats.h
+++ b/drivers/ml/cnxk/cnxk_ml_xstats.h
@@ -142,4 +142,11 @@ static const struct cnxk_ml_xstat_info layer_xstats[] = {
 	{"Min-FW-Latency", min_fw_latency, 1}, {"Max-FW-Latency", max_fw_latency, 1},
 };
 
+/* Model xstats */
+static const struct cnxk_ml_xstat_info model_xstats[] = {
+	{"Avg-RT-Latency", avg_rt_latency, 1},
+	{"Min-RT-Latency", min_rt_latency, 1},
+	{"Max-RT-Latency", max_rt_latency, 1},
+};
+
 #endif /* _CNXK_ML_XSTATS_H_ */
diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h
index 900ba44fa0..66c3af18e1 100644
--- a/drivers/ml/cnxk/mvtvm_ml_model.h
+++ b/drivers/ml/cnxk/mvtvm_ml_model.h
@@ -33,6 +33,27 @@ struct mvtvm_ml_model_object {
 	int64_t size;
 };
 
+/* Model fast-path stats */
+struct mvtvm_ml_model_xstats {
+	/* Total TVM runtime latency, sum of all inferences */
+	uint64_t tvm_rt_latency_tot;
+
+	/* TVM runtime latency */
+	uint64_t tvm_rt_latency;
+
+	/* Minimum TVM runtime latency */
+	uint64_t tvm_rt_latency_min;
+
+	/* Maximum TVM runtime latency */
+	uint64_t tvm_rt_latency_max;
+
+	/* Total jobs dequeued */
+	uint64_t dequeued_count;
+
+	/* Hardware stats reset index */
+	uint64_t tvm_rt_reset_count;
+};
+
 struct mvtvm_ml_model_data {
 	/* Model metadata */
 	struct tvmdp_model_metadata metadata;
@@ -45,6 +66,9 @@ struct mvtvm_ml_model_data {
 
 	/* Model I/O info */
 	struct cnxk_ml_io_info info;
+
+	/* Stats for burst ops */
+	struct mvtvm_ml_model_xstats *burst_xstats;
 };
 
 enum cnxk_ml_model_type mvtvm_ml_model_type_get(struct rte_ml_model_params *params);
diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c
index f13ba76207..832837034b 100644
--- a/drivers/ml/cnxk/mvtvm_ml_ops.c
+++ b/drivers/ml/cnxk/mvtvm_ml_ops.c
@@ -10,10 +10,83 @@
 #include "cnxk_ml_dev.h"
 #include "cnxk_ml_model.h"
 #include "cnxk_ml_ops.h"
+#include "cnxk_ml_xstats.h"
 
 /* ML model macros */
 #define MVTVM_ML_MODEL_MEMZONE_NAME "ml_mvtvm_model_mz"
 
+void
+mvtvm_ml_model_xstat_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+			      uint16_t stat_id, uint16_t entry, char *suffix)
+{
+	snprintf(cnxk_mldev->xstats.entries[stat_id].map.name,
+		 sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s",
+		 model->mvtvm.metadata.model.name, model_xstats[entry].name, suffix);
+}
+
+#define ML_AVG_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \
+	do { \
+		value = 0; \
+		for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \
+			value += model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_tot; \
+			count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \
+				 model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \
+		} \
+		if (count != 0) \
+			value = value / count; \
+	} while (0)
+
+#define ML_MIN_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \
+	do { \
+		value = UINT64_MAX; \
+		for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \
+			value = PLT_MIN(value, \
+					model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_min); \
+			count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \
+				 model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \
+		} \
+		if (count == 0) \
+			value = 0; \
+	} while (0)
+
+#define ML_MAX_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \
+	do { \
+		value = 0; \
+		for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \
+			value = PLT_MAX(value, \
+					model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_max); \
+			count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \
+				 model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \
+		} \
+		if (count == 0) \
+			value = 0; \
+	} while (0)
+
+uint64_t
+mvtvm_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+			 enum cnxk_ml_xstats_type type)
+{
+	uint64_t count = 0;
+	uint64_t value = 0;
+	uint32_t qp_id;
+
+	switch (type) {
+	case avg_rt_latency:
+		ML_AVG_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count);
+		break;
+	case min_rt_latency:
+		ML_MIN_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count);
+		break;
+	case max_rt_latency:
+		ML_MAX_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count);
+		break;
+	default:
+		value = 0;
+	}
+
+	return value;
+}
+
 int
 mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf)
 {
@@ -53,6 +126,7 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *
 	char str[RTE_MEMZONE_NAMESIZE];
 	const struct plt_memzone *mz;
 	size_t model_object_size = 0;
+	size_t model_xstats_size = 0;
 	uint16_t nb_mrvl_layers;
 	uint16_t nb_llvm_layers;
 	uint8_t layer_id = 0;
@@ -68,7 +142,11 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *
 	model_object_size = RTE_ALIGN_CEIL(object[0].size, RTE_CACHE_LINE_MIN_SIZE) +
 			    RTE_ALIGN_CEIL(object[1].size, RTE_CACHE_LINE_MIN_SIZE) +
 			    RTE_ALIGN_CEIL(object[2].size, RTE_CACHE_LINE_MIN_SIZE);
-	mz_size += model_object_size;
+
+	model_xstats_size =
+		cnxk_mldev->mldev->data->nb_queue_pairs * sizeof(struct mvtvm_ml_model_xstats);
+
+	mz_size += model_object_size + model_xstats_size;
 
 	/* Allocate memzone for model object */
 	snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", MVTVM_ML_MODEL_MEMZONE_NAME, model->model_id);
@@ -181,6 +259,22 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *
 	/* Set model info */
 	mvtvm_ml_model_info_set(cnxk_mldev, model);
 
+	/* Update model xstats name */
+	cnxk_ml_xstats_model_name_update(cnxk_mldev, model->model_id);
+
+	model->mvtvm.burst_xstats = RTE_PTR_ADD(
+		model->mvtvm.object.params.addr,
+		RTE_ALIGN_CEIL(model->mvtvm.object.params.size, RTE_CACHE_LINE_MIN_SIZE));
+
+	for (int qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) {
+		model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_tot = 0;
+		model->mvtvm.burst_xstats[qp_id].tvm_rt_latency = 0;
+		model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_min = UINT64_MAX;
+		model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_max = 0;
+		model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count = 0;
+		model->mvtvm.burst_xstats[qp_id].dequeued_count = 0;
+	}
+
 	return 0;
 
 error:
diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h
index 55459f9f7f..22e0340146 100644
--- a/drivers/ml/cnxk/mvtvm_ml_ops.h
+++ b/drivers/ml/cnxk/mvtvm_ml_ops.h
@@ -11,8 +11,11 @@
 #include 
 
+#include "cnxk_ml_xstats.h"
+
 struct cnxk_ml_dev;
 struct cnxk_ml_model;
+struct cnxk_ml_layer;
 
 int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf);
 int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev);
@@ -22,4 +25,9 @@ int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *
 int mvtvm_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
 int mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
 
+void mvtvm_ml_model_xstat_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+				   uint16_t stat_id, uint16_t entry, char *suffix);
+uint64_t mvtvm_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+				  enum cnxk_ml_xstats_type type);
+
 #endif /* _MVTVM_ML_OPS_H_ */
diff --git a/drivers/ml/cnxk/mvtvm_ml_stubs.c b/drivers/ml/cnxk/mvtvm_ml_stubs.c
index 260a051b08..19af1d2703 100644
--- a/drivers/ml/cnxk/mvtvm_ml_stubs.c
+++ b/drivers/ml/cnxk/mvtvm_ml_stubs.c
@@ -8,6 +8,7 @@
 
 #include "cnxk_ml_dev.h"
 #include "cnxk_ml_model.h"
+#include "cnxk_ml_xstats.h"
 
 enum cnxk_ml_model_type
 mvtvm_ml_model_type_get(struct rte_ml_model_params *params)
@@ -44,6 +45,28 @@ mvtvm_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer
 	RTE_SET_USED(fp);
 }
 
+void
+mvtvm_ml_model_xstat_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+			      uint16_t stat_id, uint16_t entry, char *suffix)
+{
+	RTE_SET_USED(cnxk_mldev);
+	RTE_SET_USED(model);
+	RTE_SET_USED(stat_id);
+	RTE_SET_USED(entry);
+	RTE_SET_USED(suffix);
+}
+
+uint64_t
+mvtvm_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+			 enum cnxk_ml_xstats_type type)
+{
+	RTE_SET_USED(cnxk_mldev);
+	RTE_SET_USED(model);
+	RTE_SET_USED(type);
+
+	return 0;
+}
+
 int
 mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf)
 {
diff --git a/drivers/ml/cnxk/mvtvm_ml_stubs.h b/drivers/ml/cnxk/mvtvm_ml_stubs.h
index d6d0edbcf1..3fd1f04c35 100644
--- a/drivers/ml/cnxk/mvtvm_ml_stubs.h
+++ b/drivers/ml/cnxk/mvtvm_ml_stubs.h
@@ -7,6 +7,8 @@
 
 #include 
 
+#include "cnxk_ml_xstats.h"
+
 struct cnxk_ml_dev;
 struct cnxk_ml_model;
 struct cnxk_ml_layer;
@@ -24,5 +26,9 @@ int mvtvm_ml_model_get_layer_id(struct cnxk_ml_model *model, const char *layer_n
 				uint16_t *layer_id);
 struct cnxk_ml_io_info *mvtvm_ml_model_io_info_get(struct cnxk_ml_model *model, uint16_t layer_id);
 void mvtvm_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, FILE *fp);
+void mvtvm_ml_model_xstat_name_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+				   uint16_t stat_id, uint16_t entry, char *suffix);
+uint64_t mvtvm_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model,
+				  enum cnxk_ml_xstats_type type);
 
 #endif /* _MVTVM_ML_STUBS_H_ */
-- 
2.42.0
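[Editor note, not part of the patch] A rough application-side sketch of reading the new per-model runtime-latency xstats, assuming the rte_ml_dev_xstats_names_get()/rte_ml_dev_xstats_get() prototypes from DPDK 23.11's rte_mldev.h; dev_id, model_id, and dump_model_xstats() are hypothetical placeholders, not part of this series:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_mldev.h>

/* Hypothetical helper: print all model-mode xstats of one loaded model.
 * The driver fills names such as "<model-name>-Avg-RT-Latency-ns" (or
 * "-cycles" when sclk frequency is unavailable) at model load time.
 */
static void
dump_model_xstats(int16_t dev_id, int32_t model_id)
{
	struct rte_ml_dev_xstats_map map[64];
	uint64_t values[64];
	uint16_t ids[64];
	int n, i;

	/* Query the xstat names and IDs exposed for this model */
	n = rte_ml_dev_xstats_names_get(dev_id, RTE_ML_DEV_XSTATS_MODEL, model_id, map, 64);
	if (n <= 0 || n > 64)
		return;

	for (i = 0; i < n; i++)
		ids[i] = map[i].id;

	/* Fetch the current values for those IDs */
	if (rte_ml_dev_xstats_get(dev_id, RTE_ML_DEV_XSTATS_MODEL, model_id, ids, values, n) != n)
		return;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", map[i].name, values[i]);
}
```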