From mboxrd@z Thu Jan 1 00:00:00 1970
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v1 22/37] ml/cnxk: enable quantization and dequantization
Date: Thu, 8 Dec 2022 12:02:05 -0800
Message-ID: <20221208200220.20267-23-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221208200220.20267-1-syalavarthi@marvell.com>
References: <20221208200220.20267-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Implemented driver functions to quantize input data and dequantize output
data. Support is enabled for multiple batches. Quantization and
dequantization use the type conversion functions defined in the ML common
code.
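For context, the per-tensor conversions done by the ML common helpers
(ml_float32_to_int8() and friends, used below) boil down to scaling and
rounding by the model-supplied qscale/dscale factors. The snippet below is
only an illustrative sketch of that idea, not the mldev_utils
implementation; the rounding mode and saturation bounds shown are
assumptions.

#include <math.h>
#include <stdint.h>

/* Illustrative sketch: scale-based float32 <-> int8 conversion.
 * Assumes round-to-nearest and saturation to the int8 range; the
 * actual ML common helpers may differ in these details.
 */
static int8_t
quantize_f32_to_i8(float x, float qscale)
{
	float q = roundf(x * qscale);

	if (q > INT8_MAX)
		q = INT8_MAX;
	if (q < INT8_MIN)
		q = INT8_MIN;

	return (int8_t)q;
}

static float
dequantize_i8_to_f32(int8_t q, float dscale)
{
	/* dscale is the inverse scale supplied by the model metadata. */
	return (float)q * dscale;
}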
Signed-off-by: Srikanth Yalavarthi
---
 drivers/ml/cnxk/cn10k_ml_ops.c | 150 +++++++++++++++++++++++++++++++++
 1 file changed, 150 insertions(+)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index c96f17ebd8..9868d2a598 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -5,6 +5,8 @@
 #include <rte_mldev.h>
 #include <rte_mldev_pmd.h>
 
+#include <mldev_utils.h>
+
 #include "cn10k_ml_dev.h"
 #include "cn10k_ml_model.h"
 #include "cn10k_ml_ops.h"
@@ -987,6 +989,152 @@ cn10k_ml_io_output_size_get(struct rte_ml_dev *dev, int16_t model_id, uint32_t n
 	return 0;
 }
 
+static int
+cn10k_ml_io_quantize(struct rte_ml_dev *dev, int16_t model_id, uint16_t nb_batches, void *dbuffer,
+		     void *qbuffer)
+{
+	struct cn10k_ml_model *model;
+	uint8_t *lcl_dbuffer;
+	uint8_t *lcl_qbuffer;
+	uint32_t batch_id;
+	uint32_t i;
+	int ret;
+
+	model = dev->data->models[model_id];
+
+	if (model == NULL) {
+		plt_err("Invalid model_id = %d", model_id);
+		return -EINVAL;
+	}
+
+	lcl_dbuffer = dbuffer;
+	lcl_qbuffer = qbuffer;
+	batch_id = 0;
+
+next_batch:
+	for (i = 0; i < model->metadata.model.num_input; i++) {
+		if (model->metadata.input[i].input_type ==
+		    model->metadata.input[i].model_input_type) {
+			memcpy(lcl_qbuffer, lcl_dbuffer, model->addr.input[i].sz_d);
+		} else {
+			switch (model->metadata.input[i].model_input_type) {
+			case RTE_ML_IO_TYPE_INT8:
+				ret = ml_float32_to_int8(model->metadata.input[i].qscale,
+							 model->addr.input[i].nb_elements,
+							 lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT8:
+				ret = ml_float32_to_uint8(model->metadata.input[i].qscale,
+							  model->addr.input[i].nb_elements,
+							  lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_INT16:
+				ret = ml_float32_to_int16(model->metadata.input[i].qscale,
+							  model->addr.input[i].nb_elements,
+							  lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT16:
+				ret = ml_float32_to_uint16(model->metadata.input[i].qscale,
+							   model->addr.input[i].nb_elements,
+							   lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_FP16:
+				ret = ml_float32_to_float16(model->addr.input[i].nb_elements,
+							    lcl_dbuffer, lcl_qbuffer);
+				break;
+			default:
+				plt_err("Unsupported model_input_type[%u] : %u", i,
+					model->metadata.input[i].model_input_type);
+				ret = -ENOTSUP;
+			}
+			if (ret < 0)
+				return ret;
+		}
+
+		lcl_dbuffer += model->addr.input[i].sz_d;
+		lcl_qbuffer += model->addr.input[i].sz_q;
+	}
+
+	batch_id++;
+	if (batch_id < PLT_DIV_CEIL(nb_batches, model->batch_size))
+		goto next_batch;
+
+	return 0;
+}
+
+static int
+cn10k_ml_io_dequantize(struct rte_ml_dev *dev, int16_t model_id, uint16_t nb_batches, void *qbuffer,
+		       void *dbuffer)
+{
+	struct cn10k_ml_model *model;
+	uint8_t *lcl_qbuffer;
+	uint8_t *lcl_dbuffer;
+	uint32_t batch_id;
+	uint32_t i;
+	int ret;
+
+	model = dev->data->models[model_id];
+
+	if (model == NULL) {
+		plt_err("Invalid model_id = %d", model_id);
+		return -EINVAL;
+	}
+
+	lcl_dbuffer = dbuffer;
+	lcl_qbuffer = qbuffer;
+	batch_id = 0;
+
+next_batch:
+	for (i = 0; i < model->metadata.model.num_output; i++) {
+		if (model->metadata.output[i].output_type ==
+		    model->metadata.output[i].model_output_type) {
+			memcpy(lcl_dbuffer, lcl_qbuffer, model->addr.output[i].sz_q);
+		} else {
+			switch (model->metadata.output[i].model_output_type) {
+			case RTE_ML_IO_TYPE_INT8:
+				ret = ml_int8_to_float32(model->metadata.output[i].dscale,
+							 model->addr.output[i].nb_elements,
+							 lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT8:
+				ret = ml_uint8_to_float32(model->metadata.output[i].dscale,
+							  model->addr.output[i].nb_elements,
+							  lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_INT16:
+				ret = ml_int16_to_float32(model->metadata.output[i].dscale,
+							  model->addr.output[i].nb_elements,
+							  lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT16:
+				ret = ml_uint16_to_float32(model->metadata.output[i].dscale,
+							   model->addr.output[i].nb_elements,
+							   lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_FP16:
+				ret = ml_float16_to_float32(model->addr.output[i].nb_elements,
+							    lcl_qbuffer, lcl_dbuffer);
+				break;
+			default:
+				plt_err("Unsupported model_output_type[%u] : %u", i,
+					model->metadata.output[i].model_output_type);
+				ret = -ENOTSUP;
+			}
+			if (ret < 0)
+				return ret;
+		}
+
+		lcl_qbuffer += model->addr.output[i].sz_q;
+		lcl_dbuffer += model->addr.output[i].sz_d;
+	}
+
+	batch_id++;
+	if (batch_id < PLT_DIV_CEIL(nb_batches, model->batch_size))
+		goto next_batch;
+
+	return 0;
+}
+
 struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* Device control ops */
 	.dev_info_get = cn10k_ml_dev_info_get,
@@ -1010,4 +1158,6 @@ struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* I/O ops */
 	.io_input_size_get = cn10k_ml_io_input_size_get,
 	.io_output_size_get = cn10k_ml_io_output_size_get,
+	.io_quantize = cn10k_ml_io_quantize,
+	.io_dequantize = cn10k_ml_io_dequantize,
 };
-- 
2.17.1
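
Usage note (not part of the patch): a minimal sketch of where the two ops
registered above sit in an inference flow, invoking the dev ops table
directly. Only cn10k_ml_ops and its io_quantize/io_dequantize members come
from this patch; the wrapper function, buffer names and the elided enqueue
step are hypothetical, and the sketch assumes the driver-internal headers
(rte_mldev_pmd.h, cn10k_ml_ops.h) are available.

/* Hypothetical illustration: quantize float inputs, run the model
 * (elided), then dequantize the raw outputs. Buffers are assumed to be
 * sized per the model's sz_d / sz_q I/O sizes.
 */
static int
run_model_once(struct rte_ml_dev *dev, int16_t model_id, uint16_t nb_batches,
	       void *dbuf_in, void *qbuf_in, void *qbuf_out, void *dbuf_out)
{
	int ret;

	ret = cn10k_ml_ops.io_quantize(dev, model_id, nb_batches, dbuf_in, qbuf_in);
	if (ret < 0)
		return ret;

	/* ... enqueue the quantized input and wait for job completion ... */

	return cn10k_ml_ops.io_dequantize(dev, model_id, nb_batches, qbuf_out, dbuf_out);
}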