From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi <syalavarthi@marvell.com>
Cc: , , ,
Subject: [PATCH v3 23/38] ml/cnxk: enable quantization and dequantization
Date: Tue, 20 Dec 2022 11:26:30 -0800
Message-ID: <20221220192645.14042-24-syalavarthi@marvell.com>
In-Reply-To: <20221220192645.14042-1-syalavarthi@marvell.com>
References: <20221208201806.21893-1-syalavarthi@marvell.com>
 <20221220192645.14042-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Implemented driver functions to quantize / dequantize input and output
data. Support is enabled for multiple batches. Quantization / dequantization
use the type conversion functions defined in ML common code.
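The type conversion helpers referenced above perform, per element, a
scale-and-convert step between float32 and the model's quantized type. As a
rough illustration of that idea only, a minimal sketch follows; the helper
name example_float32_to_int8(), the direction in which qscale is applied, the
rounding mode and the saturation behaviour are all assumptions here, not the
ML common implementation:

/* Illustrative sketch only: assumed semantics of a float32 -> int8 quantize
 * helper. The real conversion routines live in ML common code; this
 * hypothetical example_float32_to_int8() is not that implementation.
 */
#include <math.h>
#include <stdint.h>

static int
example_float32_to_int8(float qscale, uint64_t nb_elements, const float *in, int8_t *out)
{
	uint64_t i;
	float v;

	if (in == NULL || out == NULL || nb_elements == 0)
		return -1;

	for (i = 0; i < nb_elements; i++) {
		/* Apply the quantization scale, round to nearest, saturate to int8. */
		v = roundf(in[i] * qscale);
		if (v < INT8_MIN)
			v = INT8_MIN;
		else if (v > INT8_MAX)
			v = INT8_MAX;
		out[i] = (int8_t)v;
	}

	return 0;
}

The driver code below only dispatches to such helpers based on
model_input_type / model_output_type and advances the dense and quantized
buffer pointers by sz_d / sz_q per input, for each batch.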
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 drivers/ml/cnxk/cn10k_ml_ops.c | 150 +++++++++++++++++++++++++++++++++
 1 file changed, 150 insertions(+)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index d825c72a8e..e02b8335b7 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -5,6 +5,8 @@
 #include <rte_mldev.h>
 #include <rte_mldev_pmd.h>
 
+#include <mldev_utils.h>
+
 #include "cn10k_ml_dev.h"
 #include "cn10k_ml_model.h"
 #include "cn10k_ml_ops.h"
@@ -983,6 +985,152 @@ cn10k_ml_io_output_size_get(struct rte_ml_dev *dev, int16_t model_id, uint32_t n
 	return 0;
 }
 
+static int
+cn10k_ml_io_quantize(struct rte_ml_dev *dev, int16_t model_id, uint16_t nb_batches, void *dbuffer,
+		     void *qbuffer)
+{
+	struct cn10k_ml_model *model;
+	uint8_t *lcl_dbuffer;
+	uint8_t *lcl_qbuffer;
+	uint32_t batch_id;
+	uint32_t i;
+	int ret;
+
+	model = dev->data->models[model_id];
+
+	if (model == NULL) {
+		plt_err("Invalid model_id = %d", model_id);
+		return -EINVAL;
+	}
+
+	lcl_dbuffer = dbuffer;
+	lcl_qbuffer = qbuffer;
+	batch_id = 0;
+
+next_batch:
+	for (i = 0; i < model->metadata.model.num_input; i++) {
+		if (model->metadata.input[i].input_type ==
+		    model->metadata.input[i].model_input_type) {
+			rte_memcpy(lcl_qbuffer, lcl_dbuffer, model->addr.input[i].sz_d);
+		} else {
+			switch (model->metadata.input[i].model_input_type) {
+			case RTE_ML_IO_TYPE_INT8:
+				ret = ml_float32_to_int8(model->metadata.input[i].qscale,
+							 model->addr.input[i].nb_elements,
+							 lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT8:
+				ret = ml_float32_to_uint8(model->metadata.input[i].qscale,
+							  model->addr.input[i].nb_elements,
+							  lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_INT16:
+				ret = ml_float32_to_int16(model->metadata.input[i].qscale,
+							  model->addr.input[i].nb_elements,
+							  lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT16:
+				ret = ml_float32_to_uint16(model->metadata.input[i].qscale,
+							   model->addr.input[i].nb_elements,
+							   lcl_dbuffer, lcl_qbuffer);
+				break;
+			case RTE_ML_IO_TYPE_FP16:
+				ret = ml_float32_to_float16(model->addr.input[i].nb_elements,
+							    lcl_dbuffer, lcl_qbuffer);
+				break;
+			default:
+				plt_err("Unsupported model_input_type[%u] : %u", i,
+					model->metadata.input[i].model_input_type);
+				ret = -ENOTSUP;
+			}
+			if (ret < 0)
+				return ret;
+		}
+
+		lcl_dbuffer += model->addr.input[i].sz_d;
+		lcl_qbuffer += model->addr.input[i].sz_q;
+	}
+
+	batch_id++;
+	if (batch_id < PLT_DIV_CEIL(nb_batches, model->batch_size))
+		goto next_batch;
+
+	return 0;
+}
+
+static int
+cn10k_ml_io_dequantize(struct rte_ml_dev *dev, int16_t model_id, uint16_t nb_batches, void *qbuffer,
+		       void *dbuffer)
+{
+	struct cn10k_ml_model *model;
+	uint8_t *lcl_qbuffer;
+	uint8_t *lcl_dbuffer;
+	uint32_t batch_id;
+	uint32_t i;
+	int ret;
+
+	model = dev->data->models[model_id];
+
+	if (model == NULL) {
+		plt_err("Invalid model_id = %d", model_id);
+		return -EINVAL;
+	}
+
+	lcl_dbuffer = dbuffer;
+	lcl_qbuffer = qbuffer;
+	batch_id = 0;
+
+next_batch:
+	for (i = 0; i < model->metadata.model.num_output; i++) {
+		if (model->metadata.output[i].output_type ==
+		    model->metadata.output[i].model_output_type) {
+			rte_memcpy(lcl_dbuffer, lcl_qbuffer, model->addr.output[i].sz_q);
+		} else {
+			switch (model->metadata.output[i].model_output_type) {
+			case RTE_ML_IO_TYPE_INT8:
+				ret = ml_int8_to_float32(model->metadata.output[i].dscale,
+							 model->addr.output[i].nb_elements,
+							 lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT8:
+				ret = ml_uint8_to_float32(model->metadata.output[i].dscale,
+							  model->addr.output[i].nb_elements,
+							  lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_INT16:
+				ret = ml_int16_to_float32(model->metadata.output[i].dscale,
+							  model->addr.output[i].nb_elements,
+							  lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_UINT16:
+				ret = ml_uint16_to_float32(model->metadata.output[i].dscale,
+							   model->addr.output[i].nb_elements,
+							   lcl_qbuffer, lcl_dbuffer);
+				break;
+			case RTE_ML_IO_TYPE_FP16:
+				ret = ml_float16_to_float32(model->addr.output[i].nb_elements,
+							    lcl_qbuffer, lcl_dbuffer);
+				break;
+			default:
+				plt_err("Unsupported model_output_type[%u] : %u", i,
+					model->metadata.output[i].model_output_type);
+				ret = -ENOTSUP;
+			}
+			if (ret < 0)
+				return ret;
+		}
+
+		lcl_qbuffer += model->addr.output[i].sz_q;
+		lcl_dbuffer += model->addr.output[i].sz_d;
+	}
+
+	batch_id++;
+	if (batch_id < PLT_DIV_CEIL(nb_batches, model->batch_size))
+		goto next_batch;
+
+	return 0;
+}
+
 struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* Device control ops */
 	.dev_info_get = cn10k_ml_dev_info_get,
@@ -1006,4 +1154,6 @@ struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* I/O ops */
 	.io_input_size_get = cn10k_ml_io_input_size_get,
 	.io_output_size_get = cn10k_ml_io_output_size_get,
+	.io_quantize = cn10k_ml_io_quantize,
+	.io_dequantize = cn10k_ml_io_dequantize,
 };
-- 
2.17.1
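For completeness, a caller-side sketch of how an application might exercise
these ops through the mldev API is below. It assumes the public
rte_ml_io_quantize() / rte_ml_io_dequantize() wrappers take the same arguments
as the driver ops added in this patch (device and model identifiers, a batch
count, dense and quantized buffers); buffer sizing via the io_*_size_get ops
and the inference enqueue/dequeue itself are elided. Treat it as an
illustration under those assumptions, not a statement of the final API.

/* Usage sketch; the rte_ml_io_* prototypes are assumed to mirror the
 * driver op signatures above (dev id, model id, batch count, buffers).
 */
#include <rte_mldev.h>

static int
example_quantize_infer_dequantize(int16_t dev_id, int16_t model_id, uint16_t nb_batches,
				  void *dbuf_in, void *qbuf_in, void *qbuf_out, void *dbuf_out)
{
	int ret;

	/* float32 application data -> model's quantized input format. */
	ret = rte_ml_io_quantize(dev_id, model_id, nb_batches, dbuf_in, qbuf_in);
	if (ret != 0)
		return ret;

	/* ... enqueue the inference with qbuf_in, gather results into qbuf_out ... */

	/* quantized model output -> float32 for the application. */
	return rte_ml_io_dequantize(dev_id, model_id, nb_batches, qbuf_out, dbuf_out);
}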