From mboxrd@z Thu Jan  1 00:00:00 1970
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi <syalavarthi@marvell.com>
Cc: dev@dpdk.org
Subject: [PATCH v4 18/39] ml/cnxk: enable support to start an ML model
Date: Wed, 1 Feb 2023 01:22:49 -0800
Message-ID: <20230201092310.23252-19-syalavarthi@marvell.com>
In-Reply-To: <20230201092310.23252-1-syalavarthi@marvell.com>
References: <20221208200220.20267-1-syalavarthi@marvell.com> <20230201092310.23252-1-syalavarthi@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

Implemented the model start driver function. A model start job is checked
for completion in synchronous mode. The tilemask and OCM slot are computed
before starting the model. Model start is enqueued through scratch
registers. OCM pages are reserved after model start completes.
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 drivers/ml/cnxk/cn10k_ml_dev.h |   3 +
 drivers/ml/cnxk/cn10k_ml_ops.c | 207 +++++++++++++++++++++++++++++++++
 drivers/ml/cnxk/cn10k_ml_ops.h |   4 +
 3 files changed, 214 insertions(+)

diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h
index 68fcc957fa..8f6bc24370 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.h
+++ b/drivers/ml/cnxk/cn10k_ml_dev.h
@@ -33,6 +33,9 @@
 /* ML command timeout in seconds */
 #define ML_CN10K_CMD_TIMEOUT 5
 
+/* ML slow-path job flags */
+#define ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE BIT(0)
+
 /* Poll mode job state */
 #define ML_CN10K_POLL_JOB_START	 0
 #define ML_CN10K_POLL_JOB_FINISH 1
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 8603cba20e..65f69ba8fb 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -114,6 +114,64 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des
 	return NULL;
 }
 
+static void
+cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_model *model,
+				struct cn10k_ml_req *req, enum cn10k_ml_job_type job_type)
+{
+	struct cn10k_ml_model_metadata *metadata;
+	struct cn10k_ml_model_addr *addr;
+
+	metadata = &model->metadata;
+	addr = &model->addr;
+
+	memset(&req->jd, 0, sizeof(struct cn10k_ml_jd));
+	req->jd.hdr.jce.w0.u64 = 0;
+	req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->status);
+	req->jd.hdr.model_id = model->model_id;
+	req->jd.hdr.job_type = job_type;
+	req->jd.hdr.fp_flags = 0x0;
+	req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &req->result);
+
+	if (job_type == ML_CN10K_JOB_TYPE_MODEL_START) {
+		if (!model->metadata.model.ocm_relocatable)
+			req->jd.hdr.sp_flags = ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE;
+		else
+			req->jd.hdr.sp_flags = 0x0;
+		req->jd.model_start.model_src_ddr_addr =
+			PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, addr->init_load_addr));
+		req->jd.model_start.model_dst_ddr_addr =
+			PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, addr->init_run_addr));
+		req->jd.model_start.model_init_offset = 0x0;
+		req->jd.model_start.model_main_offset = metadata->init_model.file_size;
+		req->jd.model_start.model_finish_offset =
+			metadata->init_model.file_size + metadata->main_model.file_size;
+		req->jd.model_start.model_init_size = metadata->init_model.file_size;
+		req->jd.model_start.model_main_size = metadata->main_model.file_size;
+		req->jd.model_start.model_finish_size = metadata->finish_model.file_size;
+		req->jd.model_start.model_wb_offset = metadata->init_model.file_size +
+						      metadata->main_model.file_size +
+						      metadata->finish_model.file_size;
+		req->jd.model_start.num_layers = metadata->model.num_layers;
+		req->jd.model_start.num_gather_entries = 0;
+		req->jd.model_start.num_scatter_entries = 0;
+		req->jd.model_start.tilemask = 0; /* Updated after reserving pages */
+		req->jd.model_start.batch_size = model->batch_size;
+		req->jd.model_start.ocm_wb_base_address = 0; /* Updated after reserving pages */
+		req->jd.model_start.ocm_wb_range_start = metadata->model.ocm_wb_range_start;
+		req->jd.model_start.ocm_wb_range_end = metadata->model.ocm_wb_range_end;
+		req->jd.model_start.ddr_wb_base_address = PLT_U64_CAST(roc_ml_addr_ap2mlip(
+			&mldev->roc,
+			PLT_PTR_ADD(addr->finish_load_addr, metadata->finish_model.file_size)));
+		req->jd.model_start.ddr_wb_range_start = metadata->model.ddr_wb_range_start;
+		req->jd.model_start.ddr_wb_range_end = metadata->model.ddr_wb_range_end;
+		req->jd.model_start.input.s.ddr_range_start = metadata->model.ddr_input_range_start;
+		req->jd.model_start.input.s.ddr_range_end = metadata->model.ddr_input_range_end;
+		req->jd.model_start.output.s.ddr_range_start =
+			metadata->model.ddr_output_range_start;
+		req->jd.model_start.output.s.ddr_range_end = metadata->model.ddr_output_range_end;
+	}
+}
+
 static int
 cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info)
 {
@@ -561,6 +619,154 @@ cn10k_ml_model_unload(struct rte_ml_dev *dev, int16_t model_id)
 	return plt_memzone_free(plt_memzone_lookup(str));
 }
 
+int
+cn10k_ml_model_start(struct rte_ml_dev *dev, int16_t model_id)
+{
+	struct cn10k_ml_model *model;
+	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_ocm *ocm;
+	struct cn10k_ml_req *req;
+
+	bool job_enqueued;
+	bool job_dequeued;
+	uint8_t num_tiles;
+	uint64_t tilemask;
+	int wb_page_start;
+	int tile_start;
+	int tile_end;
+	bool locked;
+	int ret = 0;
+
+	mldev = dev->data->dev_private;
+	ocm = &mldev->ocm;
+	model = dev->data->models[model_id];
+
+	if (model == NULL) {
+		plt_err("Invalid model_id = %d", model_id);
+		return -EINVAL;
+	}
+
+	/* Prepare JD */
+	req = model->req;
+	cn10k_ml_prep_sp_job_descriptor(mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START);
+	req->result.error_code = 0x0;
+	req->result.user_ptr = NULL;
+
+	plt_write64(ML_CN10K_POLL_JOB_START, &req->status);
+	plt_wmb();
+
+	num_tiles = model->metadata.model.tile_end - model->metadata.model.tile_start + 1;
+
+	locked = false;
+	while (!locked) {
+		if (plt_spinlock_trylock(&model->lock) != 0) {
+			if (model->state == ML_CN10K_MODEL_STATE_STARTED) {
+				plt_ml_dbg("Model already started, model = 0x%016lx",
+					   PLT_U64_CAST(model));
+				plt_spinlock_unlock(&model->lock);
+				return 1;
+			}
+
+			if (model->state == ML_CN10K_MODEL_STATE_JOB_ACTIVE) {
+				plt_err("A slow-path job is active for the model = 0x%016lx",
+					PLT_U64_CAST(model));
+				plt_spinlock_unlock(&model->lock);
+				return -EBUSY;
+			}
+
+			model->state = ML_CN10K_MODEL_STATE_JOB_ACTIVE;
+			plt_spinlock_unlock(&model->lock);
+			locked = true;
+		}
+	}
+
+	while (!model->model_mem_map.ocm_reserved) {
+		if (plt_spinlock_trylock(&ocm->lock) != 0) {
+			wb_page_start = cn10k_ml_ocm_tilemask_find(
+				dev, num_tiles, model->model_mem_map.wb_pages,
+				model->model_mem_map.scratch_pages, &tilemask);
+
+			if (wb_page_start == -1) {
+				plt_err("Free pages not available on OCM tiles");
+				plt_err("Failed to start model = 0x%016lx, name = %s",
+					PLT_U64_CAST(model), model->metadata.model.name);
+
+				plt_spinlock_unlock(&ocm->lock);
+				return -ENOMEM;
+			}
+
+			model->model_mem_map.tilemask = tilemask;
+			model->model_mem_map.wb_page_start = wb_page_start;
+
+			cn10k_ml_ocm_reserve_pages(
+				dev, model->model_id, model->model_mem_map.tilemask,
+				model->model_mem_map.wb_page_start, model->model_mem_map.wb_pages,
+				model->model_mem_map.scratch_pages);
+			model->model_mem_map.ocm_reserved = true;
+			plt_spinlock_unlock(&ocm->lock);
+		}
+	}
+
+	/* Update JD */
+	cn10k_ml_ocm_tilecount(model->model_mem_map.tilemask, &tile_start, &tile_end);
+	req->jd.model_start.tilemask = GENMASK_ULL(tile_end, tile_start);
+	req->jd.model_start.ocm_wb_base_address =
+		model->model_mem_map.wb_page_start * ocm->page_size;
+
+	job_enqueued = false;
+	job_dequeued = false;
+	do {
+		if (!job_enqueued) {
+			req->timeout = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz();
+			job_enqueued = roc_ml_scratch_enqueue(&mldev->roc, &req->jd);
+		}
+
+		if (job_enqueued && !job_dequeued)
+			job_dequeued = roc_ml_scratch_dequeue(&mldev->roc, &req->jd);
+
+		if (job_dequeued)
+			break;
+	} while (plt_tsc_cycles() < req->timeout);
+
+	if (job_dequeued) {
+		if (plt_read64(&req->status) == ML_CN10K_POLL_JOB_FINISH) {
+			if (req->result.error_code == 0)
+				ret = 0;
+			else
+				ret = -1;
+		}
+	} else { /* Reset scratch registers */
+		roc_ml_scratch_queue_reset(&mldev->roc);
+		ret = -ETIME;
+	}
+
+	locked = false;
+	while (!locked) {
+		if (plt_spinlock_trylock(&model->lock) != 0) {
+			if (ret == 0)
+				model->state = ML_CN10K_MODEL_STATE_STARTED;
+			else
+				model->state = ML_CN10K_MODEL_STATE_UNKNOWN;
+
+			plt_spinlock_unlock(&model->lock);
+			locked = true;
+		}
+	}
+
+	if (model->state == ML_CN10K_MODEL_STATE_UNKNOWN) {
+		while (model->model_mem_map.ocm_reserved) {
+			if (plt_spinlock_trylock(&ocm->lock) != 0) {
+				cn10k_ml_ocm_free_pages(dev, model->model_id);
+				model->model_mem_map.ocm_reserved = false;
+				model->model_mem_map.tilemask = 0x0;
+				plt_spinlock_unlock(&ocm->lock);
+			}
+		}
+	}
+
+	return ret;
+}
+
 struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* Device control ops */
 	.dev_info_get = cn10k_ml_dev_info_get,
@@ -576,4 +782,5 @@ struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* Model ops */
 	.model_load = cn10k_ml_model_load,
 	.model_unload = cn10k_ml_model_unload,
+	.model_start = cn10k_ml_model_start,
 };
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index 981aa52655..af2ea19dce 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -25,6 +25,9 @@ struct cn10k_ml_req {
 
 	/* Job command */
 	struct ml_job_cmd_s jcmd;
+
+	/* Timeout cycle */
+	uint64_t timeout;
 } __rte_aligned(ROC_ALIGN);
 
 /* Request queue */
@@ -61,5 +64,6 @@ extern struct rte_ml_dev_ops cn10k_ml_ops;
 
 int cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params,
 			int16_t *model_id);
 int cn10k_ml_model_unload(struct rte_ml_dev *dev, int16_t model_id);
+int cn10k_ml_model_start(struct rte_ml_dev *dev, int16_t model_id);
 
 #endif /* _CN10K_ML_OPS_H_ */
-- 
2.17.1