From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Cc: dev@dpdk.org
Subject: [PATCH v2 06/12] app/mldev: add test case to interleave inferences
Date: Tue, 29 Nov 2022 00:21:03 -0800
Message-ID: <20221129082109.6809-6-syalavarthi@marvell.com>
In-Reply-To: <20221129082109.6809-1-syalavarthi@marvell.com>
References: <20221129070746.20396-2-syalavarthi@marvell.com>
 <20221129082109.6809-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Added test case to interleave inference requests from multiple models.
The interleave test loads and starts all models, then launches inference
requests for the models using the available queue-pairs.

Operation sequence when testing with N models and R repetitions:

    (load + start) x N -> (enqueue + dequeue) x N x R ... -> (stop + unload) x N

The test can be executed by selecting the "inference_interleave" test.
Signed-off-by: Srikanth Yalavarthi
---
 app/test-mldev/meson.build                 |   1 +
 app/test-mldev/ml_options.c                |   3 +-
 app/test-mldev/test_inference_common.c     |  12 +--
 app/test-mldev/test_inference_common.h     |   4 +-
 app/test-mldev/test_inference_interleave.c | 118 +++++++++++++++++++++
 5 files changed, 129 insertions(+), 9 deletions(-)
 create mode 100644 app/test-mldev/test_inference_interleave.c

diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build
index 475d76d126..41d22fb22c 100644
--- a/app/test-mldev/meson.build
+++ b/app/test-mldev/meson.build
@@ -18,6 +18,7 @@ sources = files(
         'test_model_ops.c',
         'test_inference_common.c',
         'test_inference_ordered.c',
+        'test_inference_interleave.c',
 )

 deps += ['mldev']

diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c
index 10dad18fff..01ea050ee7 100644
--- a/app/test-mldev/ml_options.c
+++ b/app/test-mldev/ml_options.c
@@ -162,7 +162,8 @@ ml_dump_test_options(const char *testname)
         printf("\n");
     }

-    if (strcmp(testname, "inference_ordered") == 0) {
+    if ((strcmp(testname, "inference_ordered") == 0) ||
+        (strcmp(testname, "inference_interleave") == 0)) {
         printf("\t\t--filelist : comma separated list of model, input and output\n"
                "\t\t--repetitions : number of inference repetitions\n");
         printf("\n");

diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c
index 8b5dc89346..f0b15861a0 100644
--- a/app/test-mldev/test_inference_common.c
+++ b/app/test-mldev/test_inference_common.c
@@ -115,7 +115,7 @@ ml_dequeue_single(void *arg)
     total_deq += burst_deq;
     if (unlikely(op->status == RTE_ML_OP_STATUS_ERROR)) {
         rte_ml_op_error_get(t->cmn.opt->dev_id, op, &error);
-        ml_err("error_code = 0x%016lx, error_message = %s\n", error.errcode,
+        ml_err("error_code = 0x%" PRIx64 ", error_message = %s\n", error.errcode,
                error.message);
     }
     req = (struct ml_request *)op->user_ptr;
@@ -334,10 +334,10 @@ ml_request_initialize(struct rte_mempool *mp, void *opaque, void *obj, unsigned
     RTE_SET_USED(mp);
     RTE_SET_USED(obj_idx);

-    req->input = RTE_PTR_ADD(
-        obj, RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size));
-    req->output = RTE_PTR_ADD(req->input, RTE_ALIGN_CEIL(t->model[t->fid].inp_qsize,
-                                                         t->cmn.dev_info.min_align_size));
+    req->input = (uint8_t *)obj +
+                 RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size);
+    req->output = req->input +
+                  RTE_ALIGN_CEIL(t->model[t->fid].inp_qsize, t->cmn.dev_info.min_align_size);
     req->niters = 0;

     /* quantize data */
@@ -387,7 +387,7 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f
     }

     t->model[fid].input = mz->addr;
-    t->model[fid].output = RTE_PTR_ADD(t->model[fid].input, t->model[fid].inp_dsize);
+    t->model[fid].output = t->model[fid].input + t->model[fid].inp_dsize;

     /* load input file */
     fp = fopen(opt->filelist[fid].input, "r");

diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h
index 91007954b4..b058abada4 100644
--- a/app/test-mldev/test_inference_common.h
+++ b/app/test-mldev/test_inference_common.h
@@ -17,8 +17,8 @@
 #include "test_model_common.h"

 struct ml_request {
-    void *input;
-    void *output;
+    uint8_t *input;
+    uint8_t *output;
     int16_t fid;
     uint64_t niters;
 };

diff --git a/app/test-mldev/test_inference_interleave.c b/app/test-mldev/test_inference_interleave.c
new file mode 100644
index 0000000000..74ad0c597f
--- /dev/null
+++ b/app/test-mldev/test_inference_interleave.c
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 Marvell.
+ */
+
+#include
+
+#include
+#include
+
+#include "ml_common.h"
+#include "ml_test.h"
+#include "test_inference_common.h"
+#include "test_model_common.h"
+
+static int
+test_inference_interleave_driver(struct ml_test *test, struct ml_options *opt)
+{
+    struct test_inference *t;
+    int16_t fid = 0;
+    int ret = 0;
+
+    t = ml_test_priv(test);
+
+    ret = ml_inference_mldev_setup(test, opt);
+    if (ret != 0)
+        return ret;
+
+    ret = ml_inference_mem_setup(test, opt);
+    if (ret != 0)
+        return ret;
+
+    /* load and start all models */
+    for (fid = 0; fid < opt->nb_filelist; fid++) {
+        ret = ml_model_load(test, opt, &t->model[fid], fid);
+        if (ret != 0)
+            goto error;
+
+        ret = ml_model_start(test, opt, &t->model[fid], fid);
+        if (ret != 0)
+            goto error;
+
+        ret = ml_inference_iomem_setup(test, opt, fid);
+        if (ret != 0)
+            goto error;
+    }
+
+    /* launch inference requests */
+    ret = ml_inference_launch_cores(test, opt, 0, opt->nb_filelist - 1);
+    if (ret != 0) {
+        ml_err("failed to launch cores");
+        goto error;
+    }
+
+    rte_eal_mp_wait_lcore();
+
+    /* stop and unload all models */
+    for (fid = 0; fid < opt->nb_filelist; fid++) {
+        ret = ml_inference_result(test, opt, fid);
+        if (ret != ML_TEST_SUCCESS)
+            goto error;
+
+        ml_inference_iomem_destroy(test, opt, fid);
+
+        ret = ml_model_stop(test, opt, &t->model[fid], fid);
+        if (ret != 0)
+            goto error;
+
+        ret = ml_model_unload(test, opt, &t->model[fid], fid);
+        if (ret != 0)
+            goto error;
+    }
+
+    ml_inference_mem_destroy(test, opt);
+
+    ret = ml_inference_mldev_destroy(test, opt);
+    if (ret != 0)
+        return ret;
+
+    t->cmn.result = ML_TEST_SUCCESS;
+
+    return 0;
+
+error:
+    ml_inference_mem_destroy(test, opt);
+    for (fid = 0; fid < opt->nb_filelist; fid++) {
+        ml_inference_iomem_destroy(test, opt, fid);
+        ml_model_stop(test, opt, &t->model[fid], fid);
+        ml_model_unload(test, opt, &t->model[fid], fid);
+    }
+
+    t->cmn.result = ML_TEST_FAILED;
+
+    return ret;
+}
+
+static int
+test_inference_interleave_result(struct ml_test *test, struct ml_options *opt)
+{
+    struct test_inference *t;
+
+    RTE_SET_USED(opt);
+
+    t = ml_test_priv(test);
+
+    return t->cmn.result;
+}
+
+static const struct ml_test_ops inference_interleave = {
+    .cap_check = test_inference_cap_check,
+    .opt_check = test_inference_opt_check,
+    .opt_dump = test_inference_opt_dump,
+    .test_setup = test_inference_setup,
+    .test_destroy = test_inference_destroy,
+    .test_driver = test_inference_interleave_driver,
+    .test_result = test_inference_interleave_result,
+};
+
+ML_TEST_REGISTER(inference_interleave);
--
2.17.1