From mboxrd@z Thu Jan  1 00:00:00 1970
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v6 08/12] app/mldev: enable support for queue pairs and size
Date: Sat, 11 Mar 2023 07:09:01 -0800
Message-ID: <20230311150905.26824-9-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230311150905.26824-1-syalavarthi@marvell.com>
References: <20221129070746.20396-1-syalavarthi@marvell.com> <20230311150905.26824-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Added support to create multiple queue-pairs per device for enqueuing
and dequeuing inference requests. The number of queue-pairs to create
can be specified through the "--queue_pairs" option, and the number of
descriptors per queue-pair through the "--queue_size" option. Inference
requests for a model are distributed across all available queue-pairs.
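For example, assuming the application binary is built as dpdk-test-mldev
and using placeholder model/input/output file names, a run with four
queue-pairs of depth 64 (which needs 2 * 4 + 1 = 9 lcores) could be
launched as:

  dpdk-test-mldev -l 0-8 -- --test=inference_ordered \
      --filelist=model.bin,input.bin,output.bin \
      --repetitions=100 --queue_pairs=4 --queue_size=64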
Signed-off-by: Srikanth Yalavarthi
Acked-by: Anup Prabhu
---
 app/test-mldev/ml_options.c            | 40 ++++++++++---
 app/test-mldev/ml_options.h            |  4 ++
 app/test-mldev/test_common.c           |  2 +-
 app/test-mldev/test_inference_common.c | 79 +++++++++++++++++++++-----
 app/test-mldev/test_inference_common.h |  1 +
 5 files changed, 102 insertions(+), 24 deletions(-)

diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c
index 7fbf9c5678..c81dec6e30 100644
--- a/app/test-mldev/ml_options.c
+++ b/app/test-mldev/ml_options.c
@@ -25,6 +25,8 @@ ml_options_default(struct ml_options *opt)
 	opt->nb_filelist = 0;
 	opt->repetitions = 1;
 	opt->burst_size = 1;
+	opt->queue_pairs = 1;
+	opt->queue_size = 1;
 	opt->debug = false;
 }
 
@@ -152,11 +154,30 @@ ml_parse_burst_size(struct ml_options *opt, const char *arg)
 	return parser_read_uint16(&opt->burst_size, arg);
 }
 
+static int
+ml_parse_queue_pairs(struct ml_options *opt, const char *arg)
+{
+	int ret;
+
+	ret = parser_read_uint16(&opt->queue_pairs, arg);
+
+	return ret;
+}
+
+static int
+ml_parse_queue_size(struct ml_options *opt, const char *arg)
+{
+	return parser_read_uint16(&opt->queue_size, arg);
+}
+
 static void
 ml_dump_test_options(const char *testname)
 {
-	if (strcmp(testname, "device_ops") == 0)
+	if (strcmp(testname, "device_ops") == 0) {
+		printf("\t\t--queue_pairs : number of queue pairs to create\n"
+		       "\t\t--queue_size : size of queue-pair\n");
 		printf("\n");
+	}
 
 	if (strcmp(testname, "model_ops") == 0) {
 		printf("\t\t--models : comma separated list of models\n");
@@ -167,7 +188,9 @@ ml_dump_test_options(const char *testname)
 	    (strcmp(testname, "inference_interleave") == 0)) {
 		printf("\t\t--filelist : comma separated list of model, input and output\n"
 		       "\t\t--repetitions : number of inference repetitions\n"
-		       "\t\t--burst_size : inference burst size\n");
+		       "\t\t--burst_size : inference burst size\n"
+		       "\t\t--queue_pairs : number of queue pairs to create\n"
+		       "\t\t--queue_size : size of queue-pair\n");
 		printf("\n");
 	}
 }
@@ -187,11 +210,11 @@ print_usage(char *program)
 	ml_test_dump_names(ml_dump_test_options);
 }
 
-static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0},
-				 {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0},
-				 {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0},
-				 {ML_BURST_SIZE, 1, 0, 0}, {ML_DEBUG, 0, 0, 0},
-				 {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}};
+static struct option lgopts[] = {
+	{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0},
+	{ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0},
+	{ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, {ML_QUEUE_SIZE, 1, 0, 0},
+	{ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}};
 
 static int
 ml_opts_parse_long(int opt_idx, struct ml_options *opt)
@@ -202,7 +225,8 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt)
 		{ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id},
 		{ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models},
 		{ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions},
-		{ML_BURST_SIZE, ml_parse_burst_size},
+		{ML_BURST_SIZE, ml_parse_burst_size}, {ML_QUEUE_PAIRS, ml_parse_queue_pairs},
+		{ML_QUEUE_SIZE, ml_parse_queue_size},
 	};
 
 	for (i = 0; i < RTE_DIM(parsermap); i++) {
diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h
index 00342d8a0c..c4018ee9d1 100644
--- a/app/test-mldev/ml_options.h
+++ b/app/test-mldev/ml_options.h
@@ -19,6 +19,8 @@
 #define ML_FILELIST    ("filelist")
 #define ML_REPETITIONS ("repetitions")
 #define ML_BURST_SIZE  ("burst_size")
+#define ML_QUEUE_PAIRS ("queue_pairs")
+#define ML_QUEUE_SIZE  ("queue_size")
 #define ML_DEBUG       ("debug")
 #define ML_HELP        ("help")
 
@@ -36,6 +38,8 @@ struct ml_options {
 	uint8_t nb_filelist;
 	uint64_t repetitions;
 	uint16_t burst_size;
+	uint16_t queue_pairs;
+	uint16_t queue_size;
 	bool debug;
 };
 
diff --git a/app/test-mldev/test_common.c b/app/test-mldev/test_common.c
index 8c4da4609a..016b31c6ba 100644
--- a/app/test-mldev/test_common.c
+++ b/app/test-mldev/test_common.c
@@ -75,7 +75,7 @@ ml_test_device_configure(struct ml_test *test, struct ml_options *opt)
 	/* configure device */
 	dev_config.socket_id = opt->socket_id;
 	dev_config.nb_models = t->dev_info.max_models;
-	dev_config.nb_queue_pairs = t->dev_info.max_queue_pairs;
+	dev_config.nb_queue_pairs = opt->queue_pairs;
 	ret = rte_ml_dev_configure(opt->dev_id, &dev_config);
 	if (ret != 0) {
 		ml_err("Failed to configure ml device, dev_id = %d\n", opt->dev_id);
diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c
index 35323306de..b4ad3c4b72 100644
--- a/app/test-mldev/test_inference_common.c
+++ b/app/test-mldev/test_inference_common.c
@@ -66,7 +66,7 @@ ml_enqueue_single(void *arg)
 	req->fid = fid;
 
 enqueue_req:
-	burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &op, 1);
+	burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1);
 	if (burst_enq == 0)
 		goto enqueue_req;
 
@@ -103,7 +103,7 @@ ml_dequeue_single(void *arg)
 		return 0;
 
 dequeue_req:
-	burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, &op, 1);
+	burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1);
 
 	if (likely(burst_deq == 1)) {
 		total_deq += burst_deq;
@@ -183,7 +183,8 @@ ml_enqueue_burst(void *arg)
 	pending = ops_count;
 
 enqueue_reqs:
-	burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &args->enq_ops[idx], pending);
+	burst_enq =
+		rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &args->enq_ops[idx], pending);
 	pending = pending - burst_enq;
 
 	if (pending > 0) {
@@ -224,8 +225,8 @@ ml_dequeue_burst(void *arg)
 		return 0;
 
 dequeue_burst:
-	burst_deq =
-		rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, args->deq_ops, t->cmn.opt->burst_size);
+	burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, args->deq_ops,
+					 t->cmn.opt->burst_size);
 
 	if (likely(burst_deq > 0)) {
 		total_deq += burst_deq;
@@ -259,6 +260,19 @@ test_inference_cap_check(struct ml_options *opt)
 		return false;
 
 	rte_ml_dev_info_get(opt->dev_id, &dev_info);
+
+	if (opt->queue_pairs > dev_info.max_queue_pairs) {
+		ml_err("Insufficient capabilities: queue_pairs = %u, max_queue_pairs = %u",
+		       opt->queue_pairs, dev_info.max_queue_pairs);
+		return false;
+	}
+
+	if (opt->queue_size > dev_info.max_desc) {
+		ml_err("Insufficient capabilities: queue_size = %u, max_desc = %u", opt->queue_size,
+		       dev_info.max_desc);
+		return false;
+	}
+
 	if (opt->nb_filelist > dev_info.max_models) {
 		ml_err("Insufficient capabilities: Filelist count exceeded device limit, count = %u (max limit = %u)",
 		       opt->nb_filelist, dev_info.max_models);
@@ -310,10 +324,21 @@ test_inference_opt_check(struct ml_options *opt)
 		return -EINVAL;
 	}
 
+	if (opt->queue_pairs == 0) {
+		ml_err("Invalid option, queue_pairs = %u\n", opt->queue_pairs);
+		return -EINVAL;
+	}
+
+	if (opt->queue_size == 0) {
+		ml_err("Invalid option, queue_size = %u\n", opt->queue_size);
+		return -EINVAL;
+	}
+
 	/* check number of available lcores. */
-	if (rte_lcore_count() < 3) {
+	if (rte_lcore_count() < (uint32_t)(opt->queue_pairs * 2 + 1)) {
 		ml_err("Insufficient lcores = %u\n", rte_lcore_count());
-		ml_err("Minimum lcores required to create %u queue-pairs = %u\n", 1, 3);
+		ml_err("Minimum lcores required to create %u queue-pairs = %u\n", opt->queue_pairs,
+		       (opt->queue_pairs * 2 + 1));
 		return -EINVAL;
 	}
 
@@ -331,6 +356,8 @@ test_inference_opt_dump(struct ml_options *opt)
 	/* dump test opts */
 	ml_dump("repetitions", "%" PRIu64, opt->repetitions);
 	ml_dump("burst_size", "%u", opt->burst_size);
+	ml_dump("queue_pairs", "%u", opt->queue_pairs);
+	ml_dump("queue_size", "%u", opt->queue_size);
 
 	ml_dump_begin("filelist");
 	for (i = 0; i < opt->nb_filelist; i++) {
@@ -422,23 +449,31 @@ ml_inference_mldev_setup(struct ml_test *test, struct ml_options *opt)
 {
 	struct rte_ml_dev_qp_conf qp_conf;
 	struct test_inference *t;
+	uint16_t qp_id;
 	int ret;
 
 	t = ml_test_priv(test);
 
+	RTE_SET_USED(t);
+
 	ret = ml_test_device_configure(test, opt);
 	if (ret != 0)
 		return ret;
 
 	/* setup queue pairs */
-	qp_conf.nb_desc = t->cmn.dev_info.max_desc;
+	qp_conf.nb_desc = opt->queue_size;
 	qp_conf.cb = NULL;
 
-	ret = rte_ml_dev_queue_pair_setup(opt->dev_id, 0, &qp_conf, opt->socket_id);
-	if (ret != 0) {
-		ml_err("Failed to setup ml device queue-pair, dev_id = %d, qp_id = %u\n",
-		       opt->dev_id, 0);
-		goto error;
+	for (qp_id = 0; qp_id < opt->queue_pairs; qp_id++) {
+		qp_conf.nb_desc = opt->queue_size;
+		qp_conf.cb = NULL;
+
+		ret = rte_ml_dev_queue_pair_setup(opt->dev_id, qp_id, &qp_conf, opt->socket_id);
+		if (ret != 0) {
+			ml_err("Failed to setup ml device queue-pair, dev_id = %d, qp_id = %u\n",
+			       opt->dev_id, qp_id);
+			return ret;
+		}
 	}
 
 	ret = ml_test_device_start(test, opt);
@@ -700,14 +735,28 @@ ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, uint16_t
 {
 	struct test_inference *t = ml_test_priv(test);
 	uint32_t lcore_id;
+	uint32_t nb_reqs;
 	uint32_t id = 0;
+	uint32_t qp_id;
+
+	nb_reqs = opt->repetitions / opt->queue_pairs;
 
 	RTE_LCORE_FOREACH_WORKER(lcore_id) {
-		if (id == 2)
+		if (id >= opt->queue_pairs * 2)
 			break;
 
-		t->args[lcore_id].nb_reqs = opt->repetitions;
+		qp_id = id / 2;
+		t->args[lcore_id].qp_id = qp_id;
+		t->args[lcore_id].nb_reqs = nb_reqs;
+		if (qp_id == 0)
+			t->args[lcore_id].nb_reqs += opt->repetitions - nb_reqs * opt->queue_pairs;
+
+		if (t->args[lcore_id].nb_reqs == 0) {
+			id++;
+			break;
+		}
+
 		t->args[lcore_id].start_fid = start_fid;
 		t->args[lcore_id].end_fid = end_fid;
diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h
index da800f2bd4..81d9b07d41 100644
--- a/app/test-mldev/test_inference_common.h
+++ b/app/test-mldev/test_inference_common.h
@@ -22,6 +22,7 @@ struct ml_core_args {
 	uint64_t nb_reqs;
 	uint16_t start_fid;
 	uint16_t end_fid;
+	uint32_t qp_id;
 
 	struct rte_ml_op **enq_ops;
 	struct rte_ml_op **deq_ops;
-- 
2.17.1