DPDK patches and discussions
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi <syalavarthi@marvell.com>,
	Prince Takkar <ptakkar@marvell.com>
Cc: <dev@dpdk.org>, <sshankarnara@marvell.com>, <aprabhu@marvell.com>,
	<pshukla@marvell.com>
Subject: [PATCH v3 8/8] ml/cnxk: reduce levels of nested variables access
Date: Thu, 16 Mar 2023 14:29:04 -0700	[thread overview]
Message-ID: <20230316212904.9318-9-syalavarthi@marvell.com> (raw)
In-Reply-To: <20230316212904.9318-1-syalavarthi@marvell.com>

Reduce the number of levels needed to access nested structure
members. Use existing local variables, or add new local pointer
variables, so that accesses stay uniform across the code.

Fixes: 298b2af4267f ("ml/cnxk: add internal structures for derived info")
Fixes: 0b9c0768ce2b ("ml/cnxk: support model query")

Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 drivers/ml/cnxk/cn10k_ml_model.c | 48 ++++++++++++++++----------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c
index ceffde8459..2ded05c5dc 100644
--- a/drivers/ml/cnxk/cn10k_ml_model.c
+++ b/drivers/ml/cnxk/cn10k_ml_model.c
@@ -272,8 +272,8 @@ cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer, uint8_
 	addr->total_input_sz_q = 0;
 	for (i = 0; i < metadata->model.num_input; i++) {
 		addr->input[i].nb_elements =
-			model->metadata.input[i].shape.w * model->metadata.input[i].shape.x *
-			model->metadata.input[i].shape.y * model->metadata.input[i].shape.z;
+			metadata->input[i].shape.w * metadata->input[i].shape.x *
+			metadata->input[i].shape.y * metadata->input[i].shape.z;
 		addr->input[i].sz_d = addr->input[i].nb_elements *
 				      rte_ml_io_type_size_get(metadata->input[i].input_type);
 		addr->input[i].sz_q = addr->input[i].nb_elements *
@@ -360,52 +360,52 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, ui
 void
 cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model)
 {
+	struct cn10k_ml_model_metadata *metadata;
 	struct rte_ml_model_info *info;
 	struct rte_ml_io_info *output;
 	struct rte_ml_io_info *input;
 	uint8_t i;
 
+	metadata = &model->metadata;
 	info = PLT_PTR_CAST(model->info);
 	input = PLT_PTR_ADD(info, sizeof(struct rte_ml_model_info));
-	output =
-		PLT_PTR_ADD(input, model->metadata.model.num_input * sizeof(struct rte_ml_io_info));
+	output = PLT_PTR_ADD(input, metadata->model.num_input * sizeof(struct rte_ml_io_info));
 
 	/* Set model info */
 	memset(info, 0, sizeof(struct rte_ml_model_info));
-	rte_memcpy(info->name, model->metadata.model.name, MRVL_ML_MODEL_NAME_LEN);
-	snprintf(info->version, RTE_ML_STR_MAX, "%u.%u.%u.%u", model->metadata.model.version[0],
-		 model->metadata.model.version[1], model->metadata.model.version[2],
-		 model->metadata.model.version[3]);
+	rte_memcpy(info->name, metadata->model.name, MRVL_ML_MODEL_NAME_LEN);
+	snprintf(info->version, RTE_ML_STR_MAX, "%u.%u.%u.%u", metadata->model.version[0],
+		 metadata->model.version[1], metadata->model.version[2],
+		 metadata->model.version[3]);
 	info->model_id = model->model_id;
 	info->device_id = dev->data->dev_id;
 	info->batch_size = model->batch_size;
-	info->nb_inputs = model->metadata.model.num_input;
+	info->nb_inputs = metadata->model.num_input;
 	info->input_info = input;
-	info->nb_outputs = model->metadata.model.num_output;
+	info->nb_outputs = metadata->model.num_output;
 	info->output_info = output;
-	info->wb_size = model->metadata.weights_bias.file_size;
+	info->wb_size = metadata->weights_bias.file_size;
 
 	/* Set input info */
 	for (i = 0; i < info->nb_inputs; i++) {
-		rte_memcpy(input[i].name, model->metadata.input[i].input_name,
-			   MRVL_ML_INPUT_NAME_LEN);
-		input[i].dtype = model->metadata.input[i].input_type;
-		input[i].qtype = model->metadata.input[i].model_input_type;
-		input[i].shape.format = model->metadata.input[i].shape.format;
-		input[i].shape.w = model->metadata.input[i].shape.w;
-		input[i].shape.x = model->metadata.input[i].shape.x;
-		input[i].shape.y = model->metadata.input[i].shape.y;
-		input[i].shape.z = model->metadata.input[i].shape.z;
+		rte_memcpy(input[i].name, metadata->input[i].input_name, MRVL_ML_INPUT_NAME_LEN);
+		input[i].dtype = metadata->input[i].input_type;
+		input[i].qtype = metadata->input[i].model_input_type;
+		input[i].shape.format = metadata->input[i].shape.format;
+		input[i].shape.w = metadata->input[i].shape.w;
+		input[i].shape.x = metadata->input[i].shape.x;
+		input[i].shape.y = metadata->input[i].shape.y;
+		input[i].shape.z = metadata->input[i].shape.z;
 	}
 
 	/* Set output info */
 	for (i = 0; i < info->nb_outputs; i++) {
-		rte_memcpy(output[i].name, model->metadata.output[i].output_name,
+		rte_memcpy(output[i].name, metadata->output[i].output_name,
 			   MRVL_ML_OUTPUT_NAME_LEN);
-		output[i].dtype = model->metadata.output[i].output_type;
-		output[i].qtype = model->metadata.output[i].model_output_type;
+		output[i].dtype = metadata->output[i].output_type;
+		output[i].qtype = metadata->output[i].model_output_type;
 		output[i].shape.format = RTE_ML_IO_FORMAT_1D;
-		output[i].shape.w = model->metadata.output[i].size;
+		output[i].shape.w = metadata->output[i].size;
 		output[i].shape.x = 1;
 		output[i].shape.y = 1;
 		output[i].shape.z = 1;
-- 
2.17.1



Thread overview: 15+ messages
2023-03-15 13:54 [PATCH 1/1] ml/cnxk: fix multiple coverity issues Srikanth Yalavarthi
2023-03-16  9:33 ` [PATCH v2 " Srikanth Yalavarthi
2023-03-16 17:00   ` Thomas Monjalon
2023-03-16 17:02     ` [EXT] " Srikanth Yalavarthi
2023-03-16 17:07       ` Thomas Monjalon
2023-03-16 21:28 ` [PATCH v3 0/8] Fixes to ml/cnxk driver Srikanth Yalavarthi
2023-03-16 21:28   ` [PATCH v3 1/8] ml/cnxk: fix evaluation order violation issues Srikanth Yalavarthi
2023-03-16 21:28   ` [PATCH v3 2/8] ml/cnxk: fix potential division by zero Srikanth Yalavarthi
2023-03-16 21:28   ` [PATCH v3 3/8] ml/cnxk: add pointer check after memory allocation Srikanth Yalavarthi
2023-03-16 21:29   ` [PATCH v3 4/8] ml/cnxk: remove logically dead code Srikanth Yalavarthi
2023-03-16 21:29   ` [PATCH v3 5/8] ml/cnxk: fix potential memory leak in xstats Srikanth Yalavarthi
2023-03-16 21:29   ` [PATCH v3 6/8] ml/cnxk: check for null pointer before dereference Srikanth Yalavarthi
2023-03-16 21:29   ` [PATCH v3 7/8] ml/cnxk: avoid variable name reuse in a function Srikanth Yalavarthi
2023-03-16 21:29   ` Srikanth Yalavarthi [this message]
2023-03-19 19:01   ` [PATCH v3 0/8] Fixes to ml/cnxk driver Thomas Monjalon
