* [PATCH v1 1/1] app/mldev: enable support for pre-quantized I/O
@ 2023-10-02 10:02 Srikanth Yalavarthi
2023-10-03 6:01 ` Shivah Shankar Shankar Narayan Rao
2023-10-26 12:49 ` [PATCH v2 " Srikanth Yalavarthi
0 siblings, 2 replies; 5+ messages in thread
From: Srikanth Yalavarthi @ 2023-10-02 10:02 UTC (permalink / raw)
To: Srikanth Yalavarthi; +Cc: dev, sshankarnara, aprabhu, ptakkar
From: Anup Prabhu <aprabhu@marvell.com>
Enabled support for pre-quantized input and output in ML
test application.
Signed-off-by: Anup Prabhu <aprabhu@marvell.com>
---
Depends-on: series-29710 ("Spec changes to support multi I/O models")
app/test-mldev/ml_options.c | 8 ++++++++
app/test-mldev/ml_options.h | 28 ++++++++++++++------------
app/test-mldev/test_inference_common.c | 20 ++++++++++++------
doc/guides/tools/testmldev.rst | 3 +++
4 files changed, 40 insertions(+), 19 deletions(-)
diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c
index eeaffec399..7d24f7e2f0 100644
--- a/app/test-mldev/ml_options.c
+++ b/app/test-mldev/ml_options.c
@@ -24,6 +24,7 @@ ml_options_default(struct ml_options *opt)
opt->dev_id = 0;
opt->socket_id = SOCKET_ID_ANY;
opt->nb_filelist = 0;
+ opt->quantized_io = false;
opt->repetitions = 1;
opt->burst_size = 1;
opt->queue_pairs = 1;
@@ -269,6 +270,7 @@ static struct option lgopts[] = {
{ML_SOCKET_ID, 1, 0, 0},
{ML_MODELS, 1, 0, 0},
{ML_FILELIST, 1, 0, 0},
+ {ML_QUANTIZED_IO, 0, 0, 0},
{ML_REPETITIONS, 1, 0, 0},
{ML_BURST_SIZE, 1, 0, 0},
{ML_QUEUE_PAIRS, 1, 0, 0},
@@ -316,6 +318,11 @@ ml_options_parse(struct ml_options *opt, int argc, char **argv)
while ((opts = getopt_long(argc, argv, "", lgopts, &opt_idx)) != EOF) {
switch (opts) {
case 0: /* parse long options */
+ if (!strcmp(lgopts[opt_idx].name, "quantized_io")) {
+ opt->quantized_io = true;
+ break;
+ }
+
if (!strcmp(lgopts[opt_idx].name, "stats")) {
opt->stats = true;
break;
@@ -360,4 +367,5 @@ ml_options_dump(struct ml_options *opt)
ml_dump("socket_id", "%d", opt->socket_id);
ml_dump("debug", "%s", (opt->debug ? "true" : "false"));
+ ml_dump("quantized_io", "%s", (opt->quantized_io ? "true" : "false"));
}
diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h
index 90e22adeac..edb9dba8f7 100644
--- a/app/test-mldev/ml_options.h
+++ b/app/test-mldev/ml_options.h
@@ -12,19 +12,20 @@
#define ML_TEST_MAX_MODELS 8
/* Options names */
-#define ML_TEST ("test")
-#define ML_DEVICE_ID ("dev_id")
-#define ML_SOCKET_ID ("socket_id")
-#define ML_MODELS ("models")
-#define ML_FILELIST ("filelist")
-#define ML_REPETITIONS ("repetitions")
-#define ML_BURST_SIZE ("burst_size")
-#define ML_QUEUE_PAIRS ("queue_pairs")
-#define ML_QUEUE_SIZE ("queue_size")
-#define ML_TOLERANCE ("tolerance")
-#define ML_STATS ("stats")
-#define ML_DEBUG ("debug")
-#define ML_HELP ("help")
+#define ML_TEST ("test")
+#define ML_DEVICE_ID ("dev_id")
+#define ML_SOCKET_ID ("socket_id")
+#define ML_MODELS ("models")
+#define ML_FILELIST ("filelist")
+#define ML_QUANTIZED_IO ("quantized_io")
+#define ML_REPETITIONS ("repetitions")
+#define ML_BURST_SIZE ("burst_size")
+#define ML_QUEUE_PAIRS ("queue_pairs")
+#define ML_QUEUE_SIZE ("queue_size")
+#define ML_TOLERANCE ("tolerance")
+#define ML_STATS ("stats")
+#define ML_DEBUG ("debug")
+#define ML_HELP ("help")
struct ml_filelist {
char model[PATH_MAX];
@@ -46,6 +47,7 @@ struct ml_options {
float tolerance;
bool stats;
bool debug;
+ bool quantized_io;
};
void ml_options_default(struct ml_options *opt);
diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c
index 846f71abb1..36629210ee 100644
--- a/app/test-mldev/test_inference_common.c
+++ b/app/test-mldev/test_inference_common.c
@@ -777,14 +777,22 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, uint16_t
}
t->model[fid].inp_dsize = 0;
- for (i = 0; i < t->model[fid].info.nb_inputs; i++)
- t->model[fid].inp_dsize +=
- t->model[fid].info.input_info[i].nb_elements * sizeof(float);
+ for (i = 0; i < t->model[fid].info.nb_inputs; i++) {
+ if (opt->quantized_io)
+ t->model[fid].inp_dsize += t->model[fid].info.input_info[i].size;
+ else
+ t->model[fid].inp_dsize +=
+ t->model[fid].info.input_info[i].nb_elements * sizeof(float);
+ }
t->model[fid].out_dsize = 0;
- for (i = 0; i < t->model[fid].info.nb_outputs; i++)
- t->model[fid].out_dsize +=
- t->model[fid].info.output_info[i].nb_elements * sizeof(float);
+ for (i = 0; i < t->model[fid].info.nb_outputs; i++) {
+ if (opt->quantized_io)
+ t->model[fid].out_dsize += t->model[fid].info.output_info[i].size;
+ else
+ t->model[fid].out_dsize +=
+ t->model[fid].info.output_info[i].nb_elements * sizeof(float);
+ }
/* allocate buffer for user data */
mz_size = t->model[fid].inp_dsize + t->model[fid].out_dsize;
diff --git a/doc/guides/tools/testmldev.rst b/doc/guides/tools/testmldev.rst
index 9b1565a457..55e26eed08 100644
--- a/doc/guides/tools/testmldev.rst
+++ b/doc/guides/tools/testmldev.rst
@@ -89,6 +89,9 @@ The following are the command-line options supported by the test application.
A suffix ``.q`` is appended to quantized output filename.
Maximum number of filelist entries supported by the test is ``8``.
+``--quantized_io``
+ Disable IO quantization and dequantization.
+
``--repetitions <n>``
Set the number of inference repetitions to be executed in the test per each model.
Default value is ``1``.
--
2.41.0
* RE: [PATCH v1 1/1] app/mldev: enable support for pre-quantized I/O
2023-10-02 10:02 [PATCH v1 1/1] app/mldev: enable support for pre-quantized I/O Srikanth Yalavarthi
@ 2023-10-03 6:01 ` Shivah Shankar Shankar Narayan Rao
2023-10-26 12:49 ` [PATCH v2 " Srikanth Yalavarthi
1 sibling, 0 replies; 5+ messages in thread
From: Shivah Shankar Shankar Narayan Rao @ 2023-10-03 6:01 UTC (permalink / raw)
To: Srikanth Yalavarthi, Srikanth Yalavarthi; +Cc: dev, Anup Prabhu, Prince Takkar
> -----Original Message-----
> From: Srikanth Yalavarthi <syalavarthi@marvell.com>
> Sent: Monday, October 2, 2023 3:32 PM
> To: Srikanth Yalavarthi <syalavarthi@marvell.com>
> Cc: dev@dpdk.org; Shivah Shankar Shankar Narayan Rao
> <sshankarnara@marvell.com>; Anup Prabhu <aprabhu@marvell.com>;
> Prince Takkar <ptakkar@marvell.com>
> Subject: [PATCH v1 1/1] app/mldev: enable support for pre-quantized I/O
>
> From: Anup Prabhu <aprabhu@marvell.com>
>
> Enabled support for pre-quantized input and output in ML test application.
>
> Signed-off-by: Anup Prabhu <aprabhu@marvell.com>
Acked-by: Shivah Shankar S <sshankarnara@marvell.com>
* [PATCH v2 1/1] app/mldev: enable support for pre-quantized I/O
2023-10-02 10:02 [PATCH v1 1/1] app/mldev: enable support for pre-quantized I/O Srikanth Yalavarthi
2023-10-03 6:01 ` Shivah Shankar Shankar Narayan Rao
@ 2023-10-26 12:49 ` Srikanth Yalavarthi
2023-10-30 5:15 ` Shivah Shankar Shankar Narayan Rao
1 sibling, 1 reply; 5+ messages in thread
From: Srikanth Yalavarthi @ 2023-10-26 12:49 UTC (permalink / raw)
To: Srikanth Yalavarthi; +Cc: dev, sshankarnara, aprabhu, ptakkar
From: Anup Prabhu <aprabhu@marvell.com>
Enabled support for pre-quantized input and output in ML
test application.
Signed-off-by: Anup Prabhu <aprabhu@marvell.com>
---
v2:
- Updated application help
v1:
- Initial changes
app/test-mldev/ml_options.c | 11 +++++++++-
app/test-mldev/ml_options.h | 28 ++++++++++++++------------
app/test-mldev/test_inference_common.c | 20 ++++++++++++------
doc/guides/tools/testmldev.rst | 3 +++
4 files changed, 42 insertions(+), 20 deletions(-)
diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c
index eeaffec399..320f6325ae 100644
--- a/app/test-mldev/ml_options.c
+++ b/app/test-mldev/ml_options.c
@@ -24,6 +24,7 @@ ml_options_default(struct ml_options *opt)
opt->dev_id = 0;
opt->socket_id = SOCKET_ID_ANY;
opt->nb_filelist = 0;
+ opt->quantized_io = false;
opt->repetitions = 1;
opt->burst_size = 1;
opt->queue_pairs = 1;
@@ -243,7 +244,8 @@ ml_dump_test_options(const char *testname)
"\t\t--queue_pairs : number of queue pairs to create\n"
"\t\t--queue_size : size of queue-pair\n"
"\t\t--tolerance : maximum tolerance (%%) for output validation\n"
- "\t\t--stats : enable reporting device and model statistics\n");
+ "\t\t--stats : enable reporting device and model statistics\n"
+ "\t\t--quantized_io : skip input/output quantization\n");
printf("\n");
}
}
@@ -269,6 +271,7 @@ static struct option lgopts[] = {
{ML_SOCKET_ID, 1, 0, 0},
{ML_MODELS, 1, 0, 0},
{ML_FILELIST, 1, 0, 0},
+ {ML_QUANTIZED_IO, 0, 0, 0},
{ML_REPETITIONS, 1, 0, 0},
{ML_BURST_SIZE, 1, 0, 0},
{ML_QUEUE_PAIRS, 1, 0, 0},
@@ -316,6 +319,11 @@ ml_options_parse(struct ml_options *opt, int argc, char **argv)
while ((opts = getopt_long(argc, argv, "", lgopts, &opt_idx)) != EOF) {
switch (opts) {
case 0: /* parse long options */
+ if (!strcmp(lgopts[opt_idx].name, "quantized_io")) {
+ opt->quantized_io = true;
+ break;
+ }
+
if (!strcmp(lgopts[opt_idx].name, "stats")) {
opt->stats = true;
break;
@@ -360,4 +368,5 @@ ml_options_dump(struct ml_options *opt)
ml_dump("socket_id", "%d", opt->socket_id);
ml_dump("debug", "%s", (opt->debug ? "true" : "false"));
+ ml_dump("quantized_io", "%s", (opt->quantized_io ? "true" : "false"));
}
diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h
index 90e22adeac..edb9dba8f7 100644
--- a/app/test-mldev/ml_options.h
+++ b/app/test-mldev/ml_options.h
@@ -12,19 +12,20 @@
#define ML_TEST_MAX_MODELS 8
/* Options names */
-#define ML_TEST ("test")
-#define ML_DEVICE_ID ("dev_id")
-#define ML_SOCKET_ID ("socket_id")
-#define ML_MODELS ("models")
-#define ML_FILELIST ("filelist")
-#define ML_REPETITIONS ("repetitions")
-#define ML_BURST_SIZE ("burst_size")
-#define ML_QUEUE_PAIRS ("queue_pairs")
-#define ML_QUEUE_SIZE ("queue_size")
-#define ML_TOLERANCE ("tolerance")
-#define ML_STATS ("stats")
-#define ML_DEBUG ("debug")
-#define ML_HELP ("help")
+#define ML_TEST ("test")
+#define ML_DEVICE_ID ("dev_id")
+#define ML_SOCKET_ID ("socket_id")
+#define ML_MODELS ("models")
+#define ML_FILELIST ("filelist")
+#define ML_QUANTIZED_IO ("quantized_io")
+#define ML_REPETITIONS ("repetitions")
+#define ML_BURST_SIZE ("burst_size")
+#define ML_QUEUE_PAIRS ("queue_pairs")
+#define ML_QUEUE_SIZE ("queue_size")
+#define ML_TOLERANCE ("tolerance")
+#define ML_STATS ("stats")
+#define ML_DEBUG ("debug")
+#define ML_HELP ("help")
struct ml_filelist {
char model[PATH_MAX];
@@ -46,6 +47,7 @@ struct ml_options {
float tolerance;
bool stats;
bool debug;
+ bool quantized_io;
};
void ml_options_default(struct ml_options *opt);
diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c
index 846f71abb1..36629210ee 100644
--- a/app/test-mldev/test_inference_common.c
+++ b/app/test-mldev/test_inference_common.c
@@ -777,14 +777,22 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, uint16_t
}
t->model[fid].inp_dsize = 0;
- for (i = 0; i < t->model[fid].info.nb_inputs; i++)
- t->model[fid].inp_dsize +=
- t->model[fid].info.input_info[i].nb_elements * sizeof(float);
+ for (i = 0; i < t->model[fid].info.nb_inputs; i++) {
+ if (opt->quantized_io)
+ t->model[fid].inp_dsize += t->model[fid].info.input_info[i].size;
+ else
+ t->model[fid].inp_dsize +=
+ t->model[fid].info.input_info[i].nb_elements * sizeof(float);
+ }
t->model[fid].out_dsize = 0;
- for (i = 0; i < t->model[fid].info.nb_outputs; i++)
- t->model[fid].out_dsize +=
- t->model[fid].info.output_info[i].nb_elements * sizeof(float);
+ for (i = 0; i < t->model[fid].info.nb_outputs; i++) {
+ if (opt->quantized_io)
+ t->model[fid].out_dsize += t->model[fid].info.output_info[i].size;
+ else
+ t->model[fid].out_dsize +=
+ t->model[fid].info.output_info[i].nb_elements * sizeof(float);
+ }
/* allocate buffer for user data */
mz_size = t->model[fid].inp_dsize + t->model[fid].out_dsize;
diff --git a/doc/guides/tools/testmldev.rst b/doc/guides/tools/testmldev.rst
index 9b1565a457..55e26eed08 100644
--- a/doc/guides/tools/testmldev.rst
+++ b/doc/guides/tools/testmldev.rst
@@ -89,6 +89,9 @@ The following are the command-line options supported by the test application.
A suffix ``.q`` is appended to quantized output filename.
Maximum number of filelist entries supported by the test is ``8``.
+``--quantized_io``
+ Disable IO quantization and dequantization.
+
``--repetitions <n>``
Set the number of inference repetitions to be executed in the test per each model.
Default value is ``1``.
--
2.42.0
* RE: [PATCH v2 1/1] app/mldev: enable support for pre-quantized I/O
2023-10-26 12:49 ` [PATCH v2 " Srikanth Yalavarthi
@ 2023-10-30 5:15 ` Shivah Shankar Shankar Narayan Rao
2023-11-14 14:08 ` Thomas Monjalon
0 siblings, 1 reply; 5+ messages in thread
From: Shivah Shankar Shankar Narayan Rao @ 2023-10-30 5:15 UTC (permalink / raw)
To: Srikanth Yalavarthi, Srikanth Yalavarthi; +Cc: dev, Anup Prabhu, Prince Takkar
> -----Original Message-----
> From: Srikanth Yalavarthi <syalavarthi@marvell.com>
> Sent: Thursday, October 26, 2023 6:20 PM
> To: Srikanth Yalavarthi <syalavarthi@marvell.com>
> Cc: dev@dpdk.org; Shivah Shankar Shankar Narayan Rao
> <sshankarnara@marvell.com>; Anup Prabhu <aprabhu@marvell.com>;
> Prince Takkar <ptakkar@marvell.com>
> Subject: [PATCH v2 1/1] app/mldev: enable support for pre-quantized I/O
>
> From: Anup Prabhu <aprabhu@marvell.com>
>
> Enabled support for pre-quantized input and output in ML test application.
>
> Signed-off-by: Anup Prabhu <aprabhu@marvell.com>
Acked-by: Shivah Shankar S <sshankarnara@marvell.com>
* Re: [PATCH v2 1/1] app/mldev: enable support for pre-quantized I/O
2023-10-30 5:15 ` Shivah Shankar Shankar Narayan Rao
@ 2023-11-14 14:08 ` Thomas Monjalon
0 siblings, 0 replies; 5+ messages in thread
From: Thomas Monjalon @ 2023-11-14 14:08 UTC (permalink / raw)
To: Anup Prabhu
Cc: Srikanth Yalavarthi, Srikanth Yalavarthi, dev, Prince Takkar,
Shivah Shankar Shankar Narayan Rao
30/10/2023 06:15, Shivah Shankar Shankar Narayan Rao:
> From: Srikanth Yalavarthi <syalavarthi@marvell.com>
> > From: Anup Prabhu <aprabhu@marvell.com>
> >
> > Enabled support for pre-quantized input and output in ML test application.
> >
> > Signed-off-by: Anup Prabhu <aprabhu@marvell.com>
> Acked-by: Shivah Shankar S <sshankarnara@marvell.com>
Applied, thanks.