DPDK patches and discussions
* [PATCH v2 00/10] test-bbdev changes for 23.11
@ 2023-10-27 22:57 Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 01/10] test/bbdev: fix python script subprocess Nicolas Chautru
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable, Nicolas Chautru

v2: added fixes requested by Maxime for some of the commits.

Update test-bbdev for 23.11.

Hernan Vargas (9):
  test/bbdev: fix python script subprocess
  test/bbdev: update python script parameters
  test/bbdev: handle exception for LLR generation
  test/bbdev: improve test log messages
  test/bbdev: assert failed test for queue configure
  test/bbdev: ldpc encoder concatenation vector
  test/bbdev: add MLD support
  test/bbdev: support new FFT capabilities
  test/bbdev: support 4 bit LLR compression

Nicolas Chautru (1):
  test/bbdev: rename macros from acc200 to vrb

 app/test-bbdev/main.c              |   3 +-
 app/test-bbdev/test-bbdev.py       |  51 ++-
 app/test-bbdev/test_bbdev.c        |   3 +-
 app/test-bbdev/test_bbdev_perf.c   | 673 ++++++++++++++++++++++++++---
 app/test-bbdev/test_bbdev_vector.c | 199 ++++++++-
 app/test-bbdev/test_bbdev_vector.h |   1 +
 6 files changed, 838 insertions(+), 92 deletions(-)

-- 
2.34.1



* [PATCH v2 01/10] test/bbdev: fix python script subprocess
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 02/10] test/bbdev: update python script parameters Nicolas Chautru
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

test-bbdev.py relies on the non-recommended subprocess Popen interface.
This can lead to instabilities where the process cannot be stopped with
SIGTERM.
Use subprocess.run with a proper timeout argument instead.
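
For reference, a minimal sketch of this pattern (illustration only, not
the exact test-bbdev.py code; the command line below is hypothetical):

    import subprocess

    cmd = ["./dpdk-test-bbdev", "--", "-n", "64"]  # hypothetical command
    try:
        # run() kills the child and re-raises once the timeout expires,
        # so no manual Timer/kill handling is needed.
        result = subprocess.run(cmd, timeout=600, universal_newlines=True)
    except subprocess.TimeoutExpired:
        print("Test timed out")
    else:
        if result.returncode != 0:
            print("Test failed, return code {}".format(result.returncode))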

Fixes: f714a18885a6 ("app/testbbdev: add test application for bbdev")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
---
 app/test-bbdev/test-bbdev.py | 29 +++++++++++++----------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/app/test-bbdev/test-bbdev.py b/app/test-bbdev/test-bbdev.py
index 291c80b0f5..02c678a360 100755
--- a/app/test-bbdev/test-bbdev.py
+++ b/app/test-bbdev/test-bbdev.py
@@ -91,21 +91,18 @@ def kill(process):
         params_string = " ".join(call_params)
 
         print("Executing: {}".format(params_string))
-        app_proc = subprocess.Popen(call_params)
-        if args.timeout > 0:
-            timer = Timer(args.timeout, kill, [app_proc])
-            timer.start()
-
         try:
-            app_proc.communicate()
-        except:
-            print("Error: failed to execute: {}".format(params_string))
-        finally:
-            timer.cancel()
-
-        if app_proc.returncode != 0:
-            exit_status = 1
-            print("ERROR TestCase failed. Failed test for vector {}. Return code: {}".format(
-                vector, app_proc.returncode))
-
+            output = subprocess.run(call_params, timeout=args.timeout, universal_newlines=True)
+        except subprocess.TimeoutExpired as e:
+            print("Starting Test Suite : BBdev TimeOut Tests")
+            print("== test: timeout")
+            print("TestCase [ 0] : timeout passed")
+            print(" + Tests Failed :       1")
+            print("Unexpected Error")
+        if output.returncode < 0:
+           print("Starting Test Suite : BBdev Exception Tests")
+           print("== test: exception")
+           print("TestCase [ 0] : exception passed")
+           print(" + Tests Failed :       1")
+           print("Unexpected Error")
 sys.exit(exit_status)
-- 
2.34.1



* [PATCH v2 02/10] test/bbdev: update python script parameters
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 01/10] test/bbdev: fix python script subprocess Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 03/10] test/bbdev: rename macros from acc200 to vrb Nicolas Chautru
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Update the timeout argument and default values.
Update the EAL help message and default value.
Add iter_max and snr arguments.
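
As an illustration of how the new options flow through the wrapper (a
hedged sketch, not the exact script code; the option values are
hypothetical), arguments after the "--" separator are forwarded to
dpdk-test-bbdev:

    # e.g. ./test-bbdev.py -T 600 -s 12 -t 8
    params = ["--"]
    params.extend(["-s", str(12)])  # SNR in dB for BLER tests
    params.extend(["-t", str(8)])   # max number of iterations (iter_max)
    # Note: the script-level timeout now uses -T, while -t is reused for
    # iter_max and is passed through as the test app's own -t option.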

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
---
 app/test-bbdev/test-bbdev.py     | 22 ++++++++++++++++++----
 app/test-bbdev/test_bbdev_perf.c |  2 +-
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/app/test-bbdev/test-bbdev.py b/app/test-bbdev/test-bbdev.py
index 02c678a360..d1fed046c4 100755
--- a/app/test-bbdev/test-bbdev.py
+++ b/app/test-bbdev/test-bbdev.py
@@ -25,12 +25,12 @@ def kill(process):
                     help="specifies path to the bbdev test app",
                     default=dpdk_path + "/" + dpdk_target + "/app/dpdk-test-bbdev")
 parser.add_argument("-e", "--eal-params",
-                    help="EAL arguments which are passed to the test app",
-                    default="--vdev=baseband_null0")
-parser.add_argument("-t", "--timeout",
+                    help="EAL arguments which must be passed to the test app",
+                    default="--vdev=baseband_null0 -a00:00.0")
+parser.add_argument("-T", "--timeout",
                     type=int,
                     help="Timeout in seconds",
-                    default=300)
+                    default=600)
 parser.add_argument("-c", "--test-cases",
                     nargs="+",
                     help="Defines test cases to run. Run all if not specified")
@@ -48,6 +48,14 @@ def kill(process):
                     type=int,
                     help="Operations enqueue/dequeue burst size.",
                     default=[32])
+parser.add_argument("-s", "--snr",
+                    type=int,
+                    help="SNR in dB for BLER tests",
+                    default=0)
+parser.add_argument("-t", "--iter-max",
+                    type=int,
+                    help="Max iterations",
+                    default=6)
 parser.add_argument("-l", "--num-lcores",
                     type=int,
                     help="Number of lcores to run.",
@@ -68,6 +76,12 @@ def kill(process):
 
 params.extend(["--"])
 
+if args.snr:
+    params.extend(["-s", str(args.snr)])
+
+if args.iter_max:
+    params.extend(["-t", str(args.iter_max)])
+
 if args.num_ops:
     params.extend(["-n", str(args.num_ops)])
 
diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 276bbf0a2e..faea26c10e 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -26,7 +26,7 @@
 
 #define MAX_QUEUES RTE_MAX_LCORE
 #define TEST_REPETITIONS 100
-#define TIME_OUT_POLL 1e8
+#define TIME_OUT_POLL 1e9
 #define WAIT_OFFLOAD_US 1000
 
 #ifdef RTE_BASEBAND_FPGA_LTE_FEC
-- 
2.34.1



* [PATCH v2 03/10] test/bbdev: rename macros from acc200 to vrb
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 01/10] test/bbdev: fix python script subprocess Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 02/10] test/bbdev: update python script parameters Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 04/10] test/bbdev: handle exception for LLR generation Nicolas Chautru
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable, Nicolas Chautru

Rename the ACC200 macros to use the generic Intel vRAN Boost (VRB) naming.
No functional impact.

Fixes: 69a9d9e139d2 ("baseband/acc: rename files from acc200 to vrb")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 app/test-bbdev/test_bbdev_perf.c | 91 ++++++++++++++++----------------
 1 file changed, 45 insertions(+), 46 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index faea26c10e..d4c001de00 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -64,14 +64,14 @@
 #define ACC100_QMGR_INVALID_IDX -1
 #define ACC100_QMGR_RR 1
 #define ACC100_QOS_GBR 0
-#define ACC200PF_DRIVER_NAME   ("intel_acc200_pf")
-#define ACC200VF_DRIVER_NAME   ("intel_acc200_vf")
-#define ACC200_QMGR_NUM_AQS 16
-#define ACC200_QMGR_NUM_QGS 2
-#define ACC200_QMGR_AQ_DEPTH 5
-#define ACC200_QMGR_INVALID_IDX -1
-#define ACC200_QMGR_RR 1
-#define ACC200_QOS_GBR 0
+#define VRBPF_DRIVER_NAME   ("intel_vran_boost_pf")
+#define VRBVF_DRIVER_NAME   ("intel_vran_boost_vf")
+#define VRB_QMGR_NUM_AQS 16
+#define VRB_QMGR_NUM_QGS 2
+#define VRB_QMGR_AQ_DEPTH 5
+#define VRB_QMGR_INVALID_IDX -1
+#define VRB_QMGR_RR 1
+#define VRB_QOS_GBR 0
 #endif
 
 #define OPS_CACHE_SIZE 256U
@@ -794,11 +794,11 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 				info->dev_name);
 	}
 	if ((get_init_device() == true) &&
-		(!strcmp(info->drv.driver_name, ACC200PF_DRIVER_NAME))) {
+		(!strcmp(info->drv.driver_name, VRBPF_DRIVER_NAME))) {
 		struct rte_acc_conf conf;
 		unsigned int i;
 
-		printf("Configure ACC200 FEC Driver %s with default values\n",
+		printf("Configure Driver %s with default values\n",
 				info->drv.driver_name);
 
 		/* clear default configuration before initialization */
@@ -807,52 +807,51 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 		/* Always set in PF mode for built-in configuration */
 		conf.pf_mode_en = true;
 		for (i = 0; i < RTE_ACC_NUM_VFS; ++i) {
-			conf.arb_dl_4g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_dl_4g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_dl_4g[i].round_robin_weight = ACC200_QMGR_RR;
-			conf.arb_ul_4g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_ul_4g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_ul_4g[i].round_robin_weight = ACC200_QMGR_RR;
-			conf.arb_dl_5g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_dl_5g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_dl_5g[i].round_robin_weight = ACC200_QMGR_RR;
-			conf.arb_ul_5g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_ul_5g[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_ul_5g[i].round_robin_weight = ACC200_QMGR_RR;
-			conf.arb_fft[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_fft[i].gbr_threshold1 = ACC200_QOS_GBR;
-			conf.arb_fft[i].round_robin_weight = ACC200_QMGR_RR;
+			conf.arb_dl_4g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_dl_4g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_dl_4g[i].round_robin_weight = VRB_QMGR_RR;
+			conf.arb_ul_4g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_ul_4g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_ul_4g[i].round_robin_weight = VRB_QMGR_RR;
+			conf.arb_dl_5g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_dl_5g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_dl_5g[i].round_robin_weight = VRB_QMGR_RR;
+			conf.arb_ul_5g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_ul_5g[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_ul_5g[i].round_robin_weight = VRB_QMGR_RR;
+			conf.arb_fft[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_fft[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_fft[i].round_robin_weight = VRB_QMGR_RR;
 		}
 
 		conf.input_pos_llr_1_bit = true;
 		conf.output_pos_llr_1_bit = true;
 		conf.num_vf_bundles = 1; /**< Number of VF bundles to setup */
 
-		conf.q_ul_4g.num_qgroups = ACC200_QMGR_NUM_QGS;
-		conf.q_ul_4g.first_qgroup_index = ACC200_QMGR_INVALID_IDX;
-		conf.q_ul_4g.num_aqs_per_groups = ACC200_QMGR_NUM_AQS;
-		conf.q_ul_4g.aq_depth_log2 = ACC200_QMGR_AQ_DEPTH;
-		conf.q_dl_4g.num_qgroups = ACC200_QMGR_NUM_QGS;
-		conf.q_dl_4g.first_qgroup_index = ACC200_QMGR_INVALID_IDX;
-		conf.q_dl_4g.num_aqs_per_groups = ACC200_QMGR_NUM_AQS;
-		conf.q_dl_4g.aq_depth_log2 = ACC200_QMGR_AQ_DEPTH;
-		conf.q_ul_5g.num_qgroups = ACC200_QMGR_NUM_QGS;
-		conf.q_ul_5g.first_qgroup_index = ACC200_QMGR_INVALID_IDX;
-		conf.q_ul_5g.num_aqs_per_groups = ACC200_QMGR_NUM_AQS;
-		conf.q_ul_5g.aq_depth_log2 = ACC200_QMGR_AQ_DEPTH;
-		conf.q_dl_5g.num_qgroups = ACC200_QMGR_NUM_QGS;
-		conf.q_dl_5g.first_qgroup_index = ACC200_QMGR_INVALID_IDX;
-		conf.q_dl_5g.num_aqs_per_groups = ACC200_QMGR_NUM_AQS;
-		conf.q_dl_5g.aq_depth_log2 = ACC200_QMGR_AQ_DEPTH;
-		conf.q_fft.num_qgroups = ACC200_QMGR_NUM_QGS;
-		conf.q_fft.first_qgroup_index = ACC200_QMGR_INVALID_IDX;
-		conf.q_fft.num_aqs_per_groups = ACC200_QMGR_NUM_AQS;
-		conf.q_fft.aq_depth_log2 = ACC200_QMGR_AQ_DEPTH;
+		conf.q_ul_4g.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_ul_4g.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_ul_4g.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_ul_4g.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
+		conf.q_dl_4g.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_dl_4g.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_dl_4g.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_dl_4g.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
+		conf.q_ul_5g.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_ul_5g.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_ul_5g.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_ul_5g.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
+		conf.q_dl_5g.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_dl_5g.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_dl_5g.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_dl_5g.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
+		conf.q_fft.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_fft.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_fft.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
 
 		/* setup PF with configuration information */
 		ret = rte_acc_configure(info->dev_name, &conf);
 		TEST_ASSERT_SUCCESS(ret,
-				"Failed to configure ACC200 PF for bbdev %s",
+				"Failed to configure PF for bbdev %s",
 				info->dev_name);
 	}
 #endif
-- 
2.34.1



* [PATCH v2 04/10] test/bbdev: handle exception for LLR generation
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (2 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 03/10] test/bbdev: rename macros from acc200 to vrb Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 05/10] test/bbdev: improve test log messages Nicolas Chautru
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add a range limit to prevent LLR generation beyond the data buffer
size.

Fixes: 7831a9684356 ("test/bbdev: support BLER for 4G")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
---
 app/test-bbdev/test_bbdev_perf.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index d4c001de00..54cb2090f9 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -1838,6 +1838,12 @@ generate_turbo_llr_input(uint16_t n, struct rte_bbdev_op_data *inputs,
 	range = ref_op->turbo_dec.input.length;
 	N0 = 1.0 / pow(10.0, get_snr() / 10.0);
 
+	if (range > inputs[0].data->data_len) {
+		printf("Warning: Limiting LLR generation to first segment (%d from %d)\n",
+				inputs[0].data->data_len, range);
+		range = inputs[0].data->data_len;
+	}
+
 	for (i = 0; i < n; ++i) {
 		m = inputs[i].data;
 		int8_t *llrs = rte_pktmbuf_mtod_offset(m, int8_t *, 0);
-- 
2.34.1



* [PATCH v2 05/10] test/bbdev: improve test log messages
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (3 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 04/10] test/bbdev: handle exception for LLR generation Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 06/10] test/bbdev: assert failed test for queue configure Nicolas Chautru
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add a print message when retrieving stats from a bbdev fails.
Add the vector name to the logs.
Remove unnecessary prints.
Update code comments and make cosmetic changes.
No functional impact.

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/main.c            |  3 ++-
 app/test-bbdev/test_bbdev_perf.c | 26 ++++++++++++++------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/app/test-bbdev/main.c b/app/test-bbdev/main.c
index ec830eb32b..8f6852e2ef 100644
--- a/app/test-bbdev/main.c
+++ b/app/test-bbdev/main.c
@@ -107,7 +107,8 @@ unit_test_suite_runner(struct unit_test_suite *suite)
 	end = rte_rdtsc_precise();
 
 	printf(" + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +\n");
-	printf(" + Test Suite Summary : %s\n", suite->suite_name);
+	printf(" + Test Suite Summary : %s - %s\n",
+			suite->suite_name, get_vector_filename());
 	printf(" + Tests Total :       %2d\n", total);
 	printf(" + Tests Skipped :     %2d\n", skipped);
 	printf(" + Tests Passed :      %2d\n", succeeded);
diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 54cb2090f9..4f8e226e58 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -721,9 +721,9 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 			conf.vf_dl_queues_number[i] = VF_DL_5G_QUEUE_VALUE;
 		}
 
-		/* UL bandwidth. Needed for schedule algorithm */
+		/* UL bandwidth. Needed only for Vista Creek 5GNR schedule algorithm */
 		conf.ul_bandwidth = UL_5G_BANDWIDTH;
-		/* DL bandwidth */
+		/* DL bandwidth. Needed only for Vista Creek 5GNR schedule algorithm  */
 		conf.dl_bandwidth = DL_5G_BANDWIDTH;
 
 		/* UL & DL load Balance Factor to 64 */
@@ -743,7 +743,7 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 		struct rte_acc_conf conf;
 		unsigned int i;
 
-		printf("Configure ACC100/ACC101 FEC Driver %s with default values\n",
+		printf("Configure ACC100 FEC device %s with default values\n",
 				info->drv.driver_name);
 
 		/* clear default configuration before initialization */
@@ -1047,13 +1047,15 @@ ut_setup(void)
 static void
 ut_teardown(void)
 {
-	uint8_t i, dev_id;
+	uint8_t i, dev_id, ret;
 	struct rte_bbdev_stats stats;
 
 	for (i = 0; i < nb_active_devs; i++) {
 		dev_id = active_devs[i].dev_id;
 		/* read stats and print */
-		rte_bbdev_stats_get(dev_id, &stats);
+		ret = rte_bbdev_stats_get(dev_id, &stats);
+		if (ret != 0)
+			printf("Failed to get stats on bbdev %u\n", dev_id);
 		/* Stop the device */
 		rte_bbdev_stop(dev_id);
 	}
@@ -2227,9 +2229,11 @@ validate_op_harq_chain(struct rte_bbdev_op_data *op,
 				if ((error > 8 && (abs_harq_origin <
 						(llr_max - 16))) ||
 						(error > 16)) {
+					/*
 					printf("HARQ mismatch %d: exp %d act %d => %d\n",
 							j, harq_orig[j],
 							harq_out[jj], error);
+					*/
 					byte_error++;
 					cum_error += error;
 				}
@@ -5270,7 +5274,7 @@ offload_latency_test_fft(struct rte_mempool *mempool, struct test_buffers *bufs,
 			burst_sz = num_to_process - dequeued;
 
 		ret = rte_bbdev_fft_op_alloc_bulk(mempool, ops_enq, burst_sz);
-		TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", burst_sz);
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_fft_op_alloc_bulk() failed");
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_fft_op(ops_enq, burst_sz, dequeued,
 					bufs->inputs,
@@ -5352,7 +5356,7 @@ offload_latency_test_dec(struct rte_mempool *mempool, struct test_buffers *bufs,
 			burst_sz = num_to_process - dequeued;
 
 		ret = rte_bbdev_dec_op_alloc_bulk(mempool, ops_enq, burst_sz);
-		TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", burst_sz);
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_dec_op_alloc_bulk() failed");
 		ref_op->turbo_dec.iter_max = get_iter_max();
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_dec_op(ops_enq, burst_sz, dequeued,
@@ -5439,7 +5443,7 @@ offload_latency_test_ldpc_dec(struct rte_mempool *mempool,
 			burst_sz = num_to_process - dequeued;
 
 		ret = rte_bbdev_dec_op_alloc_bulk(mempool, ops_enq, burst_sz);
-		TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", burst_sz);
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_dec_op_alloc_bulk() failed");
 		ref_op->ldpc_dec.iter_max = get_iter_max();
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_ldpc_dec_op(ops_enq, burst_sz, dequeued,
@@ -5534,8 +5538,7 @@ offload_latency_test_enc(struct rte_mempool *mempool, struct test_buffers *bufs,
 			burst_sz = num_to_process - dequeued;
 
 		ret = rte_bbdev_enc_op_alloc_bulk(mempool, ops_enq, burst_sz);
-		TEST_ASSERT_SUCCESS(ret,
-				"rte_bbdev_enc_op_alloc_bulk() failed");
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_enc_op_alloc_bulk() failed");
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_enc_op(ops_enq, burst_sz, dequeued,
 					bufs->inputs,
@@ -5617,8 +5620,7 @@ offload_latency_test_ldpc_enc(struct rte_mempool *mempool,
 			burst_sz = num_to_process - dequeued;
 
 		ret = rte_bbdev_enc_op_alloc_bulk(mempool, ops_enq, burst_sz);
-		TEST_ASSERT_SUCCESS(ret,
-				"rte_bbdev_enc_op_alloc_bulk() failed");
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_enc_op_alloc_bulk() failed");
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_ldpc_enc_op(ops_enq, burst_sz, dequeued,
 					bufs->inputs,
-- 
2.34.1



* [PATCH v2 06/10] test/bbdev: assert failed test for queue configure
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (4 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 05/10] test/bbdev: improve test log messages Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 07/10] test/bbdev: ldpc encoder concatenation vector Nicolas Chautru
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Stop the test if rte_bbdev_queue_configure fails to configure the queue.

Fixes: f714a18885a6 ("app/testbbdev: add test application for bbdev")
Cc: stable@dpdk.org

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/app/test-bbdev/test_bbdev.c b/app/test-bbdev/test_bbdev.c
index 65805977ae..cf224dca5d 100644
--- a/app/test-bbdev/test_bbdev.c
+++ b/app/test-bbdev/test_bbdev.c
@@ -366,7 +366,8 @@ test_bbdev_configure_stop_queue(void)
 	 * - queue should be started if deferred_start ==
 	 */
 	ts_params->qconf.deferred_start = 0;
-	rte_bbdev_queue_configure(dev_id, queue_id, &ts_params->qconf);
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_configure(dev_id, queue_id, &ts_params->qconf),
+			"Failed test for rte_bbdev_queue_configure");
 	rte_bbdev_start(dev_id);
 
 	TEST_ASSERT_SUCCESS(return_value = rte_bbdev_queue_info_get(dev_id,
-- 
2.34.1



* [PATCH v2 07/10] test/bbdev: ldpc encoder concatenation vector
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (5 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 06/10] test/bbdev: assert failed test for queue configure Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 08/10] test/bbdev: add MLD support Nicolas Chautru
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add support for LDPC encoder concatenation configuration from the test
vector.

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev_vector.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index c26727cd35..0ef1481f2a 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -284,8 +284,10 @@ op_ldpc_encoder_flag_strtoul(char *token, uint32_t *op_flag_value)
 		*op_flag_value = RTE_BBDEV_LDPC_ENC_INTERRUPTS;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_ENC_SCATTER_GATHER"))
 		*op_flag_value = RTE_BBDEV_LDPC_ENC_SCATTER_GATHER;
+	else if (!strcmp(token, "RTE_BBDEV_LDPC_ENC_CONCATENATION"))
+		*op_flag_value = RTE_BBDEV_LDPC_ENC_CONCATENATION;
 	else {
-		printf("The given value is not a turbo encoder flag\n");
+		printf("The given value is not a LDPC encoder flag - %s\n", token);
 		return -1;
 	}
 
-- 
2.34.1



* [PATCH v2 08/10] test/bbdev: add MLD support
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (6 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 07/10] test/bbdev: ldpc encoder concatenation vector Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 09/10] test/bbdev: support new FFT capabilities Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 10/10] test/bbdev: support 4 bit LLR compression Nicolas Chautru
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add test-bbdev support for the MLD-TS processing specific to the VRB2
variant.

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev_perf.c   | 519 +++++++++++++++++++++++++++++
 app/test-bbdev/test_bbdev_vector.c | 132 ++++++++
 app/test-bbdev/test_bbdev_vector.h |   1 +
 3 files changed, 652 insertions(+)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 4f8e226e58..8a349fdb03 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -139,6 +139,7 @@ struct test_op_params {
 	struct rte_bbdev_dec_op *ref_dec_op;
 	struct rte_bbdev_enc_op *ref_enc_op;
 	struct rte_bbdev_fft_op *ref_fft_op;
+	struct rte_bbdev_mldts_op *ref_mldts_op;
 	uint16_t burst_sz;
 	uint16_t num_to_process;
 	uint16_t num_lcores;
@@ -165,6 +166,7 @@ struct thread_params {
 	struct rte_bbdev_dec_op *dec_ops[MAX_BURST];
 	struct rte_bbdev_enc_op *enc_ops[MAX_BURST];
 	struct rte_bbdev_fft_op *fft_ops[MAX_BURST];
+	struct rte_bbdev_mldts_op *mldts_ops[MAX_BURST];
 };
 
 /* Stores time statistics */
@@ -472,6 +474,18 @@ check_dev_cap(const struct rte_bbdev_info *dev_info)
 				return TEST_FAILED;
 			}
 			return TEST_SUCCESS;
+		} else if (op_cap->type == RTE_BBDEV_OP_MLDTS) {
+			const struct rte_bbdev_op_cap_mld *cap = &op_cap->cap.mld;
+			if (!flags_match(test_vector.mldts.op_flags, cap->capability_flags)) {
+				printf("Flag Mismatch\n");
+				return TEST_FAILED;
+			}
+			if (nb_inputs > cap->num_buffers_src) {
+				printf("Too many inputs defined: %u, max: %u\n",
+					nb_inputs, cap->num_buffers_src);
+				return TEST_FAILED;
+			}
+			return TEST_SUCCESS;
 		}
 	}
 
@@ -822,6 +836,9 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 			conf.arb_fft[i].gbr_threshold1 = VRB_QOS_GBR;
 			conf.arb_fft[i].gbr_threshold1 = VRB_QOS_GBR;
 			conf.arb_fft[i].round_robin_weight = VRB_QMGR_RR;
+			conf.arb_mld[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_mld[i].gbr_threshold1 = VRB_QOS_GBR;
+			conf.arb_mld[i].round_robin_weight = VRB_QMGR_RR;
 		}
 
 		conf.input_pos_llr_1_bit = true;
@@ -847,6 +864,10 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 		conf.q_fft.num_qgroups = VRB_QMGR_NUM_QGS;
 		conf.q_fft.first_qgroup_index = VRB_QMGR_INVALID_IDX;
 		conf.q_fft.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_mld.num_qgroups = VRB_QMGR_NUM_QGS;
+		conf.q_mld.first_qgroup_index = VRB_QMGR_INVALID_IDX;
+		conf.q_mld.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_mld.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
 
 		/* setup PF with configuration information */
 		ret = rte_acc_configure(info->dev_name, &conf);
@@ -1979,6 +2000,31 @@ copy_reference_fft_op(struct rte_bbdev_fft_op **ops, unsigned int n,
 	}
 }
 
+static void
+copy_reference_mldts_op(struct rte_bbdev_mldts_op **ops, unsigned int n,
+		unsigned int start_idx,
+		struct rte_bbdev_op_data *q_inputs,
+		struct rte_bbdev_op_data *r_inputs,
+		struct rte_bbdev_op_data *outputs,
+		struct rte_bbdev_mldts_op *ref_op)
+{
+	unsigned int i, j;
+	struct rte_bbdev_op_mldts *mldts = &ref_op->mldts;
+	for (i = 0; i < n; i++) {
+		ops[i]->mldts.c_rep = mldts->c_rep;
+		ops[i]->mldts.num_layers = mldts->num_layers;
+		ops[i]->mldts.num_rbs = mldts->num_rbs;
+		ops[i]->mldts.op_flags = mldts->op_flags;
+		for (j = 0; j < RTE_BBDEV_MAX_MLD_LAYERS; j++)
+			ops[i]->mldts.q_m[j] = mldts->q_m[j];
+		ops[i]->mldts.r_rep = mldts->r_rep;
+		ops[i]->mldts.c_rep = mldts->c_rep;
+		ops[i]->mldts.r_input = r_inputs[start_idx + i];
+		ops[i]->mldts.qhy_input = q_inputs[start_idx + i];
+		ops[i]->mldts.output = outputs[start_idx + i];
+	}
+}
+
 static int
 check_dec_status_and_ordering(struct rte_bbdev_dec_op *op,
 		unsigned int order_idx, const int expected_status)
@@ -2039,6 +2085,21 @@ check_fft_status_and_ordering(struct rte_bbdev_fft_op *op,
 	return TEST_SUCCESS;
 }
 
+static int
+check_mldts_status_and_ordering(struct rte_bbdev_mldts_op *op,
+		unsigned int order_idx, const int expected_status)
+{
+	TEST_ASSERT(op->status == expected_status,
+			"op_status (%d) != expected_status (%d)",
+			op->status, expected_status);
+
+	TEST_ASSERT((void *)(uintptr_t)order_idx == op->opaque_data,
+			"Ordering error, expected %p, got %p",
+			(void *)(uintptr_t)order_idx, op->opaque_data);
+
+	return TEST_SUCCESS;
+}
+
 static inline int
 validate_op_chain(struct rte_bbdev_op_data *op,
 		struct op_data_entries *orig_op)
@@ -2554,6 +2615,57 @@ validate_op_fft_chain(struct rte_bbdev_op_data *op, struct op_data_entries *orig
 	return TEST_SUCCESS;
 }
 
+static inline int
+validate_op_mldts_chain(struct rte_bbdev_op_data *op,
+		struct op_data_entries *orig_op)
+{
+	uint8_t i;
+	struct rte_mbuf *m = op->data;
+	uint8_t nb_dst_segments = orig_op->nb_segments;
+	/*the result is not bit exact*/
+	int16_t thres_hold = 3;
+	int16_t delt, abs_delt;
+	uint32_t j, data_len_iq;
+	uint32_t error_num;
+	int8_t *ref_out;
+	int8_t *op_out;
+
+	TEST_ASSERT(nb_dst_segments == m->nb_segs,
+			"Number of segments differ in original (%u) and filled (%u) op mldts",
+			nb_dst_segments, m->nb_segs);
+
+	/* Due to size limition of mbuf, MLDTS doesn't use real mbuf. */
+	for (i = 0; i < nb_dst_segments; ++i) {
+		uint16_t offset = (i == 0) ? op->offset : 0;
+		uint32_t data_len = op->length;
+
+		TEST_ASSERT(orig_op->segments[i].length == data_len,
+				"Length of segment differ in original (%u) and filled (%u) op mldts",
+				orig_op->segments[i].length, data_len);
+		data_len_iq = data_len;
+		ref_out = (int8_t *)(orig_op->segments[i].addr);
+		op_out = rte_pktmbuf_mtod_offset(m, int8_t *, offset),
+		error_num = 0;
+		for (j = 0; j < data_len_iq; j++) {
+
+			delt = ref_out[j] - op_out[j];
+			abs_delt = delt > 0 ? delt : -delt;
+			error_num += (abs_delt > thres_hold ? 1 : 0);
+			if (error_num > 0)
+				printf("MLD Error %d: Exp %x %d Actual %x %d Diff %d\n",
+						j, ref_out[j], ref_out[j], op_out[j], op_out[j],
+						delt);
+		}
+		TEST_ASSERT(error_num == 0,
+			"MLDTS Output are not matched total (%u) errors (%u)",
+			data_len_iq, error_num);
+
+		m = m->next;
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 validate_fft_op(struct rte_bbdev_fft_op **ops, const uint16_t n,
 		struct rte_bbdev_fft_op *ref_op)
@@ -2578,6 +2690,28 @@ validate_fft_op(struct rte_bbdev_fft_op **ops, const uint16_t n,
 	return TEST_SUCCESS;
 }
 
+static int
+validate_mldts_op(struct rte_bbdev_mldts_op **ops, const uint16_t n,
+		struct rte_bbdev_mldts_op *ref_op)
+{
+	unsigned int i;
+	int ret;
+	struct op_data_entries *mldts_data_orig =
+			&test_vector.entries[DATA_HARD_OUTPUT];
+	for (i = 0; i < n; ++i) {
+		ret = check_mldts_status_and_ordering(ops[i], i, ref_op->status);
+		TEST_ASSERT_SUCCESS(ret,
+				"Checking status and ordering for MLDTS failed");
+		TEST_ASSERT_SUCCESS(validate_op_mldts_chain(
+				&ops[i]->mldts.output,
+				mldts_data_orig),
+				"MLDTS Output buffers (op=%u) are not matched",
+				i);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static void
 create_reference_dec_op(struct rte_bbdev_dec_op *op)
 {
@@ -2622,6 +2756,20 @@ create_reference_fft_op(struct rte_bbdev_fft_op *op)
 		op->fft.base_input.length += entry->segments[i].length;
 }
 
+static void
+create_reference_mldts_op(struct rte_bbdev_mldts_op *op)
+{
+	unsigned int i;
+	struct op_data_entries *entry;
+	op->mldts = test_vector.mldts;
+	entry = &test_vector.entries[DATA_INPUT];
+	for (i = 0; i < entry->nb_segments; ++i)
+		op->mldts.qhy_input.length += entry->segments[i].length;
+	entry = &test_vector.entries[DATA_HARQ_INPUT];
+	for (i = 0; i < entry->nb_segments; ++i)
+		op->mldts.r_input.length += entry->segments[i].length;
+}
+
 static void
 create_reference_enc_op(struct rte_bbdev_enc_op *op)
 {
@@ -2730,6 +2878,14 @@ calc_fft_size(struct rte_bbdev_fft_op *op)
 	return output_size;
 }
 
+static uint32_t
+calc_mldts_size(struct rte_bbdev_mldts_op *op)
+{
+	uint32_t output_size;
+	output_size = op->mldts.num_layers * op->mldts.num_rbs * op->mldts.c_rep;
+	return output_size;
+}
+
 static int
 init_test_op_params(struct test_op_params *op_params,
 		enum rte_bbdev_op_type op_type, const int expected_status,
@@ -2744,6 +2900,9 @@ init_test_op_params(struct test_op_params *op_params,
 	else if (op_type == RTE_BBDEV_OP_FFT)
 		ret = rte_bbdev_fft_op_alloc_bulk(ops_mp,
 				&op_params->ref_fft_op, 1);
+	else if (op_type == RTE_BBDEV_OP_MLDTS)
+		ret = rte_bbdev_mldts_op_alloc_bulk(ops_mp,
+				&op_params->ref_mldts_op, 1);
 	else
 		ret = rte_bbdev_enc_op_alloc_bulk(ops_mp,
 				&op_params->ref_enc_op, 1);
@@ -2763,6 +2922,8 @@ init_test_op_params(struct test_op_params *op_params,
 		op_params->ref_enc_op->status = expected_status;
 	else if (op_type == RTE_BBDEV_OP_FFT)
 		op_params->ref_fft_op->status = expected_status;
+	else if (op_type == RTE_BBDEV_OP_MLDTS)
+		op_params->ref_mldts_op->status = expected_status;
 	return 0;
 }
 
@@ -2831,6 +2992,8 @@ run_test_case_on_device(test_case_function *test_case_func, uint8_t dev_id,
 		create_reference_ldpc_dec_op(op_params->ref_dec_op);
 	else if (test_vector.op_type == RTE_BBDEV_OP_FFT)
 		create_reference_fft_op(op_params->ref_fft_op);
+	else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS)
+		create_reference_mldts_op(op_params->ref_mldts_op);
 
 	for (i = 0; i < ad->nb_queues; ++i) {
 		f_ret = fill_queue_buffers(op_params,
@@ -3047,6 +3210,11 @@ dequeue_event_callback(uint16_t dev_id,
 				&tp->fft_ops[
 					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
 				burst_sz);
+	else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS)
+		deq = rte_bbdev_dequeue_mldts_ops(dev_id, queue_id,
+				&tp->mldts_ops[
+					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+				burst_sz);
 	else /*RTE_BBDEV_OP_TURBO_ENC*/
 		deq = rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
 				&tp->enc_ops[
@@ -3093,6 +3261,10 @@ dequeue_event_callback(uint16_t dev_id,
 		struct rte_bbdev_fft_op *ref_op = tp->op_params->ref_fft_op;
 		ret = validate_fft_op(tp->fft_ops, num_ops, ref_op);
 		rte_bbdev_fft_op_free_bulk(tp->fft_ops, deq);
+	} else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS) {
+		struct rte_bbdev_mldts_op *ref_op = tp->op_params->ref_mldts_op;
+		ret = validate_mldts_op(tp->mldts_ops, num_ops, ref_op);
+		rte_bbdev_mldts_op_free_bulk(tp->mldts_ops, deq);
 	} else if (test_vector.op_type == RTE_BBDEV_OP_LDPC_DEC) {
 		struct rte_bbdev_dec_op *ref_op = tp->op_params->ref_dec_op;
 		ret = validate_ldpc_dec_op(tp->dec_ops, num_ops, ref_op,
@@ -3118,6 +3290,9 @@ dequeue_event_callback(uint16_t dev_id,
 	case RTE_BBDEV_OP_FFT:
 		tb_len_bits = calc_fft_size(tp->op_params->ref_fft_op);
 		break;
+	case RTE_BBDEV_OP_MLDTS:
+		tb_len_bits = calc_mldts_size(tp->op_params->ref_mldts_op);
+		break;
 	case RTE_BBDEV_OP_LDPC_ENC:
 		tb_len_bits = calc_ldpc_enc_TB_size(tp->op_params->ref_enc_op);
 		break;
@@ -3593,6 +3768,88 @@ throughput_intr_lcore_fft(void *arg)
 	return TEST_SUCCESS;
 }
 
+static int
+throughput_intr_lcore_mldts(void *arg)
+{
+	struct thread_params *tp = arg;
+	unsigned int enqueued;
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_to_process = tp->op_params->num_to_process;
+	struct rte_bbdev_mldts_op *ops[num_to_process];
+	struct test_buffers *bufs = NULL;
+	struct rte_bbdev_info info;
+	int ret, i, j;
+	uint16_t num_to_enq, enq;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST), "BURST_SIZE should be <= %u", MAX_BURST);
+
+	TEST_ASSERT_SUCCESS(rte_bbdev_queue_intr_enable(tp->dev_id, queue_id),
+			"Failed to enable interrupts for dev: %u, queue_id: %u",
+			tp->dev_id, queue_id);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+
+	TEST_ASSERT_SUCCESS((num_to_process > info.drv.queue_size_lim),
+			"NUM_OPS cannot exceed %u for this device",
+			info.drv.queue_size_lim);
+
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+
+	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+
+	ret = rte_bbdev_mldts_op_alloc_bulk(tp->op_params->mp, ops, num_to_process);
+	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_to_process);
+	if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+		copy_reference_mldts_op(ops, num_to_process, 0, bufs->inputs, bufs->harq_inputs,
+				bufs->hard_outputs, tp->op_params->ref_mldts_op);
+
+	/* Set counter to validate the ordering */
+	for (j = 0; j < num_to_process; ++j)
+		ops[j]->opaque_data = (void *)(uintptr_t)j;
+
+	for (j = 0; j < TEST_REPETITIONS; ++j) {
+		for (i = 0; i < num_to_process; ++i)
+			mbuf_reset(ops[i]->mldts.output.data);
+
+		tp->start_time = rte_rdtsc_precise();
+		for (enqueued = 0; enqueued < num_to_process;) {
+			num_to_enq = burst_sz;
+
+			if (unlikely(num_to_process - enqueued < num_to_enq))
+				num_to_enq = num_to_process - enqueued;
+
+			enq = 0;
+			do {
+				enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
+						queue_id, &ops[enqueued], num_to_enq);
+			} while (unlikely(enq != num_to_enq));
+			enqueued += enq;
+
+			/* Write to thread burst_sz current number of enqueued
+			 * descriptors. It ensures that proper number of
+			 * descriptors will be dequeued in callback
+			 * function - needed for last batch in case where
+			 * the number of operations is not a multiple of
+			 * burst size.
+			 */
+			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+
+			/* Wait until processing of previous batch is
+			 * completed
+			 */
+			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+		}
+		if (j != TEST_REPETITIONS - 1)
+			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	}
+
+	return TEST_SUCCESS;
+}
+
 static int
 throughput_pmd_lcore_dec(void *arg)
 {
@@ -4403,6 +4660,104 @@ throughput_pmd_lcore_fft(void *arg)
 	return TEST_SUCCESS;
 }
 
+static int
+throughput_pmd_lcore_mldts(void *arg)
+{
+	struct thread_params *tp = arg;
+	uint16_t enq, deq;
+	uint64_t total_time = 0, start_time;
+	const uint16_t queue_id = tp->queue_id;
+	const uint16_t burst_sz = tp->op_params->burst_sz;
+	const uint16_t num_ops = tp->op_params->num_to_process;
+	struct rte_bbdev_mldts_op *ops_enq[num_ops];
+	struct rte_bbdev_mldts_op *ops_deq[num_ops];
+	struct rte_bbdev_mldts_op *ref_op = tp->op_params->ref_mldts_op;
+	struct test_buffers *bufs = NULL;
+	int i, j, ret;
+	struct rte_bbdev_info info;
+	uint16_t num_to_enq;
+
+	TEST_ASSERT_SUCCESS((burst_sz > MAX_BURST), "BURST_SIZE should be <= %u", MAX_BURST);
+
+	rte_bbdev_info_get(tp->dev_id, &info);
+
+	TEST_ASSERT_SUCCESS((num_ops > info.drv.queue_size_lim),
+			"NUM_OPS cannot exceed %u for this device",
+			info.drv.queue_size_lim);
+
+	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
+
+	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+
+	ret = rte_bbdev_mldts_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
+	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
+
+	if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+		copy_reference_mldts_op(ops_enq, num_ops, 0, bufs->inputs, bufs->harq_inputs,
+				bufs->hard_outputs, ref_op);
+
+	/* Set counter to validate the ordering */
+	for (j = 0; j < num_ops; ++j)
+		ops_enq[j]->opaque_data = (void *)(uintptr_t)j;
+
+	for (i = 0; i < TEST_REPETITIONS; ++i) {
+		uint32_t time_out = 0;
+		for (j = 0; j < num_ops; ++j)
+			mbuf_reset(ops_enq[j]->mldts.output.data);
+
+		start_time = rte_rdtsc_precise();
+
+		for (enq = 0, deq = 0; enq < num_ops;) {
+			num_to_enq = burst_sz;
+
+			if (unlikely(num_ops - enq < num_to_enq))
+				num_to_enq = num_ops - enq;
+
+			enq += rte_bbdev_enqueue_mldts_ops(tp->dev_id,
+					queue_id, &ops_enq[enq], num_to_enq);
+
+			deq += rte_bbdev_dequeue_mldts_ops(tp->dev_id,
+					queue_id, &ops_deq[deq], enq - deq);
+			time_out++;
+			if (time_out >= TIME_OUT_POLL) {
+				timeout_exit(tp->dev_id);
+				TEST_ASSERT_SUCCESS(TEST_FAILED, "Enqueue timeout!");
+			}
+		}
+
+		/* dequeue the remaining */
+		time_out = 0;
+		while (deq < enq) {
+			deq += rte_bbdev_dequeue_mldts_ops(tp->dev_id,
+					queue_id, &ops_deq[deq], enq - deq);
+			time_out++;
+			if (time_out >= TIME_OUT_POLL) {
+				timeout_exit(tp->dev_id);
+				TEST_ASSERT_SUCCESS(TEST_FAILED, "Dequeue timeout!");
+			}
+		}
+
+		total_time += rte_rdtsc_precise() - start_time;
+	}
+
+	if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+		ret = validate_mldts_op(ops_deq, num_ops, ref_op);
+		TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+	}
+
+	rte_bbdev_mldts_op_free_bulk(ops_enq, num_ops);
+
+	double tb_len_bits = calc_mldts_size(ref_op);
+
+	tp->ops_per_sec = ((double)num_ops * TEST_REPETITIONS) /
+			((double)total_time / (double)rte_get_tsc_hz());
+	tp->mbps = (((double)(num_ops * TEST_REPETITIONS * tb_len_bits)) /
+			1000000.0) / ((double)total_time /
+			(double)rte_get_tsc_hz());
+
+	return TEST_SUCCESS;
+}
+
 static void
 print_enc_throughput(struct thread_params *t_params, unsigned int used_cores)
 {
@@ -4624,6 +4979,8 @@ throughput_test(struct active_device *ad,
 			throughput_function = throughput_intr_lcore_ldpc_enc;
 		else if (test_vector.op_type == RTE_BBDEV_OP_FFT)
 			throughput_function = throughput_intr_lcore_fft;
+		else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS)
+			throughput_function = throughput_intr_lcore_mldts;
 		else
 			throughput_function = throughput_intr_lcore_enc;
 
@@ -4646,6 +5003,8 @@ throughput_test(struct active_device *ad,
 			throughput_function = throughput_pmd_lcore_ldpc_enc;
 		else if (test_vector.op_type == RTE_BBDEV_OP_FFT)
 			throughput_function = throughput_pmd_lcore_fft;
+		else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS)
+			throughput_function = throughput_pmd_lcore_mldts;
 		else
 			throughput_function = throughput_pmd_lcore_enc;
 	}
@@ -5139,6 +5498,77 @@ latency_test_fft(struct rte_mempool *mempool,
 	return i;
 }
 
+static int
+latency_test_mldts(struct rte_mempool *mempool,
+		struct test_buffers *bufs, struct rte_bbdev_mldts_op *ref_op,
+		uint16_t dev_id, uint16_t queue_id,
+		const uint16_t num_to_process, uint16_t burst_sz,
+		uint64_t *total_time, uint64_t *min_time, uint64_t *max_time)
+{
+	int ret = TEST_SUCCESS;
+	uint16_t i, j, dequeued;
+	struct rte_bbdev_mldts_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+	uint64_t start_time = 0, last_time = 0;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+		uint32_t time_out = 0;
+		bool first_time = true;
+		last_time = 0;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		ret = rte_bbdev_mldts_op_alloc_bulk(mempool, ops_enq, burst_sz);
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_mldts_op_alloc_bulk() failed");
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_mldts_op(ops_enq, burst_sz, dequeued,
+					bufs->inputs, bufs->harq_inputs,
+					bufs->hard_outputs,
+					ref_op);
+
+		/* Set counter to validate the ordering */
+		for (j = 0; j < burst_sz; ++j)
+			ops_enq[j]->opaque_data = (void *)(uintptr_t)j;
+
+		start_time = rte_rdtsc_precise();
+
+		enq = rte_bbdev_enqueue_mldts_ops(dev_id, queue_id, &ops_enq[enq], burst_sz);
+		TEST_ASSERT(enq == burst_sz,
+				"Error enqueueing burst, expected %u, got %u",
+				burst_sz, enq);
+
+		/* Dequeue */
+		do {
+			deq += rte_bbdev_dequeue_mldts_ops(dev_id, queue_id,
+					&ops_deq[deq], burst_sz - deq);
+			if (likely(first_time && (deq > 0))) {
+				last_time += rte_rdtsc_precise() - start_time;
+				first_time = false;
+			}
+			time_out++;
+			if (time_out >= TIME_OUT_POLL) {
+				timeout_exit(dev_id);
+				TEST_ASSERT_SUCCESS(TEST_FAILED, "Dequeue timeout!");
+			}
+		} while (unlikely(burst_sz != deq));
+
+		*max_time = RTE_MAX(*max_time, last_time);
+		*min_time = RTE_MIN(*min_time, last_time);
+		*total_time += last_time;
+
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE) {
+			ret = validate_mldts_op(ops_deq, burst_sz, ref_op);
+			TEST_ASSERT_SUCCESS(ret, "Validation failed!");
+		}
+
+		rte_bbdev_mldts_op_free_bulk(ops_enq, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
 /* Common function for running validation and latency test cases */
 static int
 validation_latency_test(struct active_device *ad,
@@ -5196,6 +5626,12 @@ validation_latency_test(struct active_device *ad,
 				ad->dev_id, queue_id,
 				num_to_process, burst_sz, &total_time,
 				&min_time, &max_time);
+	else if (op_type == RTE_BBDEV_OP_MLDTS)
+		iter = latency_test_mldts(op_params->mp, bufs,
+				op_params->ref_mldts_op,
+				ad->dev_id, queue_id,
+				num_to_process, burst_sz, &total_time,
+				&min_time, &max_time);
 	else /* RTE_BBDEV_OP_TURBO_ENC */
 		iter = latency_test_enc(op_params->mp, bufs,
 				op_params->ref_enc_op,
@@ -5337,6 +5773,85 @@ offload_latency_test_fft(struct rte_mempool *mempool, struct test_buffers *bufs,
 	return i;
 }
 
+static int
+offload_latency_test_mldts(struct rte_mempool *mempool, struct test_buffers *bufs,
+		struct rte_bbdev_mldts_op *ref_op, uint16_t dev_id,
+		uint16_t queue_id, const uint16_t num_to_process,
+		uint16_t burst_sz, struct test_time_stats *time_st)
+{
+	int i, dequeued, ret;
+	struct rte_bbdev_mldts_op *ops_enq[MAX_BURST], *ops_deq[MAX_BURST];
+	uint64_t enq_start_time, deq_start_time;
+	uint64_t enq_sw_last_time, deq_last_time;
+	struct rte_bbdev_stats stats;
+
+	for (i = 0, dequeued = 0; dequeued < num_to_process; ++i) {
+		uint16_t enq = 0, deq = 0;
+
+		if (unlikely(num_to_process - dequeued < burst_sz))
+			burst_sz = num_to_process - dequeued;
+
+		ret = rte_bbdev_mldts_op_alloc_bulk(mempool, ops_enq, burst_sz);
+		TEST_ASSERT_SUCCESS(ret, "rte_bbdev_mldts_op_alloc_bulk() failed");
+		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
+			copy_reference_mldts_op(ops_enq, burst_sz, dequeued,
+					bufs->inputs, bufs->harq_inputs,
+					bufs->hard_outputs,
+					ref_op);
+
+		/* Start time meas for enqueue function offload latency */
+		enq_start_time = rte_rdtsc_precise();
+		do {
+			enq += rte_bbdev_enqueue_mldts_ops(dev_id, queue_id,
+					&ops_enq[enq], burst_sz - enq);
+		} while (unlikely(burst_sz != enq));
+
+		ret = get_bbdev_queue_stats(dev_id, queue_id, &stats);
+		TEST_ASSERT_SUCCESS(ret,
+				"Failed to get stats for queue (%u) of device (%u)",
+				queue_id, dev_id);
+
+		enq_sw_last_time = rte_rdtsc_precise() - enq_start_time -
+				stats.acc_offload_cycles;
+		time_st->enq_sw_max_time = RTE_MAX(time_st->enq_sw_max_time,
+				enq_sw_last_time);
+		time_st->enq_sw_min_time = RTE_MIN(time_st->enq_sw_min_time,
+				enq_sw_last_time);
+		time_st->enq_sw_total_time += enq_sw_last_time;
+
+		time_st->enq_acc_max_time = RTE_MAX(time_st->enq_acc_max_time,
+				stats.acc_offload_cycles);
+		time_st->enq_acc_min_time = RTE_MIN(time_st->enq_acc_min_time,
+				stats.acc_offload_cycles);
+		time_st->enq_acc_total_time += stats.acc_offload_cycles;
+
+		/* give time for device to process ops */
+		rte_delay_us(WAIT_OFFLOAD_US);
+
+		/* Start time meas for dequeue function offload latency */
+		deq_start_time = rte_rdtsc_precise();
+		/* Dequeue one operation */
+		do {
+			deq += rte_bbdev_dequeue_mldts_ops(dev_id, queue_id, &ops_deq[deq], enq);
+		} while (unlikely(deq == 0));
+
+		deq_last_time = rte_rdtsc_precise() - deq_start_time;
+		time_st->deq_max_time = RTE_MAX(time_st->deq_max_time, deq_last_time);
+		time_st->deq_min_time = RTE_MIN(time_st->deq_min_time, deq_last_time);
+		time_st->deq_total_time += deq_last_time;
+
+		/* Dequeue remaining operations if needed*/
+		while (burst_sz != deq)
+			deq += rte_bbdev_dequeue_mldts_ops(dev_id, queue_id,
+					&ops_deq[deq], burst_sz - deq);
+
+		rte_bbdev_mldts_op_free_bulk(ops_enq, deq);
+		dequeued += deq;
+	}
+
+	return i;
+}
+
 static int
 offload_latency_test_dec(struct rte_mempool *mempool, struct test_buffers *bufs,
 		struct rte_bbdev_dec_op *ref_op, uint16_t dev_id,
@@ -5734,6 +6249,10 @@ offload_cost_test(struct active_device *ad,
 		iter = offload_latency_test_fft(op_params->mp, bufs,
 			op_params->ref_fft_op, ad->dev_id, queue_id,
 			num_to_process, burst_sz, &time_st);
+	else if (op_type == RTE_BBDEV_OP_MLDTS)
+		iter = offload_latency_test_mldts(op_params->mp, bufs,
+			op_params->ref_mldts_op, ad->dev_id, queue_id,
+			num_to_process, burst_sz, &time_st);
 	else
 		iter = offload_latency_test_enc(op_params->mp, bufs,
 				op_params->ref_enc_op, ad->dev_id, queue_id,
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 0ef1481f2a..8f464db838 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -244,6 +244,20 @@ op_fft_flag_strtoul(char *token, uint32_t *op_flag_value)
 	return 0;
 }
 
+/* convert MLD flag from string to unsigned long int*/
+static int
+op_mld_flag_strtoul(char *token, uint32_t *op_flag_value)
+{
+	if (!strcmp(token, "RTE_BBDEV_MLDTS_REP"))
+		*op_flag_value = RTE_BBDEV_MLDTS_REP;
+	else {
+		printf("The given value is not a MLD flag\n");
+		return -1;
+	}
+
+	return 0;
+}
+
 /* convert turbo encoder flag from string to unsigned long int*/
 static int
 op_encoder_flag_strtoul(char *token, uint32_t *op_flag_value)
@@ -326,6 +340,10 @@ parse_turbo_flags(char *tokens, uint32_t *op_flags,
 			if (op_fft_flag_strtoul(tok, &op_flag_value)
 					== -1)
 				return -1;
+		} else if (op_type == RTE_BBDEV_OP_MLDTS) {
+			if (op_mld_flag_strtoul(tok, &op_flag_value)
+					== -1)
+				return -1;
 		} else {
 			return -1;
 		}
@@ -355,6 +373,8 @@ op_turbo_type_strtol(char *token, enum rte_bbdev_op_type *op_type)
 		*op_type = RTE_BBDEV_OP_LDPC_DEC;
 	else if (!strcmp(token, "RTE_BBDEV_OP_FFT"))
 		*op_type = RTE_BBDEV_OP_FFT;
+	else if (!strcmp(token, "RTE_BBDEV_OP_MLDTS"))
+		*op_type = RTE_BBDEV_OP_MLDTS;
 	else if (!strcmp(token, "RTE_BBDEV_OP_NONE"))
 		*op_type = RTE_BBDEV_OP_NONE;
 	else {
@@ -992,6 +1012,73 @@ parse_fft_params(const char *key_token, char *token,
 	return 0;
 }
 
+/* parses MLD parameters and assigns to global variable */
+static int
+parse_mld_params(const char *key_token, char *token,
+		struct test_bbdev_vector *vector)
+{
+	int ret = 0, status = 0;
+	uint32_t op_flags = 0;
+	char *err = NULL;
+
+	struct rte_bbdev_op_mldts *mld = &vector->mldts;
+
+	if (starts_with(key_token, "qhy_input")) {
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_INPUT, "qhy_input");
+	} else if (starts_with(key_token, "r_input")) {
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_HARQ_INPUT, "r_input");
+	} else if (starts_with(key_token, "output")) {
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_HARD_OUTPUT, "output");
+	} else if (!strcmp(key_token, "layers")) {
+		mld->num_layers = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "layer1")) {
+		mld->q_m[0] = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "layer2")) {
+		mld->q_m[1] = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "layer3")) {
+		mld->q_m[2] = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "layer4")) {
+		mld->q_m[3] = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "crep")) {
+		mld->c_rep = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "rrep")) {
+		mld->r_rep = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "rbs")) {
+		mld->num_rbs = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "op_flags")) {
+		vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
+		ret = parse_turbo_flags(token, &op_flags, vector->op_type);
+		if (!ret)
+			mld->op_flags = op_flags;
+	} else if (!strcmp(key_token, "expected_status")) {
+		vector->mask |= TEST_BBDEV_VF_EXPECTED_STATUS;
+		ret = parse_expected_status(token, &status, vector->op_type);
+		if (!ret)
+			vector->expected_status = status;
+	} else {
+		printf("Not valid mld key: '%s'\n", key_token);
+		return -1;
+	}
+
+	if (ret != 0) {
+		printf("Failed with convert '%s\t%s'\n", key_token, token);
+		return -1;
+	}
+
+	return 0;
+}
+
 /* checks the type of key and assigns data */
 static int
 parse_entry(char *entry, struct test_bbdev_vector *vector)
@@ -1046,6 +1133,9 @@ parse_entry(char *entry, struct test_bbdev_vector *vector)
 	} else if (vector->op_type == RTE_BBDEV_OP_FFT) {
 		if (parse_fft_params(key_token, token, vector) == -1)
 			return -1;
+	} else if (vector->op_type == RTE_BBDEV_OP_MLDTS) {
+		if (parse_mld_params(key_token, token, vector) == -1)
+			return -1;
 	}
 
 	return 0;
@@ -1132,6 +1222,25 @@ check_fft_segments(struct test_bbdev_vector *vector)
 	return 0;
 }
 
+static int
+check_mld_segments(struct test_bbdev_vector *vector)
+{
+	unsigned char i;
+
+	for (i = 0; i < vector->entries[DATA_INPUT].nb_segments; i++)
+		if (vector->entries[DATA_INPUT].segments[i].addr == NULL)
+			return -1;
+
+	for (i = 0; i < vector->entries[DATA_HARQ_INPUT].nb_segments; i++)
+		if (vector->entries[DATA_HARQ_INPUT].segments[i].addr == NULL)
+			return -1;
+
+	for (i = 0; i < vector->entries[DATA_HARD_OUTPUT].nb_segments; i++)
+		if (vector->entries[DATA_HARD_OUTPUT].segments[i].addr == NULL)
+			return -1;
+	return 0;
+}
+
 static int
 check_decoder_llr_spec(struct test_bbdev_vector *vector)
 {
@@ -1359,6 +1468,26 @@ check_fft(struct test_bbdev_vector *vector)
 	return 0;
 }
 
+/* checks mld parameters */
+static int
+check_mld(struct test_bbdev_vector *vector)
+{
+	const int mask = vector->mask;
+
+	if (check_mld_segments(vector) < 0)
+		return -1;
+
+	/* Check which params were set */
+	if (!(mask & TEST_BBDEV_VF_OP_FLAGS)) {
+		printf(
+			"WARNING: op_flags was not specified in vector file and capabilities will not be validated\n");
+	}
+	if (!(mask & TEST_BBDEV_VF_EXPECTED_STATUS))
+		printf(
+			"WARNING: expected_status was not specified in vector file and will be set to 0\n");
+	return 0;
+}
+
 /* checks encoder parameters */
 static int
 check_encoder(struct test_bbdev_vector *vector)
@@ -1520,6 +1649,9 @@ bbdev_check_vector(struct test_bbdev_vector *vector)
 	} else if (vector->op_type == RTE_BBDEV_OP_FFT) {
 		if (check_fft(vector) == -1)
 			return -1;
+	} else if (vector->op_type == RTE_BBDEV_OP_MLDTS) {
+		if (check_mld(vector) == -1)
+			return -1;
 	} else if (vector->op_type != RTE_BBDEV_OP_NONE) {
 		printf("Vector was not filled\n");
 		return -1;
diff --git a/app/test-bbdev/test_bbdev_vector.h b/app/test-bbdev/test_bbdev_vector.h
index 2ea271ffb7..14b8ef2764 100644
--- a/app/test-bbdev/test_bbdev_vector.h
+++ b/app/test-bbdev/test_bbdev_vector.h
@@ -65,6 +65,7 @@ struct test_bbdev_vector {
 		struct rte_bbdev_op_ldpc_dec ldpc_dec;
 		struct rte_bbdev_op_ldpc_enc ldpc_enc;
 		struct rte_bbdev_op_fft fft;
+		struct rte_bbdev_op_mldts mldts;
 	};
 	/* Additional storage for op data entries */
 	struct op_data_entries entries[DATA_NUM_TYPES];
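
For reference, a minimal sketch of an MLDTS vector-file fragment using the
keys handled by the parser above might look as follows. The key names follow
this patch (the main q/y input entry, whose key prefix is defined earlier in
the parser, is omitted), and all values and segment data are purely
hypothetical rather than taken from a real vector:

  op_type =
  RTE_BBDEV_OP_MLDTS

  r_input0 =
  0x01020304, 0x05060708

  output0 =
  0x0A0B0C0D

  layers =
  2

  layer1 =
  4

  layer2 =
  4

  crep =
  0

  rrep =
  0

  rbs =
  8

  expected_status =
  OK
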
-- 
2.34.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 09/10] test/bbdev: support new FFT capabilities
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (7 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 08/10] test/bbdev: add MLD support Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  2023-10-27 22:57 ` [PATCH v2 10/10] test/bbdev: support 4 bit LLR compression Nicolas Chautru
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add support to test new FFT capabilities: optional frequency-domain
dewindowing, frequency resampling, timing error correction and time
offset per CS.
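
For illustration only, a hypothetical vector-file fragment exercising the new
keys could look as sketched below. The key names and flag tokens match the
parser changes in this patch; the values are made up, the dewindowing samples
are placeholders, and cs_theta_0, cs_theta_d and time_offset must each carry
FFT_WIN_SIZE comma-separated entries (only four are shown here):

  op_flags =
  RTE_BBDEV_FFT_DEWINDOWING, RTE_BBDEV_FFT_FREQ_RESAMPLING,
  RTE_BBDEV_FFT_TIMING_ERROR, RTE_BBDEV_FFT_TIMING_OFFSET_PER_CS

  dewin_input0 =
  0x00010002, 0x00030004

  freq_resample_mode =
  1

  out_depadded_size =
  1536

  cs_theta_0 =
  0, 0, 0, 0

  cs_theta_d =
  0, 0, 0, 0

  time_offset =
  0, 0, 0, 0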

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev_perf.c   | 26 ++++++++++---
 app/test-bbdev/test_bbdev_vector.c | 61 ++++++++++++++++++++++++++++--
 2 files changed, 78 insertions(+), 9 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 8a349fdb03..82deb9b1b4 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -864,6 +864,7 @@ add_bbdev_dev(uint8_t dev_id, struct rte_bbdev_info *info,
 		conf.q_fft.num_qgroups = VRB_QMGR_NUM_QGS;
 		conf.q_fft.first_qgroup_index = VRB_QMGR_INVALID_IDX;
 		conf.q_fft.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
+		conf.q_fft.aq_depth_log2 = VRB_QMGR_AQ_DEPTH;
 		conf.q_mld.num_qgroups = VRB_QMGR_NUM_QGS;
 		conf.q_mld.first_qgroup_index = VRB_QMGR_INVALID_IDX;
 		conf.q_mld.num_aqs_per_groups = VRB_QMGR_NUM_AQS;
@@ -1970,7 +1971,7 @@ static void
 copy_reference_fft_op(struct rte_bbdev_fft_op **ops, unsigned int n,
 		unsigned int start_idx, struct rte_bbdev_op_data *inputs,
 		struct rte_bbdev_op_data *outputs, struct rte_bbdev_op_data *pwrouts,
-		struct rte_bbdev_fft_op *ref_op)
+		struct rte_bbdev_op_data *win_inputs, struct rte_bbdev_fft_op *ref_op)
 {
 	unsigned int i, j;
 	struct rte_bbdev_op_fft *fft = &ref_op->fft;
@@ -1982,6 +1983,11 @@ copy_reference_fft_op(struct rte_bbdev_fft_op **ops, unsigned int n,
 				fft->output_leading_depadding;
 		for (j = 0; j < RTE_BBDEV_MAX_CS_2; j++)
 			ops[i]->fft.window_index[j] = fft->window_index[j];
+		for (j = 0; j < RTE_BBDEV_MAX_CS; j++) {
+			ops[i]->fft.cs_theta_0[j] = fft->cs_theta_0[j];
+			ops[i]->fft.cs_theta_d[j] = fft->cs_theta_d[j];
+			ops[i]->fft.time_offset[j] = fft->time_offset[j];
+		}
 		ops[i]->fft.cs_bitmap = fft->cs_bitmap;
 		ops[i]->fft.num_antennas_log2 = fft->num_antennas_log2;
 		ops[i]->fft.idft_log2 = fft->idft_log2;
@@ -1992,8 +1998,12 @@ copy_reference_fft_op(struct rte_bbdev_fft_op **ops, unsigned int n,
 		ops[i]->fft.ncs_reciprocal = fft->ncs_reciprocal;
 		ops[i]->fft.power_shift = fft->power_shift;
 		ops[i]->fft.fp16_exp_adjust = fft->fp16_exp_adjust;
+		ops[i]->fft.output_depadded_size = fft->output_depadded_size;
+		ops[i]->fft.freq_resample_mode = fft->freq_resample_mode;
 		ops[i]->fft.base_output = outputs[start_idx + i];
 		ops[i]->fft.base_input = inputs[start_idx + i];
+		if (win_inputs != NULL)
+			ops[i]->fft.dewindowing_input = win_inputs[start_idx + i];
 		if (pwrouts != NULL)
 			ops[i]->fft.power_meas_output = pwrouts[start_idx + i];
 		ops[i]->fft.op_flags = fft->op_flags;
@@ -2575,7 +2585,7 @@ validate_op_fft_chain(struct rte_bbdev_op_data *op, struct op_data_entries *orig
 {
 	struct rte_mbuf *m = op->data;
 	uint8_t i, nb_dst_segments = orig_op->nb_segments;
-	int16_t delt, abs_delt, thres_hold = 3;
+	int16_t delt, abs_delt, thres_hold = 4;
 	uint32_t j, data_len_iq, error_num;
 	int16_t *ref_out, *op_out;
 
@@ -2754,6 +2764,9 @@ create_reference_fft_op(struct rte_bbdev_fft_op *op)
 	entry = &test_vector.entries[DATA_INPUT];
 	for (i = 0; i < entry->nb_segments; ++i)
 		op->fft.base_input.length += entry->segments[i].length;
+	entry = &test_vector.entries[DATA_HARQ_INPUT];
+	for (i = 0; i < entry->nb_segments; ++i)
+		op->fft.dewindowing_input.length += entry->segments[i].length;
 }
 
 static void
@@ -3722,7 +3735,8 @@ throughput_intr_lcore_fft(void *arg)
 			num_to_process);
 	if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 		copy_reference_fft_op(ops, num_to_process, 0, bufs->inputs,
-				bufs->hard_outputs, bufs->soft_outputs, tp->op_params->ref_fft_op);
+				bufs->hard_outputs, bufs->soft_outputs, bufs->harq_inputs,
+				tp->op_params->ref_fft_op);
 
 	/* Set counter to validate the ordering */
 	for (j = 0; j < num_to_process; ++j)
@@ -4596,7 +4610,7 @@ throughput_pmd_lcore_fft(void *arg)
 
 	if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 		copy_reference_fft_op(ops_enq, num_ops, 0, bufs->inputs,
-				bufs->hard_outputs, bufs->soft_outputs, ref_op);
+				bufs->hard_outputs, bufs->soft_outputs, bufs->harq_inputs, ref_op);
 
 	/* Set counter to validate the ordering */
 	for (j = 0; j < num_ops; ++j)
@@ -5452,7 +5466,7 @@ latency_test_fft(struct rte_mempool *mempool,
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_fft_op(ops_enq, burst_sz, dequeued,
 					bufs->inputs,
-					bufs->hard_outputs, bufs->soft_outputs,
+					bufs->hard_outputs, bufs->soft_outputs, bufs->harq_inputs,
 					ref_op);
 
 		/* Set counter to validate the ordering */
@@ -5714,7 +5728,7 @@ offload_latency_test_fft(struct rte_mempool *mempool, struct test_buffers *bufs,
 		if (test_vector.op_type != RTE_BBDEV_OP_NONE)
 			copy_reference_fft_op(ops_enq, burst_sz, dequeued,
 					bufs->inputs,
-					bufs->hard_outputs, bufs->soft_outputs,
+					bufs->hard_outputs, bufs->soft_outputs, bufs->harq_inputs,
 					ref_op);
 
 		/* Start time meas for enqueue function offload latency */
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 8f464db838..56b882533c 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -215,7 +215,6 @@ op_ldpc_decoder_flag_strtoul(char *token, uint32_t *op_flag_value)
 	return 0;
 }
 
-
 /* Convert FFT flag from string to unsigned long int. */
 static int
 op_fft_flag_strtoul(char *token, uint32_t *op_flag_value)
@@ -236,6 +235,14 @@ op_fft_flag_strtoul(char *token, uint32_t *op_flag_value)
 		*op_flag_value = RTE_BBDEV_FFT_FP16_INPUT;
 	else if (!strcmp(token, "RTE_BBDEV_FFT_FP16_OUTPUT"))
 		*op_flag_value = RTE_BBDEV_FFT_FP16_OUTPUT;
+	else if (!strcmp(token, "RTE_BBDEV_FFT_TIMING_OFFSET_PER_CS"))
+		*op_flag_value = RTE_BBDEV_FFT_TIMING_OFFSET_PER_CS;
+	else if (!strcmp(token, "RTE_BBDEV_FFT_TIMING_ERROR"))
+		*op_flag_value = RTE_BBDEV_FFT_TIMING_ERROR;
+	else if (!strcmp(token, "RTE_BBDEV_FFT_DEWINDOWING"))
+		*op_flag_value = RTE_BBDEV_FFT_DEWINDOWING;
+	else if (!strcmp(token, "RTE_BBDEV_FFT_FREQ_RESAMPLING"))
+		*op_flag_value = RTE_BBDEV_FFT_FREQ_RESAMPLING;
 	else {
 		printf("The given value is not a FFT flag\n");
 		return -1;
@@ -907,8 +914,7 @@ parse_ldpc_decoder_params(const char *key_token, char *token,
 	return 0;
 }
 
-
-/* Parse FFT parameters and assigns to global variable. */
+/* Parses FFT parameters and assigns to global variable. */
 static int
 parse_fft_params(const char *key_token, char *token,
 		struct test_bbdev_vector *vector)
@@ -923,6 +929,10 @@ parse_fft_params(const char *key_token, char *token,
 		ret = parse_data_entry(key_token, token, vector,
 				DATA_INPUT,
 				op_data_prefixes[DATA_INPUT]);
+	} else if (starts_with(key_token, "dewin_input")) {
+		ret = parse_data_entry(key_token, token, vector,
+				DATA_HARQ_INPUT,
+				"dewin_input");
 	} else if (starts_with(key_token, "output")) {
 		ret = parse_data_entry(key_token, token, vector,
 				DATA_HARD_OUTPUT,
@@ -989,6 +999,51 @@ parse_fft_params(const char *key_token, char *token,
 		fft->fp16_exp_adjust = (uint32_t) strtoul(token, &err, 0);
 		printf("%d\n", fft->fp16_exp_adjust);
 		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "freq_resample_mode")) {
+		fft->freq_resample_mode = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "out_depadded_size")) {
+		fft->output_depadded_size = (uint32_t) strtoul(token, &err, 0);
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "cs_theta_0")) {
+		tok = strtok(token, VALUE_DELIMITER);
+		if (tok == NULL)
+			return -1;
+		for (i = 0; i < FFT_WIN_SIZE; i++) {
+			fft->cs_theta_0[i] = (uint32_t) strtoul(tok, &err, 0);
+			if (i < (FFT_WIN_SIZE - 1)) {
+				tok = strtok(NULL, VALUE_DELIMITER);
+				if (tok == NULL)
+					return -1;
+			}
+		}
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "cs_theta_d")) {
+		tok = strtok(token, VALUE_DELIMITER);
+		if (tok == NULL)
+			return -1;
+		for (i = 0; i < FFT_WIN_SIZE; i++) {
+			fft->cs_theta_d[i] = (uint32_t) strtoul(tok, &err, 0);
+			if (i < (FFT_WIN_SIZE - 1)) {
+				tok = strtok(NULL, VALUE_DELIMITER);
+				if (tok == NULL)
+					return -1;
+			}
+		}
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
+	} else if (!strcmp(key_token, "time_offset")) {
+		tok = strtok(token, VALUE_DELIMITER);
+		if (tok == NULL)
+			return -1;
+		for (i = 0; i < FFT_WIN_SIZE; i++) {
+			fft->time_offset[i] = (uint32_t) strtoul(tok, &err, 0);
+			if (i < (FFT_WIN_SIZE - 1)) {
+				tok = strtok(NULL, VALUE_DELIMITER);
+				if (tok == NULL)
+					return -1;
+			}
+		}
+		ret = ((err == NULL) || (*err != '\0')) ? -1 : 0;
 	} else if (!strcmp(key_token, "op_flags")) {
 		vector->mask |= TEST_BBDEV_VF_OP_FLAGS;
 		ret = parse_turbo_flags(token, &op_flags, vector->op_type);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 10/10] test/bbdev: support 4 bit LLR compression
  2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
                   ` (8 preceding siblings ...)
  2023-10-27 22:57 ` [PATCH v2 09/10] test/bbdev: support new FFT capabilities Nicolas Chautru
@ 2023-10-27 22:57 ` Nicolas Chautru
  9 siblings, 0 replies; 11+ messages in thread
From: Nicolas Chautru @ 2023-10-27 22:57 UTC (permalink / raw)
  To: dev, maxime.coquelin
  Cc: hemant.agrawal, david.marchand, hernan.vargas, stable

From: Hernan Vargas <hernan.vargas@intel.com>

Add support to test the LDPC UL operation with the new capability to
compress HARQ memory to 4 bits per LLR.
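
As a hypothetical usage sketch (not a real vector), an LDPC decode
vector-file could request the new mode by adding the flag to its op_flags
entry; the surrounding HARQ combine flags below are only illustrative:

  op_flags =
  RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE, RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE,
  RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION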

Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 app/test-bbdev/test_bbdev_perf.c   | 3 ++-
 app/test-bbdev/test_bbdev_vector.c | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index 82deb9b1b4..149c7a1f50 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -2240,7 +2240,8 @@ validate_op_harq_chain(struct rte_bbdev_op_data *op,
 
 		/* Cannot compare HARQ output data for such cases */
 		if ((ldpc_llr_decimals > 1) && ((ops_ld->op_flags & RTE_BBDEV_LDPC_LLR_COMPRESSION)
-				|| (ops_ld->op_flags & RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION)))
+				|| (ops_ld->op_flags & RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION)
+				|| (ops_ld->op_flags & RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION)))
 			break;
 
 		if (!(ldpc_cap_flags &
diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 56b882533c..42fa630041 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -196,6 +196,8 @@ op_ldpc_decoder_flag_strtoul(char *token, uint32_t *op_flag_value)
 		*op_flag_value = RTE_BBDEV_LDPC_DEC_SCATTER_GATHER;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION"))
 		*op_flag_value = RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION;
+	else if (!strcmp(token, "RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION"))
+		*op_flag_value = RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_LLR_COMPRESSION"))
 		*op_flag_value = RTE_BBDEV_LDPC_LLR_COMPRESSION;
 	else if (!strcmp(token,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2023-10-27 23:05 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-27 22:57 [PATCH v2 00/10] test-bbdev changes for 23.11 Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 01/10] test/bbdev: fix python script subprocess Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 02/10] test/bbdev: update python script parameters Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 03/10] test/bbdev: rename macros from acc200 to vrb Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 04/10] test/bbdev: handle exception for LLR generation Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 05/10] test/bbdev: improve test log messages Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 06/10] test/bbdev: assert failed test for queue configure Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 07/10] test/bbdev: ldpc encoder concatenation vector Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 08/10] test/bbdev: add MLD support Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 09/10] test/bbdev: support new FFT capabilities Nicolas Chautru
2023-10-27 22:57 ` [PATCH v2 10/10] test/bbdev: support 4 bit LLR compression Nicolas Chautru
