From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 86FADA0545;
	Tue, 20 Dec 2022 20:30:37 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 0459B42DBA;
	Tue, 20 Dec 2022 20:27:28 +0100 (CET)
Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com
 [67.231.148.174])
 by mails.dpdk.org (Postfix) with ESMTP id AE82842D3A
 for <dev@dpdk.org>; Tue, 20 Dec 2022 20:27:03 +0100 (CET)
Received: from pps.filterd (m0045849.ppops.net [127.0.0.1])
 by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id
 2BKHOmm9018275 for <dev@dpdk.org>; Tue, 20 Dec 2022 11:27:03 -0800
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;
 h=from : to : cc :
 subject : date : message-id : in-reply-to : references : mime-version :
 content-type; s=pfpt0220; bh=bUiqW5kEd7/njPJvMyBUhoNr4THei2hoFH7UXwml43s=;
 b=U7poz/fR99VFLcxnkJqpZi9SSnrI0DaTG2+76a5G/UFaFoTaATD/TJZwKjrjOgPjhc6f
 j5ku/nc19CoaloZRQgrMZQ9zC27F1BxSB52fDzl0lTvdot0XKXjyoIPyGFzI4+NUT3vB
 B3MnskrK6islOVJB3fK9Mb8txIiWu/YqUfX4jZWZvopwLH8NVO4MV1jxcmCQA4xVP9aY
 WZe1sogt7tV3y0FRvY6wfocQXrfJ534vkCSKMCoJrR8guHkmzKKIwviSvdzWkG1yl7ZC
 wxWH8CYMwljbDTEsDDXo6yBM90bPPE7l2nXU7vXpGa2pLgAvT+flm2cBEznfUz40M1vO RQ== 
Received: from dc5-exch01.marvell.com ([199.233.59.181])
 by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3mkapj2tqc-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT)
 for <dev@dpdk.org>; Tue, 20 Dec 2022 11:27:02 -0800
Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com
 (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.42;
 Tue, 20 Dec 2022 11:27:01 -0800
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com
 (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.42 via Frontend
 Transport; Tue, 20 Dec 2022 11:27:01 -0800
Received: from ml-host-33.caveonetworks.com (unknown [10.110.143.233])
 by maili.marvell.com (Postfix) with ESMTP id 908EB3F7073;
 Tue, 20 Dec 2022 11:27:01 -0800 (PST)
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
To: Srikanth Yalavarthi <syalavarthi@marvell.com>
CC: <dev@dpdk.org>, <sshankarnara@marvell.com>, <jerinj@marvell.com>,
 <aprabhu@marvell.com>
Subject: [PATCH v3 36/38] ml/cnxk: add support to use lock during jcmd enq
Date: Tue, 20 Dec 2022 11:26:43 -0800
Message-ID: <20221220192645.14042-37-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221220192645.14042-1-syalavarthi@marvell.com>
References: <20221208201806.21893-1-syalavarthi@marvell.com>
 <20221220192645.14042-1-syalavarthi@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Proofpoint-ORIG-GUID: -A0uOLs1Z68eZ-i2P8R0Ievtp7UQH2AQ
X-Proofpoint-GUID: -A0uOLs1Z68eZ-i2P8R0Ievtp7UQH2AQ
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.205,Aquarius:18.0.923,Hydra:6.0.545,FMLib:17.11.122.1
 definitions=2022-12-20_06,2022-12-20_01,2022-06-22_01
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Added device argument "hw_queue_lock" to select the JCMDQ enqueue
ROC function used in the fast path.

hw_queue_lock:

0: Disabled. Use the lock-free version of the JCMDQ enqueue ROC
	function for job queuing. To avoid race conditions when
	queuing requests to hardware, disabling hw_queue_lock
	restricts the number of queue-pairs supported by the cnxk
	driver to 1.

1: Enabled (default). Use the spinlock version of the JCMDQ enqueue
	ROC function for job queuing. Enabling the spinlock version
	removes the restriction on the number of queue-pairs that
	can be created.

Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 drivers/ml/cnxk/cn10k_ml_dev.c | 31 ++++++++++++++++++++++++++++++-
 drivers/ml/cnxk/cn10k_ml_dev.h | 13 +++++++++++--
 drivers/ml/cnxk/cn10k_ml_ops.c | 20 +++++++++++++++++---
 3 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index 5c02d67c8e..aa503b2691 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -22,12 +22,14 @@
 #define CN10K_ML_FW_REPORT_DPE_WARNINGS "report_dpe_warnings"
 #define CN10K_ML_DEV_CACHE_MODEL_DATA	"cache_model_data"
 #define CN10K_ML_OCM_ALLOC_MODE		"ocm_alloc_mode"
+#define CN10K_ML_DEV_HW_QUEUE_LOCK	"hw_queue_lock"
 
 #define CN10K_ML_FW_PATH_DEFAULT		"/lib/firmware/mlip-fw.bin"
 #define CN10K_ML_FW_ENABLE_DPE_WARNINGS_DEFAULT 1
 #define CN10K_ML_FW_REPORT_DPE_WARNINGS_DEFAULT 0
 #define CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT	1
 #define CN10K_ML_OCM_ALLOC_MODE_DEFAULT		"lowest"
+#define CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT	1
 
 /* ML firmware macros */
 #define FW_MEMZONE_NAME		 "ml_cn10k_fw_mz"
@@ -46,6 +48,7 @@ static const char *const valid_args[] = {CN10K_ML_FW_PATH,
 					 CN10K_ML_FW_REPORT_DPE_WARNINGS,
 					 CN10K_ML_DEV_CACHE_MODEL_DATA,
 					 CN10K_ML_OCM_ALLOC_MODE,
+					 CN10K_ML_DEV_HW_QUEUE_LOCK,
 					 NULL};
 
 /* Dummy operations for ML device */
@@ -87,6 +90,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	bool cache_model_data_set = false;
 	struct rte_kvargs *kvlist = NULL;
 	bool ocm_alloc_mode_set = false;
+	bool hw_queue_lock_set = false;
 	char *ocm_alloc_mode = NULL;
 	bool fw_path_set = false;
 	char *fw_path = NULL;
@@ -158,6 +162,18 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 		ocm_alloc_mode_set = true;
 	}
 
+	if (rte_kvargs_count(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK) == 1) {
+		ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK, &parse_integer_arg,
+					 &mldev->hw_queue_lock);
+		if (ret < 0) {
+			plt_err("Error processing arguments, key = %s\n",
+				CN10K_ML_DEV_HW_QUEUE_LOCK);
+			ret = -EINVAL;
+			goto exit;
+		}
+		hw_queue_lock_set = true;
+	}
+
 check_args:
 	if (!fw_path_set)
 		mldev->fw.path = CN10K_ML_FW_PATH_DEFAULT;
@@ -215,6 +231,18 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	}
 	plt_info("ML: %s = %s", CN10K_ML_OCM_ALLOC_MODE, mldev->ocm.alloc_mode);
 
+	if (!hw_queue_lock_set) {
+		mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
+	} else {
+		if ((mldev->hw_queue_lock < 0) || (mldev->hw_queue_lock > 1)) {
+			plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_HW_QUEUE_LOCK,
+				mldev->hw_queue_lock);
+			ret = -EINVAL;
+			goto exit;
+		}
+	}
+	plt_info("ML: %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK, mldev->hw_queue_lock);
+
 exit:
 	if (kvlist)
 		rte_kvargs_free(kvlist);
@@ -756,4 +784,5 @@ RTE_PMD_REGISTER_PARAM_STRING(MLDEV_NAME_CN10K_PMD, CN10K_ML_FW_PATH
 			      "=<path>" CN10K_ML_FW_ENABLE_DPE_WARNINGS
 			      "=<0|1>" CN10K_ML_FW_REPORT_DPE_WARNINGS
 			      "=<0|1>" CN10K_ML_DEV_CACHE_MODEL_DATA
-			      "=<0|1>" CN10K_ML_OCM_ALLOC_MODE "=<lowest|largest>");
+			      "=<0|1>" CN10K_ML_OCM_ALLOC_MODE
+			      "=<lowest|largest>" CN10K_ML_DEV_HW_QUEUE_LOCK "=<0|1>");
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h
index 718edadde7..49676ac9e7 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.h
+++ b/drivers/ml/cnxk/cn10k_ml_dev.h
@@ -21,8 +21,11 @@
 /* Maximum number of models per device */
 #define ML_CN10K_MAX_MODELS 16
 
-/* Maximum number of queue-pairs per device */
-#define ML_CN10K_MAX_QP_PER_DEVICE 1
+/* Maximum number of queue-pairs per device, spinlock version */
+#define ML_CN10K_MAX_QP_PER_DEVICE_SL 16
+
+/* Maximum number of queue-pairs per device, lock-free version */
+#define ML_CN10K_MAX_QP_PER_DEVICE_LF 1
 
 /* Maximum number of descriptors per queue-pair */
 #define ML_CN10K_MAX_DESC_PER_QP 1024
@@ -384,6 +387,12 @@ struct cn10k_ml_dev {
 
 	/* Enable / disable model data caching */
 	int cache_model_data;
+
+	/* Use spinlock version of ROC enqueue */
+	int hw_queue_lock;
+
+	/* JCMD enqueue function handler */
+	bool (*ml_jcmdq_enqueue)(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd);
 };
 
 uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw);
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 1e6d366c59..c82f3de5c8 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -534,13 +534,21 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, int16_t model_id)
 static int
 cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info)
 {
+	struct cn10k_ml_dev *mldev;
+
 	if (dev_info == NULL)
 		return -EINVAL;
 
+	mldev = dev->data->dev_private;
+
 	memset(dev_info, 0, sizeof(struct rte_ml_dev_info));
 	dev_info->driver_name = dev->device->driver->name;
 	dev_info->max_models = ML_CN10K_MAX_MODELS;
-	dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE;
+	if (mldev->hw_queue_lock)
+		dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_SL;
+	else
+		dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_LF;
+
 	dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP;
 	dev_info->max_segments = ML_CN10K_MAX_SEGMENTS;
 	dev_info->min_align_size = ML_CN10K_ALIGN_SIZE;
@@ -703,6 +711,12 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c
 	else
 		mldev->xstats_enabled = false;
 
+	/* Set JCMDQ enqueue function */
+	if (mldev->hw_queue_lock == 1)
+		mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_sl;
+	else
+		mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf;
+
 	dev->enqueue_burst = cn10k_ml_enqueue_burst;
 	dev->dequeue_burst = cn10k_ml_dequeue_burst;
 	dev->op_error_get = cn10k_ml_op_error_get;
@@ -1992,7 +2006,7 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op
 	req->result.user_ptr = op->user_ptr;
 
 	plt_write64(ML_CN10K_POLL_JOB_START, &req->status);
-	enqueued = roc_ml_jcmdq_enqueue_lf(&mldev->roc, &req->jcmd);
+	enqueued = mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd);
 	if (unlikely(!enqueued))
 		goto jcmdq_full;
 
@@ -2113,7 +2127,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op)
 	timeout = true;
 	req->timeout = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz();
 	do {
-		if (roc_ml_jcmdq_enqueue_lf(&mldev->roc, &req->jcmd)) {
+		if (mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd)) {
 			req->op = op;
 			timeout = false;
 			break;
-- 
2.17.1