From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 5F169A0C43;
	Wed, 20 Oct 2021 18:31:35 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id F1F93411FE;
	Wed, 20 Oct 2021 18:30:45 +0200 (CEST)
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
 by mails.dpdk.org (Postfix) with ESMTP id 66A0441173
 for <dev@dpdk.org>; Wed, 20 Oct 2021 18:30:35 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10143"; a="226286489"
X-IronPort-AV: E=Sophos;i="5.87,167,1631602800"; d="scan'208";a="226286489"
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 20 Oct 2021 09:30:35 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.87,167,1631602800"; d="scan'208";a="494708226"
Received: from silpixa00401122.ir.intel.com ([10.55.128.10])
 by orsmga008.jf.intel.com with ESMTP; 20 Oct 2021 09:30:32 -0700
From: Kevin Laatz <kevin.laatz@intel.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, bruce.richardson@intel.com, fengchengwen@huawei.com,
 jerinj@marvell.com, conor.walsh@intel.com,
 Kevin Laatz <kevin.laatz@intel.com>
Date: Wed, 20 Oct 2021 16:30:05 +0000
Message-Id: <20211020163013.2125016-9-kevin.laatz@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211020163013.2125016-1-kevin.laatz@intel.com>
References: <20210827172048.558704-1-kevin.laatz@intel.com>
 <20211020163013.2125016-1-kevin.laatz@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v11 08/16] dma/idxd: add start and stop functions
 for pci devices
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Add device start/stop functions for DSA devices bound to vfio. For devices
bound to the IDXD kernel driver, these are not required since the kernel
driver enables and disables the work queues itself.
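
For context, an application reaches these new callbacks through the generic
dmadev API. A minimal sketch of the configure-then-start sequence (device id,
descriptor count and error handling are illustrative assumptions, not part of
this patch):

```c
#include <rte_dmadev.h>

/* Sketch: bring up one work queue on a dmadev, assuming the device
 * exists and is bound to vfio. rte_dma_start() ends up invoking the
 * driver's dev_start callback (idxd_pci_dev_start() for this PMD). */
static int
setup_and_start(int16_t dev_id)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	struct rte_dma_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM, /* only mode idxd supports */
		.nb_desc = 1024,                      /* example ring size */
	};

	if (rte_dma_configure(dev_id, &dev_conf) < 0)
		return -1;
	if (rte_dma_vchan_setup(dev_id, 0, &vchan_conf) < 0)
		return -1;
	return rte_dma_start(dev_id);
}
```

The matching `rte_dma_stop(dev_id)` call reaches the dev_stop callback, which
for vfio-bound devices disables the work queue again.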

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 doc/guides/dmadevs/idxd.rst |  3 +++
 drivers/dma/idxd/idxd_pci.c | 51 +++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index 62ffd39ee0..711890bd9e 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -135,3 +135,6 @@ IDXD configuration requirements:
 * Only one ``vchan`` is supported per device (work queue).
 * IDXD devices do not support silent mode.
 * The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM`` to copy from memory to memory.
+
+Once configured, the device can then be made ready for use by calling the
+``rte_dma_start()`` API.
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index c9e193a11d..58760d2e74 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -60,6 +60,55 @@ idxd_is_wq_enabled(struct idxd_dmadev *idxd)
 	return ((state >> WQ_STATE_SHIFT) & WQ_STATE_MASK) == 0x1;
 }
 
+static int
+idxd_pci_dev_stop(struct rte_dma_dev *dev)
+{
+	struct idxd_dmadev *idxd = dev->fp_obj->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IDXD_PMD_WARN("Work queue %d already disabled", idxd->qid);
+		return 0;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IDXD_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : -err_code;
+	}
+	IDXD_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+
+	return 0;
+}
+
+static int
+idxd_pci_dev_start(struct rte_dma_dev *dev)
+{
+	struct idxd_dmadev *idxd = dev->fp_obj->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IDXD_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->desc_ring == NULL) {
+		IDXD_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IDXD_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : -err_code;
+	}
+	IDXD_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static int
 idxd_pci_dev_close(struct rte_dma_dev *dev)
 {
@@ -88,6 +137,8 @@ static const struct rte_dma_dev_ops idxd_pci_ops = {
 	.dev_configure = idxd_configure,
 	.vchan_setup = idxd_vchan_setup,
 	.dev_info_get = idxd_info_get,
+	.dev_start = idxd_pci_dev_start,
+	.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.30.2