From: Kevin Laatz <kevin.laatz@intel.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, bruce.richardson@intel.com, fengchengwen@huawei.com,
 jerinj@marvell.com, conor.walsh@intel.com,
 Kevin Laatz <kevin.laatz@intel.com>
Date: Tue, 19 Oct 2021 11:25:31 +0000
Message-Id: <20211019112540.1825132-8-kevin.laatz@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211019112540.1825132-1-kevin.laatz@intel.com>
References: <20210827172048.558704-1-kevin.laatz@intel.com>
 <20211019112540.1825132-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v9 07/16] dma/idxd: add configure and info_get
 functions

Add functions for device configuration. The info_get function is
included in this patch since it is useful for verifying that
configuration succeeded.

The documentation is also updated with device configuration usage
information.
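
For illustration, a short sketch of such a check against the public
dmadev API (the helper name, device ID handling and printed fields are
assumptions for this example, not part of the patch):

#include <stdio.h>
#include <rte_dmadev.h>

/* Sketch: read back a device's limits and capabilities to confirm it
 * probed as expected. For idxd, expect max_vchans == 1 and a 64-4096
 * descriptor range, matching idxd_info_get() below. */
static int
idxd_check_info(int16_t dev_id)
{
	struct rte_dma_info info;

	if (rte_dma_info_get(dev_id, &info) < 0)
		return -1;
	printf("vchans: %u, desc range: %u-%u, SVA: %s\n",
			info.max_vchans, info.min_desc, info.max_desc,
			(info.dev_capa & RTE_DMA_CAPA_SVA) ? "yes" : "no");
	return 0;
}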

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 doc/guides/dmadevs/idxd.rst      | 15 +++++++
 drivers/dma/idxd/idxd_bus.c      |  3 ++
 drivers/dma/idxd/idxd_common.c   | 71 ++++++++++++++++++++++++++++++++
 drivers/dma/idxd/idxd_internal.h |  6 +++
 drivers/dma/idxd/idxd_pci.c      |  3 ++
 5 files changed, 98 insertions(+)

diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index ce33e2857a..62ffd39ee0 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -120,3 +120,18 @@ use a subset of configured queues.
 Once probed successfully, irrespective of kernel driver, the device will appear as a ``dmadev``,
 that is a "DMA device type" inside DPDK, and can be accessed using APIs from the
 ``rte_dmadev`` library.
+
+Using IDXD DMAdev Devices
+--------------------------
+
+Applications access the devices through the generic ``rte_dmadev`` API.
+
+Device Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+IDXD configuration requirements:
+
+* ``ring_size`` must be a power of two, between 64 and 4096.
+* Only one ``vchan`` is supported per device (work queue).
+* IDXD devices do not support silent mode.
+* The transfer direction must be set to ``RTE_DMA_DIR_MEM_TO_MEM``, since only
+  memory-to-memory transfers are supported.
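
For illustration, a minimal configuration sequence satisfying the
requirements listed above might look like the following sketch (the
device ID, the ring size of 1024 and the error handling are assumptions
for this example):

#include <rte_dmadev.h>

/* Sketch: configure an idxd dmadev within the documented constraints. */
static int
idxd_setup(int16_t dev_id)
{
	/* exactly one vchan; silent mode is unsupported, leave it disabled */
	struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	/* ring size must be a power of two between 64 and 4096 */
	struct rte_dma_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024,
	};

	if (rte_dma_configure(dev_id, &dev_conf) < 0)
		return -1;
	if (rte_dma_vchan_setup(dev_id, 0, &vchan_conf) < 0)
		return -1;
	return rte_dma_start(dev_id);
}
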
diff --git a/drivers/dma/idxd/idxd_bus.c b/drivers/dma/idxd/idxd_bus.c
index 3c0837ec52..b2acdac4f9 100644
--- a/drivers/dma/idxd/idxd_bus.c
+++ b/drivers/dma/idxd/idxd_bus.c
@@ -96,6 +96,9 @@ idxd_dev_close(struct rte_dma_dev *dev)
 static const struct rte_dma_dev_ops idxd_bus_ops = {
 		.dev_close = idxd_dev_close,
 		.dev_dump = idxd_dump,
+		.dev_configure = idxd_configure,
+		.vchan_setup = idxd_vchan_setup,
+		.dev_info_get = idxd_info_get,
 };
 
 static void *
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index f972260a56..b0c79a2e42 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -39,6 +39,77 @@ idxd_dump(const struct rte_dma_dev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *info, uint32_t size)
+{
+	struct idxd_dmadev *idxd = dev->fp_obj->dev_private;
+
+	if (size < sizeof(*info))
+		return -EINVAL;
+
+	*info = (struct rte_dma_info) {
+			.dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_HANDLES_ERRORS |
+				RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_FILL,
+			.max_vchans = 1,
+			.max_desc = 4096,
+			.min_desc = 64,
+	};
+	if (idxd->sva_support)
+		info->dev_capa |= RTE_DMA_CAPA_SVA;
+	return 0;
+}
+
+int
+idxd_configure(struct rte_dma_dev *dev __rte_unused, const struct rte_dma_conf *dev_conf,
+		uint32_t conf_sz)
+{
+	if (sizeof(struct rte_dma_conf) != conf_sz)
+		return -EINVAL;
+
+	if (dev_conf->nb_vchans != 1)
+		return -EINVAL;
+	return 0;
+}
+
+int
+idxd_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan __rte_unused,
+		const struct rte_dma_vchan_conf *qconf, uint32_t qconf_sz)
+{
+	struct idxd_dmadev *idxd = dev->fp_obj->dev_private;
+	uint16_t max_desc = qconf->nb_desc;
+
+	if (sizeof(struct rte_dma_vchan_conf) != qconf_sz)
+		return -EINVAL;
+
+	idxd->qcfg = *qconf;
+
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IDXD_PMD_DEBUG("DMA dev %u using %u descriptors", dev->data->dev_id, max_desc);
+	idxd->desc_ring_mask = max_desc - 1;
+	idxd->qcfg.nb_desc = max_desc;
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(idxd->desc_ring);
+
+	/* allocate the descriptor ring at 2x size as batches can't wrap */
+	idxd->desc_ring = rte_zmalloc(NULL, sizeof(*idxd->desc_ring) * max_desc * 2, 0);
+	if (idxd->desc_ring == NULL)
+		return -ENOMEM;
+	idxd->desc_iova = rte_mem_virt2iova(idxd->desc_ring);
+
+	idxd->batch_idx_read = 0;
+	idxd->batch_idx_write = 0;
+	idxd->batch_start = 0;
+	idxd->batch_size = 0;
+	idxd->ids_returned = 0;
+	idxd->ids_avail = 0;
+
+	memset(idxd->batch_comp_ring, 0, sizeof(*idxd->batch_comp_ring) *
+			(idxd->max_batches + 1));
+	return 0;
+}
+
 int
 idxd_dmadev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_dmadev *base_idxd,
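
An aside on the hunk above: rounding nb_desc up to a power of two lets
the driver wrap ring indices with desc_ring_mask using a single AND
rather than a modulo. A tiny illustrative sketch (variable names here
are hypothetical, not driver code):

	uint16_t mask = nb_desc - 1;        /* e.g. 1024 -> 0x03ff */
	uint16_t next = (idx + 1) & mask;   /* same as (idx + 1) % nb_desc */
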
diff --git a/drivers/dma/idxd/idxd_internal.h b/drivers/dma/idxd/idxd_internal.h
index 5e253fdfbc..1dbe31abcd 100644
--- a/drivers/dma/idxd/idxd_internal.h
+++ b/drivers/dma/idxd/idxd_internal.h
@@ -81,5 +81,11 @@ struct idxd_dmadev {
 int idxd_dmadev_create(const char *name, struct rte_device *dev,
 		const struct idxd_dmadev *base_idxd, const struct rte_dma_dev_ops *ops);
 int idxd_dump(const struct rte_dma_dev *dev, FILE *f);
+int idxd_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *dev_conf,
+		uint32_t conf_sz);
+int idxd_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+		const struct rte_dma_vchan_conf *qconf, uint32_t qconf_sz);
+int idxd_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info,
+		uint32_t size);
 
 #endif /* _IDXD_INTERNAL_H_ */
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 96c8c65cc0..681bb55efe 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -84,6 +84,9 @@ idxd_pci_dev_close(struct rte_dma_dev *dev)
 static const struct rte_dma_dev_ops idxd_pci_ops = {
 	.dev_close = idxd_pci_dev_close,
 	.dev_dump = idxd_dump,
+	.dev_configure = idxd_configure,
+	.vchan_setup = idxd_vchan_setup,
+	.dev_info_get = idxd_info_get,
 };
 
 /* each portal uses 4 x 4k pages */
-- 
2.30.2