From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id AB13EA04BA;
	Wed,  7 Oct 2020 18:31:45 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 8B4BC1B868;
	Wed,  7 Oct 2020 18:31:44 +0200 (CEST)
Received: from mga05.intel.com (mga05.intel.com [192.55.52.43])
 by dpdk.org (Postfix) with ESMTP id C93511B755
 for <dev@dpdk.org>; Wed,  7 Oct 2020 18:31:41 +0200 (CEST)
IronPort-SDR: gswaBf0BA1y0CMmDbly/B2Ei64KFu3iWA8or7RJrhTVctEBD3ckEG/E5wgQTxRTwJQiTHvN7eg
 6VYpJ7PTDwrg==
X-IronPort-AV: E=McAfee;i="6000,8403,9767"; a="249724469"
X-IronPort-AV: E=Sophos;i="5.77,347,1596524400"; d="scan'208";a="249724469"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 07 Oct 2020 09:31:21 -0700
IronPort-SDR: Y4jr1MrMs7egjZ6JouHyWMEbRUFYhEjWGFwSBKEEj3msl0H5tMfuJGM3eJMg1naUOVLSVsYgaX
 BBA4tqc0s00g==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.77,347,1596524400"; d="scan'208";a="354965973"
Received: from silpixa00399126.ir.intel.com ([10.237.222.4])
 by orsmga007.jf.intel.com with ESMTP; 07 Oct 2020 09:31:19 -0700
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: patrick.fu@intel.com, thomas@monjalon.net,
 Bruce Richardson <bruce.richardson@intel.com>,
 Kevin Laatz <kevin.laatz@intel.com>, Radu Nicolau <radu.nicolau@intel.com>
Date: Wed,  7 Oct 2020 17:30:15 +0100
Message-Id: <20201007163023.2817-18-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201007163023.2817-1-bruce.richardson@intel.com>
References: <20200721095140.719297-1-bruce.richardson@intel.com>
 <20201007163023.2817-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v5 17/25] raw/ioat: add configure function for
	idxd devices
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Add a configure function for idxd devices, taking the same parameters as the
existing configure function for ioat. The ring_size parameter is used to
compute the maximum number of batches the driver needs to support, since the
hardware processes descriptors one burst (batch) at a time.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Kevin Laatz <kevin.laatz@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
---
 drivers/raw/ioat/idxd_pci.c            |  1 +
 drivers/raw/ioat/idxd_vdev.c           |  1 +
 drivers/raw/ioat/ioat_common.c         | 64 ++++++++++++++++++++++++++
 drivers/raw/ioat/ioat_private.h        |  3 ++
 drivers/raw/ioat/rte_ioat_rawdev_fns.h |  1 +
 5 files changed, 70 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 9bee92766..b173c5ae3 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -55,6 +55,7 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index ba78eee90..3dad1473b 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -34,6 +34,7 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_close = idxd_rawdev_close,
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
+		.dev_configure = idxd_dev_configure,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_common.c b/drivers/raw/ioat/ioat_common.c
index 672241351..5173c331c 100644
--- a/drivers/raw/ioat/ioat_common.c
+++ b/drivers/raw/ioat/ioat_common.c
@@ -44,6 +44,70 @@ idxd_dev_dump(struct rte_rawdev *dev, FILE *f)
 	return 0;
 }
 
+int
+idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	struct rte_idxd_rawdev *rte_idxd = &idxd->public;
+	struct rte_ioat_rawdev_config *cfg = config;
+	uint16_t max_desc = cfg->ring_size;
+	uint16_t max_batches = max_desc / BATCH_SIZE;
+	uint16_t i;
+
+	if (config_size != sizeof(*cfg))
+		return -EINVAL;
+
+	if (dev->started) {
+		IOAT_PMD_ERR("%s: Error, device is started.", __func__);
+		return -EAGAIN;
+	}
+
+	rte_idxd->hdls_disable = cfg->hdls_disable;
+
+	/* limit the batches to what can be stored in hardware */
+	if (max_batches > idxd->max_batches) {
+		IOAT_PMD_DEBUG("Ring size of %u is too large for this device, need to limit to %u batches of %u",
+				max_desc, idxd->max_batches, BATCH_SIZE);
+		max_batches = idxd->max_batches;
+		max_desc = max_batches * BATCH_SIZE;
+	}
+	if (!rte_is_power_of_2(max_desc))
+		max_desc = rte_align32pow2(max_desc);
+	IOAT_PMD_DEBUG("Rawdev %u using %u descriptors in %u batches",
+			dev->dev_id, max_desc, max_batches);
+
+	/* in case we are reconfiguring a device, free any existing memory */
+	rte_free(rte_idxd->batch_ring);
+	rte_free(rte_idxd->hdl_ring);
+
+	rte_idxd->batch_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->batch_ring) * max_batches, 0);
+	if (rte_idxd->batch_ring == NULL)
+		return -ENOMEM;
+
+	rte_idxd->hdl_ring = rte_zmalloc(NULL,
+			sizeof(*rte_idxd->hdl_ring) * max_desc, 0);
+	if (rte_idxd->hdl_ring == NULL) {
+		rte_free(rte_idxd->batch_ring);
+		rte_idxd->batch_ring = NULL;
+		return -ENOMEM;
+	}
+	rte_idxd->batch_ring_sz = max_batches;
+	rte_idxd->hdl_ring_sz = max_desc;
+
+	for (i = 0; i < rte_idxd->batch_ring_sz; i++) {
+		struct rte_idxd_desc_batch *b = &rte_idxd->batch_ring[i];
+		b->batch_desc.completion = rte_mem_virt2iova(&b->comp);
+		b->batch_desc.desc_addr = rte_mem_virt2iova(&b->null_desc);
+		b->batch_desc.op_flags = (idxd_op_batch << IDXD_CMD_OP_SHIFT) |
+				IDXD_FLAG_COMPLETION_ADDR_VALID |
+				IDXD_FLAG_REQUEST_COMPLETION;
+	}
+
+	return 0;
+}
+
 int
 idxd_rawdev_create(const char *name, struct rte_device *dev,
 		   const struct idxd_rawdev *base_idxd,
diff --git a/drivers/raw/ioat/ioat_private.h b/drivers/raw/ioat/ioat_private.h
index f521c85a1..aba70d8d7 100644
--- a/drivers/raw/ioat/ioat_private.h
+++ b/drivers/raw/ioat/ioat_private.h
@@ -59,6 +59,9 @@ extern int idxd_rawdev_create(const char *name, struct rte_device *dev,
 
 extern int idxd_rawdev_close(struct rte_rawdev *dev);
 
+extern int idxd_dev_configure(const struct rte_rawdev *dev,
+		rte_rawdev_obj_t config, size_t config_size);
+
 extern int idxd_rawdev_test(uint16_t dev_id);
 
 extern int idxd_dev_dump(struct rte_rawdev *dev, FILE *f);
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 178c432dd..e9cdce016 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -187,6 +187,7 @@ struct rte_idxd_rawdev {
 	uint16_t next_ret_hdl;   /* the next user hdl to return */
 	uint16_t last_completed_hdl; /* the last user hdl that has completed */
 	uint16_t next_free_hdl;  /* where the handle for next op will go */
+	uint16_t hdls_disable;   /* disable tracking completion handles */
 
 	struct rte_idxd_user_hdl *hdl_ring;
 	struct rte_idxd_desc_batch *batch_ring;
-- 
2.25.1