From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 18391A0546;
	Fri, 30 Apr 2021 17:07:45 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 59E17411E2;
	Fri, 30 Apr 2021 17:07:08 +0200 (CEST)
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by mails.dpdk.org (Postfix) with ESMTP id 8EC1D4112E
 for <dev@dpdk.org>; Fri, 30 Apr 2021 17:07:00 +0200 (CEST)
IronPort-SDR: ZESd8NspbaBCij6TEDakBpeaX8pDj2PgBcKu9kc/coLwWGWjAH6Zo3o27cTZiMlpNFFAizxh4p
 B+kq9qyPMQww==
X-IronPort-AV: E=McAfee;i="6200,9189,9970"; a="261240840"
X-IronPort-AV: E=Sophos;i="5.82,263,1613462400"; d="scan'208";a="261240840"
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 30 Apr 2021 08:07:00 -0700
IronPort-SDR: I6d4y2DxB3+VcTacsh+SPKuOW8hLGdDJaCiyYxW02oHWVvBMrEZEDzH5/JKAKy3QygWcMO57X1
 0PxiV7biQ/9w==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.82,263,1613462400"; d="scan'208";a="456011359"
Received: from silpixa00399126.ir.intel.com ([10.237.223.78])
 by FMSMGA003.fm.intel.com with ESMTP; 30 Apr 2021 08:06:59 -0700
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: kevin.laatz@intel.com,
	sunil.pai.g@intel.com,
	jiayu.hu@intel.com
Date: Fri, 30 Apr 2021 16:06:32 +0100
Message-Id: <20210430150637.362610-8-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210430150637.362610-1-bruce.richardson@intel.com>
References: <20210318182042.43658-1-bruce.richardson@intel.com>
 <20210430150637.362610-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v4 07/12] raw/ioat: allow perform operations
 function to return error
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

From: Kevin Laatz <kevin.laatz@intel.com>

Change the return type of the rte_ioat_perform_ops() function from void
to int to allow the possibility of returning an error code in the future,
should it become necessary.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
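Note: a minimal caller-side sketch (not part of this patch) of how the new
int return could be checked once error codes are actually reported; the
dev_id variable and the handle_submit_error() helper are hypothetical and
shown only for illustration:

	#include <rte_ioat_rawdev.h>

	/* after enqueuing copies with rte_ioat_enqueue_copy(), ring the
	 * doorbell; with this change the call can report a failure */
	if (rte_ioat_perform_ops(dev_id) != 0)
		handle_submit_error(dev_id); /* hypothetical error path */
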
 drivers/raw/ioat/rte_ioat_rawdev.h     |  4 +++-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 11 +++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/raw/ioat/rte_ioat_rawdev.h b/drivers/raw/ioat/rte_ioat_rawdev.h
index f9e8425a7f..e5a22a0799 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev.h
@@ -124,8 +124,10 @@ rte_ioat_fence(int dev_id);
  *
  * @param dev_id
  *   The rawdev device id of the ioat instance
+ * @return
+ *   0 on success. Non-zero return on error.
  */
-static inline void
+static inline int
 __rte_experimental
 rte_ioat_perform_ops(int dev_id);
 
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index e96edc9053..477c1b7b41 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -291,7 +291,7 @@ __ioat_fence(int dev_id)
 /*
  * Trigger hardware to begin performing enqueued operations
  */
-static __rte_always_inline void
+static __rte_always_inline int
 __ioat_perform_ops(int dev_id)
 {
 	struct rte_ioat_rawdev *ioat =
@@ -301,6 +301,8 @@ __ioat_perform_ops(int dev_id)
 	rte_compiler_barrier();
 	*ioat->doorbell = ioat->next_write;
 	ioat->xstats.started = ioat->xstats.enqueued;
+
+	return 0;
 }
 
 /**
@@ -462,7 +464,7 @@ __idxd_movdir64b(volatile void *dst, const void *src)
 			: "a" (dst), "d" (src));
 }
 
-static __rte_always_inline void
+static __rte_always_inline int
 __idxd_perform_ops(int dev_id)
 {
 	struct rte_idxd_rawdev *idxd =
@@ -470,7 +472,7 @@ __idxd_perform_ops(int dev_id)
 	struct rte_idxd_desc_batch *b = &idxd->batch_ring[idxd->next_batch];
 
 	if (b->submitted || b->op_count == 0)
-		return;
+		return 0;
 	b->hdl_end = idxd->next_free_hdl;
 	b->comp.status = 0;
 	b->submitted = 1;
@@ -480,6 +482,7 @@ __idxd_perform_ops(int dev_id)
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
 	idxd->xstats.started = idxd->xstats.enqueued;
+	return 0;
 }
 
 static __rte_always_inline int
@@ -558,7 +561,7 @@ rte_ioat_fence(int dev_id)
 		return __ioat_fence(dev_id);
 }
 
-static inline void
+static inline int
 rte_ioat_perform_ops(int dev_id)
 {
 	enum rte_ioat_dev_type *type =
-- 
2.30.2