From: Bruce Richardson
To: dev@dpdk.org
Cc: cheng1.jiang@intel.com, patrick.fu@intel.com, kevin.laatz@intel.com, Bruce Richardson
Date: Tue, 21 Jul 2020 10:51:39 +0100
Message-Id: <20200721095140.719297-20-bruce.richardson@intel.com>
In-Reply-To: <20200721095140.719297-1-bruce.richardson@intel.com>
References: <20200721095140.719297-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH 20.11 19/20] raw/ioat: add xstats tracking for idxd devices

Update the relevant stats in the data path functions and point the
overall device struct's xstats function pointers to the existing ioat
functions. At this point, all hooks needed to support the existing unit
tests are in place, so call those tests for each device.
Signed-off-by: Bruce Richardson
---
 drivers/raw/ioat/idxd_pci.c            |  3 +++
 drivers/raw/ioat/idxd_vdev.c           |  3 +++
 drivers/raw/ioat/ioat_rawdev_test.c    |  2 +-
 drivers/raw/ioat/rte_ioat_rawdev_fns.h | 26 ++++++++++++++++----------
 4 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 2b5a0e1c8..a426897b2 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -108,6 +108,9 @@ static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 8b7c2cacd..aa2693368 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -87,6 +87,9 @@ static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_start = idxd_vdev_start,
 		.dev_stop = idxd_vdev_stop,
 		.dev_info_get = idxd_dev_info_get,
+		.xstats_get = ioat_xstats_get,
+		.xstats_get_names = ioat_xstats_get_names,
+		.xstats_reset = ioat_xstats_reset,
 };
 
 static void *
diff --git a/drivers/raw/ioat/ioat_rawdev_test.c b/drivers/raw/ioat/ioat_rawdev_test.c
index 7864138fb..678d135c4 100644
--- a/drivers/raw/ioat/ioat_rawdev_test.c
+++ b/drivers/raw/ioat/ioat_rawdev_test.c
@@ -258,5 +258,5 @@ int
 idxd_rawdev_test(uint16_t dev_id)
 {
 	rte_rawdev_dump(dev_id, stdout);
-	return 0;
+	return ioat_rawdev_test(dev_id);
 }
diff --git a/drivers/raw/ioat/rte_ioat_rawdev_fns.h b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
index 813c1a157..2ddd0a024 100644
--- a/drivers/raw/ioat/rte_ioat_rawdev_fns.h
+++ b/drivers/raw/ioat/rte_ioat_rawdev_fns.h
@@ -182,6 +182,8 @@ struct rte_idxd_user_hdl {
  */
 struct rte_idxd_rawdev {
 	enum rte_ioat_dev_type type;
+	struct rte_ioat_xstats xstats;
+
 	void *portal; /* address to write the batch descriptor */
 
 	/* counters to track the batches and the individual op handles */
@@ -330,20 +332,16 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 			IDXD_FLAG_CACHE_CONTROL;
 
 	/* check for room in the handle ring */
-	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (((idxd->next_free_hdl + 1) & (idxd->hdl_ring_sz - 1)) == idxd->next_ret_hdl)
+		goto failed;
+
 	if (b->op_count >= BATCH_SIZE) {
 		/* TODO change to submit batch and move on */
-		rte_errno = ENOSPC;
-		return 0;
+		goto failed;
 	}
 	/* check that we can actually use the current batch */
-	if (b->submitted) {
-		rte_errno = ENOSPC;
-		return 0;
-	}
+	if (b->submitted)
+		goto failed;
 
 	/* write the descriptor */
 	b->ops[b->op_count++] = (struct rte_idxd_hw_desc){
@@ -362,7 +360,13 @@ __idxd_enqueue_copy(int dev_id, rte_iova_t src, rte_iova_t dst,
 	if (++idxd->next_free_hdl == idxd->hdl_ring_sz)
 		idxd->next_free_hdl = 0;
 
+	idxd->xstats.enqueued++;
 	return 1;
+
+failed:
+	idxd->xstats.enqueue_failed++;
+	rte_errno = ENOSPC;
+	return 0;
 }
 
 static __rte_always_inline void
@@ -389,6 +393,7 @@ __idxd_perform_ops(int dev_id)
 
 	if (++idxd->next_batch == idxd->batch_ring_sz)
 		idxd->next_batch = 0;
+	idxd->xstats.started = idxd->xstats.enqueued;
 }
 
 static __rte_always_inline int
@@ -425,6 +430,7 @@ __idxd_completed_ops(int dev_id, uint8_t max_ops,
 
 	idxd->next_ret_hdl = h_idx;
 
+	idxd->xstats.completed += n;
 	return n;
 }
 
-- 
2.25.1
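
[Editor's note] For context on how the counters updated above surface to an
application, here is a minimal sketch using the generic rawdev xstats API
that the ops hooks added in this patch plug into. The helper name, the
device ID handling, and the array bound of 16 are illustrative assumptions,
not part of the patch; the exact stat name strings come from the existing
ioat xstats implementation.

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	#include <rte_rawdev.h>

	/* Hypothetical helper: dump all xstats of one ioat/idxd rawdev.
	 * The counters bumped in __idxd_enqueue_copy(), __idxd_perform_ops()
	 * and __idxd_completed_ops() above are reported through this path. */
	static void
	dump_ioat_xstats(uint16_t dev_id)
	{
		struct rte_rawdev_xstats_name names[16]; /* arbitrary upper bound */
		unsigned int ids[16];
		uint64_t values[16];
		int n, i;

		/* first fetch the stat names; return value is the count available */
		n = rte_rawdev_xstats_names_get(dev_id, names, 16);
		if (n <= 0 || n > 16)
			return;

		/* stat IDs are simply the positions in the name list */
		for (i = 0; i < n; i++)
			ids[i] = i;

		if (rte_rawdev_xstats_get(dev_id, ids, values, n) != n)
			return;

		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n", names[i].name, values[i]);
	}

The two-call pattern (names once, then values by ID) lets a monitoring loop
cache the IDs and poll only the values; rte_rawdev_xstats_reset() uses the
same ID-based addressing, which is what the ioat_xstats_reset hook serves.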