From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: cheng1.jiang@intel.com, patrick.fu@intel.com, kevin.laatz@intel.com,
	Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 21 Jul 2020 10:51:34 +0100
Message-Id: <20200721095140.719297-15-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200721095140.719297-1-bruce.richardson@intel.com>
References: <20200721095140.719297-1-bruce.richardson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH 20.11 14/20] raw/ioat: add start and stop
	functions for idxd devices
List-Id: DPDK patches and discussions
Add the start and stop functions for DSA hardware devices.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
 drivers/raw/ioat/idxd_pci.c  | 52 ++++++++++++++++++++++++++++++++++++
 drivers/raw/ioat/idxd_vdev.c | 50 ++++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+)

diff --git a/drivers/raw/ioat/idxd_pci.c b/drivers/raw/ioat/idxd_pci.c
index 762efd5ac..6655cf9b7 100644
--- a/drivers/raw/ioat/idxd_pci.c
+++ b/drivers/raw/ioat/idxd_pci.c
@@ -51,10 +51,62 @@ idxd_is_wq_enabled(struct idxd_rawdev *idxd)
 	return (state & WQ_STATE_MASK) == 0x1;
 }
 
+static void
+idxd_pci_dev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (!idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Work queue %d already disabled", idxd->qid);
+		return;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_disable_wq);
+	if (err_code || idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed disabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return;
+	}
+	IOAT_PMD_DEBUG("Work queue %d disabled OK", idxd->qid);
+
+	return;
+}
+
+static int
+idxd_pci_dev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	uint8_t err_code;
+
+	if (idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_WARN("WQ %d already enabled", idxd->qid);
+		return 0;
+	}
+
+	if (idxd->public.batch_ring == NULL) {
+		IOAT_PMD_ERR("WQ %d has not been fully configured", idxd->qid);
+		return -EINVAL;
+	}
+
+	err_code = idxd_pci_dev_command(idxd, idxd_enable_wq);
+	if (err_code || !idxd_is_wq_enabled(idxd)) {
+		IOAT_PMD_ERR("Failed enabling work queue %d, error code: %#x",
+				idxd->qid, err_code);
+		return err_code == 0 ? -1 : err_code;
+	}
+
+	IOAT_PMD_DEBUG("Work queue %d enabled OK", idxd->qid);
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_pci_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_pci_dev_start,
+		.dev_stop = idxd_pci_dev_stop,
 };
 
 /* each portal uses 4 x 4k pages */
diff --git a/drivers/raw/ioat/idxd_vdev.c b/drivers/raw/ioat/idxd_vdev.c
index 90ad11006..ab7efd216 100644
--- a/drivers/raw/ioat/idxd_vdev.c
+++ b/drivers/raw/ioat/idxd_vdev.c
@@ -32,10 +32,60 @@ struct idxd_vdev_args {
 	uint8_t wq_id;
 };
 
+static void
+idxd_vdev_stop(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	int ret;
+
+	if (!accfg_wq_is_enabled(idxd->u.vdev.wq)) {
+		IOAT_PMD_ERR("Work queue %s already disabled",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return;
+	}
+
+	ret = accfg_wq_disable(idxd->u.vdev.wq);
+	if (ret) {
+		IOAT_PMD_INFO("Work queue %s not disabled, continuing...",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return;
+	}
+	IOAT_PMD_DEBUG("Disabling work queue %s OK",
+			accfg_wq_get_devname(idxd->u.vdev.wq));
+
+	return;
+}
+
+static int
+idxd_vdev_start(struct rte_rawdev *dev)
+{
+	struct idxd_rawdev *idxd = dev->dev_private;
+	int ret;
+
+	if (accfg_wq_is_enabled(idxd->u.vdev.wq)) {
+		IOAT_PMD_ERR("Work queue %s already enabled",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return 0;
+	}
+
+	ret = accfg_wq_enable(idxd->u.vdev.wq);
+	if (ret) {
+		IOAT_PMD_ERR("Error enabling work queue %s",
+				accfg_wq_get_devname(idxd->u.vdev.wq));
+		return -1;
+	}
+	IOAT_PMD_DEBUG("Enabling work queue %s OK",
+			accfg_wq_get_devname(idxd->u.vdev.wq));
+
+	return 0;
+}
+
 static const struct rte_rawdev_ops idxd_vdev_ops = {
 		.dev_selftest = idxd_rawdev_test,
 		.dump = idxd_dev_dump,
 		.dev_configure = idxd_dev_configure,
+		.dev_start = idxd_vdev_start,
+		.dev_stop = idxd_vdev_stop,
 };
 
 static void *
-- 
2.25.1
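
Editor's note (not part of the patch): the start path above follows a three-way return convention: 0 when the work queue is already enabled or comes up cleanly, -EINVAL when dev_configure has not run, and otherwise the hardware error code, with a plain -1 substituted when the enable command reports success but the queue still fails the enabled check. The sketch below isolates that convention outside the driver; `sketch_dev_start` and its parameters are hypothetical stand-ins for the hardware state and are not part of DPDK.

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical stand-in for the return-code logic of
 * idxd_pci_dev_start(). The parameters model the hardware state the
 * real driver would read: whether the WQ was already enabled, whether
 * configuration ran, what error the enable command reports, and
 * whether a "successful" command actually brings the WQ up.
 */
static int
sketch_dev_start(int wq_enabled_before, int ring_configured,
		int cmd_err, int cmd_takes_effect)
{
	int err_code;
	int wq_enabled = wq_enabled_before;

	if (wq_enabled)
		return 0;		/* already enabled: success, not an error */

	if (!ring_configured)
		return -EINVAL;		/* dev_configure must run before start */

	/* issue the enable-WQ command and re-check the WQ state */
	err_code = cmd_err;
	if (err_code == 0 && cmd_takes_effect)
		wq_enabled = 1;

	if (err_code || !wq_enabled)
		/* command "succeeded" but WQ stayed down: force an error */
		return err_code == 0 ? -1 : err_code;

	return 0;
}
```

The `err_code == 0 ? -1 : err_code` fallback matters because the state re-check can fail even when the command itself reports success; returning 0 there would tell the rawdev layer the device started when it did not.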