From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: conor.walsh@intel.com, kevin.laatz@intel.com, fengchengwen@huawei.com,
 jerinj@marvell.com, Bruce Richardson <bruce.richardson@intel.com>
Date: Wed, 13 Oct 2021 16:17:25 +0100
Message-Id: <20211013151736.762378-3-bruce.richardson@intel.com>
In-Reply-To: <20211013151736.762378-1-bruce.richardson@intel.com>
References: <20210924102942.2878051-1-bruce.richardson@intel.com>
 <20211013151736.762378-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v7 02/13] dma/skeleton: add channel status function

To avoid timing errors in the unit tests, the skeleton driver needs to
provide a vchan_status function that reports when a channel is idle.
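
As a rough, hypothetical illustration (not part of the diff below), a test
could use the new op through the public rte_dma_vchan_status() API to wait
for a channel to drain before checking results; the wait_for_idle() helper
name, the retry bound and the sleep interval are assumptions made for this
sketch only:

    /*
     * Hypothetical sketch: poll a dmadev virtual channel until it reports
     * idle before verifying results. Retry bound and sleep interval are
     * arbitrary values chosen for illustration.
     */
    #include <rte_dmadev.h>
    #include <rte_cycles.h>

    static int
    wait_for_idle(int16_t dev_id, uint16_t vchan)
    {
    	enum rte_dma_vchan_status st;
    	int retries = 1000;

    	do {
    		/* returns <0 if the driver does not implement the op */
    		if (rte_dma_vchan_status(dev_id, vchan, &st) < 0)
    			return -1;
    		if (st == RTE_DMA_VCHAN_IDLE)
    			return 0;	/* all submitted jobs have completed */
    		rte_delay_us_sleep(10);	/* back off before polling again */
    	} while (--retries > 0);

    	return -1;	/* channel still busy after the retry budget */
    }

Pairing the __ATOMIC_RELEASE increment in the copy thread with the
__ATOMIC_ACQUIRE load in skeldma_vchan_status() (see the diff below) is
what lets a polling loop like this observe completions reliably.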
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/dma/skeleton/skeleton_dmadev.c | 18 +++++++++++++++++-
 drivers/dma/skeleton/skeleton_dmadev.h |  2 +-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c
index 22a73c6178..dd2f1c9b57 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.c
+++ b/drivers/dma/skeleton/skeleton_dmadev.c
@@ -79,7 +79,7 @@ cpucopy_thread(void *param)
 		hw->zero_req_count = 0;
 
 		rte_memcpy(desc->dst, desc->src, desc->len);
-		hw->completed_count++;
+		__atomic_add_fetch(&hw->completed_count, 1, __ATOMIC_RELEASE);
 
 		(void)rte_ring_enqueue(hw->desc_completed, (void *)desc);
 	}
@@ -257,6 +257,21 @@ skeldma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 	return vchan_setup(hw, conf->nb_desc);
 }
 
+static int
+skeldma_vchan_status(const struct rte_dma_dev *dev,
+		uint16_t vchan, enum rte_dma_vchan_status *status)
+{
+	struct skeldma_hw *hw = dev->data->dev_private;
+
+	RTE_SET_USED(vchan);
+
+	*status = RTE_DMA_VCHAN_IDLE;
+	if (hw->submitted_count != __atomic_load_n(&hw->completed_count, __ATOMIC_ACQUIRE)
+			|| hw->zero_req_count == 0)
+		*status = RTE_DMA_VCHAN_ACTIVE;
+	return 0;
+}
+
 static int
 skeldma_stats_get(const struct rte_dma_dev *dev, uint16_t vchan,
 		  struct rte_dma_stats *stats, uint32_t stats_sz)
@@ -424,6 +439,7 @@ static const struct rte_dma_dev_ops skeldma_ops = {
 	.dev_close = skeldma_close,
 
 	.vchan_setup = skeldma_vchan_setup,
+	.vchan_status = skeldma_vchan_status,
 
 	.stats_get = skeldma_stats_get,
 	.stats_reset = skeldma_stats_reset,
diff --git a/drivers/dma/skeleton/skeleton_dmadev.h b/drivers/dma/skeleton/skeleton_dmadev.h
index eaa52364bf..91eb5460fc 100644
--- a/drivers/dma/skeleton/skeleton_dmadev.h
+++ b/drivers/dma/skeleton/skeleton_dmadev.h
@@ -54,7 +54,7 @@ struct skeldma_hw {
 
 	/* Cache delimiter for cpucopy thread's operation data */
 	char cache2 __rte_cache_aligned;
-	uint32_t zero_req_count;
+	volatile uint32_t zero_req_count;
 	uint64_t completed_count;
 };
-- 
2.30.2