From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: conor.walsh@intel.com, kevin.laatz@intel.com, fengchengwen@huawei.com,
 jerinj@marvell.com, Bruce Richardson <bruce.richardson@intel.com>
Date: Wed, 13 Oct 2021 16:17:35 +0100
Message-Id: <20211013151736.762378-13-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211013151736.762378-1-bruce.richardson@intel.com>
References: <20210924102942.2878051-1-bruce.richardson@intel.com>
 <20211013151736.762378-1-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v7 12/13] app/test: add dmadev fill tests

From: Kevin Laatz <kevin.laatz@intel.com>

For DMA devices which support the fill operation, run unit tests to
verify that fill behaviour is correct.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
---
 app/test/test_dmadev.c | 49 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
index 8e61216f04..27d2e7a5c4 100644
--- a/app/test/test_dmadev.c
+++ b/app/test/test_dmadev.c
@@ -623,7 +623,51 @@ test_completion_handling(int16_t dev_id, uint16_t vchan)
 {
 	return test_completion_status(dev_id, vchan, false) /* without fences */
 			|| test_completion_status(dev_id, vchan, true); /* with fences */
+}
+
+static int
+test_enqueue_fill(int16_t dev_id, uint16_t vchan)
+{
+	const unsigned int lengths[] = {8, 64, 1024, 50, 100, 89};
+	struct rte_mbuf *dst;
+	char *dst_data;
+	uint64_t pattern = 0xfedcba9876543210;
+	unsigned int i, j;
+
+	dst = rte_pktmbuf_alloc(pool);
+	if (dst == NULL)
+		ERR_RETURN("Failed to allocate mbuf\n");
+	dst_data = rte_pktmbuf_mtod(dst, char *);
+
+	for (i = 0; i < RTE_DIM(lengths); i++) {
+		/* reset dst_data */
+		memset(dst_data, 0, rte_pktmbuf_data_len(dst));
+
+		/* perform the fill operation */
+		int id = rte_dma_fill(dev_id, vchan, pattern,
+				rte_pktmbuf_iova(dst), lengths[i], RTE_DMA_OP_FLAG_SUBMIT);
+		if (id < 0)
+			ERR_RETURN("Error with rte_dma_fill\n");
+		await_hw(dev_id, vchan);
+
+		if (rte_dma_completed(dev_id, vchan, 1, NULL, NULL) != 1)
+			ERR_RETURN("Error: fill operation failed (length: %u)\n", lengths[i]);
+		/* check the data from the fill operation is correct */
+		for (j = 0; j < lengths[i]; j++) {
+			char pat_byte = ((char *)&pattern)[j % 8];
+			if (dst_data[j] != pat_byte)
+				ERR_RETURN("Error with fill operation (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], pat_byte);
+		}
+		/* check that the data after the fill operation was not written to */
+		for (; j < rte_pktmbuf_data_len(dst); j++)
+			if (dst_data[j] != 0)
+				ERR_RETURN("Error, fill operation wrote too far (lengths = %u): got (%x), not (%x)\n",
+						lengths[i], dst_data[j], 0);
+	}
+	rte_pktmbuf_free(dst);
+	return 0;
 }
 
 static int
@@ -696,6 +740,11 @@ test_dmadev_instance(int16_t dev_id)
 			dev_id, vchan, !CHECK_ERRS) < 0)
 		goto err;
 
+	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
+		printf("DMA Dev %u: No device fill support, skipping fill tests\n", dev_id);
+	else if (runtest("fill", test_enqueue_fill, 1, dev_id, vchan, CHECK_ERRS) < 0)
+		goto err;
+
 	rte_mempool_free(pool);
 	rte_dma_stop(dev_id);
 	rte_dma_stats_reset(dev_id, vchan);
-- 
2.30.2
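
For readers less familiar with the dmadev fill API exercised by the new test,
here is an illustrative sketch, not part of the patch, of the same
capability-check-then-fill pattern in application code. The function name
do_one_fill() and the 1 KB rte_malloc() buffer are arbitrary choices for the
example, and it assumes dev_id/vchan have already been configured and started,
as test_dmadev_instance() does before running the tests.

/* Illustrative sketch only, not part of the patch: check the fill
 * capability as the test above does, then fill a plain rte_malloc()
 * buffer instead of an mbuf. Assumes dev_id/vchan are already
 * configured and started.
 */
#include <stdbool.h>

#include <rte_dmadev.h>
#include <rte_malloc.h>

static int
do_one_fill(int16_t dev_id, uint16_t vchan)
{
	const uint64_t pattern = 0xfedcba9876543210;
	struct rte_dma_info info;
	bool dma_err = false;
	char *buf;
	int id;

	/* skip (return 0) if the device cannot do fills in hardware */
	if (rte_dma_info_get(dev_id, &info) < 0)
		return -1;
	if ((info.dev_capa & RTE_DMA_CAPA_OPS_FILL) == 0)
		return 0;

	buf = rte_zmalloc(NULL, 1024, 0);
	if (buf == NULL)
		return -1;

	/* enqueue and immediately submit one fill of the whole buffer */
	id = rte_dma_fill(dev_id, vchan, pattern, rte_malloc_virt2iova(buf),
			1024, RTE_DMA_OP_FLAG_SUBMIT);
	if (id < 0) {
		rte_free(buf);
		return -1;
	}

	/* busy-poll until the single completion (or an error) is reported;
	 * real code would bound this loop with a timeout
	 */
	while (rte_dma_completed(dev_id, vchan, 1, NULL, &dma_err) != 1 && !dma_err)
		;

	rte_free(buf);
	return dma_err ? -1 : 1;
}

Note that fill, unlike copy, takes no source address: the pattern argument
supplies the data and only the destination IOVA is needed, which is why the
test obtains it with rte_pktmbuf_iova() on the destination mbuf.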