From: Nithin Dabilpuram
To: David Christensen
CC: Nithin Dabilpuram
Date: Mon, 14 Dec 2020 13:49:33 +0530
Message-ID: <20201214081935.23577-4-ndabilpuram@marvell.com>
In-Reply-To: <20201214081935.23577-1-ndabilpuram@marvell.com>
References: <20201012081106.10610-1-ndabilpuram@marvell.com>
 <20201214081935.23577-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH v5 3/4] test: add test case to validate VFIO DMA map/unmap

The test case allocates system pages and attempts a user DMA map and
unmap, both partially and fully.
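As an example of how to exercise the new case once DPDK is built (assuming a
meson build directory named "build" and a VFIO-capable setup; the exact binary
path and EAL arguments depend on the local environment):

  echo "vfio_autotest" | ./build/app/test/dpdk-test

If VFIO is unavailable or no device is probed, the test reports DMA map/unmap
as unsupported instead of failing.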
Signed-off-by: Nithin Dabilpuram
Acked-by: Anatoly Burakov
---
 app/test/meson.build |   1 +
 app/test/test_vfio.c | 106 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 107 insertions(+)
 create mode 100644 app/test/test_vfio.c

diff --git a/app/test/meson.build b/app/test/meson.build
index 94fd39f..d9eedb6 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -139,6 +139,7 @@ test_sources = files('commands.c',
 	'test_trace_register.c',
 	'test_trace_perf.c',
 	'test_version.c',
+	'test_vfio.c',
 	'virtual_pmd.c'
 )
diff --git a/app/test/test_vfio.c b/app/test/test_vfio.c
new file mode 100644
index 0000000..9febf35
--- /dev/null
+++ b/app/test/test_vfio.c
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_eal_paging.h>
+#include <rte_errno.h>
+#include <rte_memory.h>
+#include <rte_vfio.h>
+
+#include "test.h"
+
+static int
+test_memory_vfio_dma_map(void)
+{
+	uint64_t sz1, sz2, sz = 2 * rte_mem_page_size();
+	uint64_t unmap1, unmap2;
+	uint8_t *alloc_mem;
+	uint8_t *mem;
+	int ret;
+
+	/* Allocate twice size of requirement from heap to align later */
+	alloc_mem = malloc(sz * 2);
+	if (!alloc_mem) {
+		printf("Skipping test as unable to alloc %luB from heap\n",
+		       sz * 2);
+		return 1;
+	}
+
+	/* Force page allocation */
+	memset(alloc_mem, 0, sz * 2);
+
+	mem = RTE_PTR_ALIGN(alloc_mem, rte_mem_page_size());
+
+	/* map the whole region */
+	ret = rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					 (uintptr_t)mem, (rte_iova_t)mem, sz);
+	if (ret) {
+		/* Check if VFIO is not available or no device is probed */
+		if (rte_errno == ENOTSUP || rte_errno == ENODEV) {
+			ret = 1;
+			goto fail;
+		}
+		printf("Failed to dma map whole region, ret=%d(%s)\n",
+		       ret, rte_strerror(rte_errno));
+		goto fail;
+	}
+
+	unmap1 = (uint64_t)mem + (sz / 2);
+	sz1 = sz / 2;
+	unmap2 = (uint64_t)mem;
+	sz2 = sz / 2;
+	/* unmap the partial region */
+	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					   unmap1, (rte_iova_t)unmap1, sz1);
+	if (ret) {
+		if (rte_errno == ENOTSUP) {
+			printf("Partial dma unmap not supported\n");
+			unmap2 = (uint64_t)mem;
+			sz2 = sz;
+		} else {
+			printf("Failed to unmap second half region, ret=%d(%s)\n",
+			       ret, rte_strerror(rte_errno));
+			goto fail;
+		}
+	}
+
+	/* unmap the remaining region */
+	ret = rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
+					   unmap2, (rte_iova_t)unmap2, sz2);
+	if (ret) {
+		printf("Failed to unmap remaining region, ret=%d(%s)\n", ret,
+		       rte_strerror(rte_errno));
+		goto fail;
+	}
+
+fail:
+	free(alloc_mem);
+	return ret;
+}
+
+static int
+test_vfio(void)
+{
+	int ret;
+
+	/* test for vfio dma map/unmap */
+	ret = test_memory_vfio_dma_map();
+	if (ret == 1) {
+		printf("VFIO dma map/unmap unsupported\n");
+	} else if (ret < 0) {
+		printf("Error vfio dma map/unmap, ret=%d\n", ret);
+		return -1;
+	}
+
+	return 0;
+}
+
+REGISTER_TEST_COMMAND(vfio_autotest, test_vfio);
--
2.8.4