From: dpdklab@iol.unh.edu
To: test-report@dpdk.org
Cc: dpdk-test-reports@iol.unh.edu
Subject: [dpdk-test-report] |WARNING| pw98589 [PATCH] [v1, 1/1] vfio: add page-by-page mapping API
Date: Fri, 10 Sep 2021 09:28:38 -0400 (EDT)
Message-ID: <20210910132838.D555C30551@noxus.dpdklab.iol.unh.edu>
Test-Label: iol-testing
Test-Status: WARNING
http://dpdk.org/patch/98589
_apply patch failure_
Submitter: Burakov, Anatoly <anatoly.burakov@intel.com>
Date: Friday, September 10 2021 11:27:35
Applied on: CommitID:b344eb5d941a7522ff27b6b7b5419f68c3fea9a0
Apply patch set 98589 failed:
Checking patch lib/eal/freebsd/eal.c...
Checking patch lib/eal/include/rte_vfio.h...
Checking patch lib/eal/linux/eal_vfio.c...
error: while searching for:
static int
container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
                uint64_t len)
{
        struct user_mem_map *new_map;
        struct user_mem_maps *user_mem_maps;
        bool has_partial_unmap;
        int ret = 0;
        user_mem_maps = &vfio_cfg->mem_maps;
error: patch failed: lib/eal/linux/eal_vfio.c:1872
error: while searching for:
                ret = -1;
                goto out;
        }
        /* map the entry */
        if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
                /* technically, this will fail if there are currently no devices
                 * plugged in, even if a device were added later, this mapping
                 * might have succeeded. however, since we cannot verify if this
                 * is a valid mapping without having a device attached, consider
                 * this to be unsupported, because we can't just store any old
                 * mapping and pollute list of active mappings willy-nilly.
                 */
                RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n");
                ret = -1;
                goto out;
        }
        /* do we have partial unmap support? */
        has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
error: patch failed: lib/eal/linux/eal_vfio.c:1887
error: while searching for:
        new_map->addr = vaddr;
        new_map->iova = iova;
        new_map->len = len;
        /* for IOMMU types supporting partial unmap, we don't need chunking */
        new_map->chunk = has_partial_unmap ? 0 : len;
        compact_user_maps(user_mem_maps);
out:
error: patch failed: lib/eal/linux/eal_vfio.c:1908
Hunk #4 succeeded at 2061 (offset -147 lines).
Hunk #5 succeeded at 2203 (offset -147 lines).
Checking patch lib/eal/version.map...
Checking patch lib/eal/windows/eal.c...
Applied patch lib/eal/freebsd/eal.c cleanly.
Applied patch lib/eal/include/rte_vfio.h cleanly.
Applying patch lib/eal/linux/eal_vfio.c with 3 rejects...
Rejected hunk #1.
Rejected hunk #2.
Rejected hunk #3.
Hunk #4 applied cleanly.
Hunk #5 applied cleanly.
Applied patch lib/eal/version.map cleanly.
Applied patch lib/eal/windows/eal.c cleanly.
diff a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c (rejected hunks)
@@ -1872,11 +1872,12 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
 static int
 container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
-                uint64_t len)
+                uint64_t len, uint64_t pagesz)
 {
         struct user_mem_map *new_map;
         struct user_mem_maps *user_mem_maps;
         bool has_partial_unmap;
+        uint64_t chunk_size;
         int ret = 0;
         user_mem_maps = &vfio_cfg->mem_maps;
@@ -1887,19 +1888,37 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
                 ret = -1;
                 goto out;
         }
-        /* map the entry */
-        if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
-                /* technically, this will fail if there are currently no devices
-                 * plugged in, even if a device were added later, this mapping
-                 * might have succeeded. however, since we cannot verify if this
-                 * is a valid mapping without having a device attached, consider
-                 * this to be unsupported, because we can't just store any old
-                 * mapping and pollute list of active mappings willy-nilly.
-                 */
-                RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n");
-                ret = -1;
-                goto out;
+
+        /* technically, mapping will fail if there are currently no devices
+         * plugged in, even if a device were added later, this mapping might
+         * have succeeded. however, since we cannot verify if this is a valid
+         * mapping without having a device attached, consider this to be
+         * unsupported, because we can't just store any old mapping and pollute
+         * list of active mappings willy-nilly.
+         */
+
+        /* if page size was not specified, map the entire segment in one go */
+        if (pagesz == 0) {
+                if (vfio_dma_mem_map(vfio_cfg, vaddr, iova, len, 1)) {
+                        RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n");
+                        ret = -1;
+                        goto out;
+                }
+        } else {
+                /* otherwise, do mappings page-by-page */
+                uint64_t offset;
+
+                for (offset = 0; offset < len; offset += pagesz) {
+                        uint64_t va = vaddr + offset;
+                        uint64_t io = iova + offset;
+                        if (vfio_dma_mem_map(vfio_cfg, va, io, pagesz, 1)) {
+                                RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n");
+                                ret = -1;
+                                goto out;
+                        }
+                }
         }
+
         /* do we have partial unmap support? */
         has_partial_unmap = vfio_cfg->vfio_iommu_type->partial_unmap;
@@ -1908,8 +1927,18 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
         new_map->addr = vaddr;
         new_map->iova = iova;
         new_map->len = len;
-        /* for IOMMU types supporting partial unmap, we don't need chunking */
-        new_map->chunk = has_partial_unmap ? 0 : len;
+
+        /*
+         * Chunking essentially serves largely the same purpose as page sizes,
+         * so for the purposes of this calculation, we treat them as the same.
+         * The reason we have page sizes is because we want to map things in a
+         * way that allows us to partially unmap later. Therefore, when IOMMU
+         * supports partial unmap, page size is irrelevant and can be ignored.
+         * For IOMMU that don't support partial unmap, page size is equivalent
+         * to chunk size.
+         */
+        chunk_size = pagesz == 0 ? len : pagesz;
+        new_map->chunk = has_partial_unmap ? 0 : chunk_size;
         compact_user_maps(user_mem_maps);
 out:
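
For readers without the surrounding eal_vfio.c in front of them, below is a minimal standalone sketch of the control flow the rejected hunks introduce. It is illustrative only: map_region(), compute_chunk(), dma_map_fn and demo_map() are hypothetical stand-ins for vfio_dma_mem_map() and the VFIO container bookkeeping, not names from the patch, and locking, map-list handling and rollback are omitted.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical stand-in for vfio_dma_mem_map(); returns 0 on success */
typedef int (*dma_map_fn)(uint64_t vaddr, uint64_t iova, uint64_t len);

/*
 * Mapping strategy from rejected hunk #2: with pagesz == 0, map the whole
 * [vaddr, vaddr + len) range in one call; otherwise map it page by page so
 * that individual pages can be unmapped later.
 */
static int
map_region(dma_map_fn map_one, uint64_t vaddr, uint64_t iova,
                uint64_t len, uint64_t pagesz)
{
        uint64_t offset;

        if (pagesz == 0) {
                if (map_one(vaddr, iova, len) != 0) {
                        fprintf(stderr, "Couldn't map new region for DMA\n");
                        return -1;
                }
                return 0;
        }

        for (offset = 0; offset < len; offset += pagesz) {
                if (map_one(vaddr + offset, iova + offset, pagesz) != 0) {
                        fprintf(stderr, "Couldn't map new region for DMA\n");
                        return -1;
                }
        }
        return 0;
}

/*
 * Chunk bookkeeping from rejected hunk #3: when the IOMMU type supports
 * partial unmap, chunking is unnecessary (0); otherwise the chunk is the
 * page size, or the whole length when no page size was given.
 */
static uint64_t
compute_chunk(bool has_partial_unmap, uint64_t len, uint64_t pagesz)
{
        uint64_t chunk_size = (pagesz == 0) ? len : pagesz;

        return has_partial_unmap ? 0 : chunk_size;
}

/* trivial demo callback that accepts every mapping request */
static int
demo_map(uint64_t vaddr, uint64_t iova, uint64_t len)
{
        printf("map vaddr=0x%" PRIx64 " iova=0x%" PRIx64 " len=0x%" PRIx64 "\n",
                        vaddr, iova, len);
        return 0;
}

int
main(void)
{
        /* map a 64 KB region in 16 KB pages, as the page-by-page path would */
        if (map_region(demo_map, 0x100000, 0x100000, 0x10000, 0x4000) == 0)
                printf("chunk = 0x%" PRIx64 "\n",
                                compute_chunk(false, 0x10000, 0x4000));
        return 0;
}

Per the comments in the hunks, the page-granular mapping is what makes later partial unmaps possible on IOMMU types that cannot split an existing mapping; when the IOMMU supports partial unmap natively, the page/chunk size becomes irrelevant.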
https://lab.dpdk.org/results/dashboard/patchsets/18675/
UNH-IOL DPDK Community Lab