From: Jianfeng Tan <jianfeng.tan@intel.com>
To: dev@dpdk.org
Date: Thu, 12 Nov 2015 14:06:01 +0800
Message-Id: <1447308361-82139-1-git-send-email-jianfeng.tan@intel.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1447279449-11289-1-git-send-email-jianfeng.tan@intel.com>
References: <1447279449-11289-1-git-send-email-jianfeng.tan@intel.com>
Subject: [dpdk-dev] [PATCH v3] vhost: fix mmap failure as len not aligned with hugepage size

This patch fixes a bug seen on older Linux kernels: mmap() fails when the
length is not aligned to the hugepage size.

On older long-term kernels, such as 2.6.32 and 3.2.72, mmap() without the
MAP_ANONYMOUS flag must be called with a length aligned to the hugepage
size, otherwise it fails with EINVAL. This was fixed in the Linux kernel by
commit dab2d3dc45ae7343216635d981d43637e1cb7d45.

To avoid the failure, make sure the caller keeps the length aligned.

v3 changes:
 - fix the (u64) -> (void *) cast error on 32-bit systems

v2 changes:
 - add kernel version comments and commit message
 - remove the now-unnecessary alignment when calling munmap()

Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
 lib/librte_vhost/vhost_user/virtio-net-user.c | 36 +++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c b/lib/librte_vhost/vhost_user/virtio-net-user.c
index d07452a..99da029 100644
--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
@@ -74,7 +74,6 @@ free_mem_region(struct virtio_net *dev)
 {
 	struct orig_region_map *region;
 	unsigned int idx;
-	uint64_t alignment;
 
 	if (!dev || !dev->mem)
 		return;
@@ -82,12 +81,8 @@ free_mem_region(struct virtio_net *dev)
 	region = orig_region(dev->mem, dev->mem->nregions);
 	for (idx = 0; idx < dev->mem->nregions; idx++) {
 		if (region[idx].mapped_address) {
-			alignment = region[idx].blksz;
-			munmap((void *)(uintptr_t)
-				RTE_ALIGN_FLOOR(
-				region[idx].mapped_address, alignment),
-				RTE_ALIGN_CEIL(
-				region[idx].mapped_size, alignment));
+			munmap((void *)(uintptr_t)region[idx].mapped_address,
+					region[idx].mapped_size);
 			close(region[idx].fd);
 		}
 	}
@@ -147,6 +142,18 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 		/* This is ugly */
 		mapped_size = memory.regions[idx].memory_size +
 			memory.regions[idx].mmap_offset;
+
+		/* mmap() without flag of MAP_ANONYMOUS, should be called
+		 * with length argument aligned with hugepagesz at older
+		 * longterm version Linux, like 2.6.32 and 3.2.72, or
+		 * mmap() will fail with EINVAL.
+		 *
+		 * to avoid failure, make sure in caller to keep length
+		 * aligned.
+		 */
+		alignment = get_blk_size(pmsg->fds[idx]);
+		mapped_size = RTE_ALIGN_CEIL(mapped_size, alignment);
+
 		mapped_address = (uint64_t)(uintptr_t)mmap(NULL,
 			mapped_size,
 			PROT_READ | PROT_WRITE, MAP_SHARED,
@@ -154,9 +161,11 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 			0);
 
 		RTE_LOG(INFO, VHOST_CONFIG,
-			"mapped region %d fd:%d to %p sz:0x%"PRIx64" off:0x%"PRIx64"\n",
+			"mapped region %d fd:%d to:%p sz:0x%"PRIx64" "
+			"off:0x%"PRIx64" align:0x%"PRIx64"\n",
 			idx, pmsg->fds[idx], (void *)(uintptr_t)mapped_address,
-			mapped_size, memory.regions[idx].mmap_offset);
+			mapped_size, memory.regions[idx].mmap_offset,
+			alignment);
 
 		if (mapped_address == (uint64_t)(uintptr_t)MAP_FAILED) {
 			RTE_LOG(ERR, VHOST_CONFIG,
@@ -166,7 +175,7 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 
 		pregion_orig[idx].mapped_address = mapped_address;
 		pregion_orig[idx].mapped_size = mapped_size;
-		pregion_orig[idx].blksz = get_blk_size(pmsg->fds[idx]);
+		pregion_orig[idx].blksz = alignment;
 		pregion_orig[idx].fd = pmsg->fds[idx];
 
 		mapped_address += memory.regions[idx].mmap_offset;
@@ -193,11 +202,8 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 
 err_mmap:
 	while (idx--) {
-		alignment = pregion_orig[idx].blksz;
-		munmap((void *)(uintptr_t)RTE_ALIGN_FLOOR(
-			pregion_orig[idx].mapped_address, alignment),
-			RTE_ALIGN_CEIL(pregion_orig[idx].mapped_size,
-					alignment));
+		munmap((void *)(uintptr_t)pregion_orig[idx].mapped_address,
+				pregion_orig[idx].mapped_size);
 		close(pregion_orig[idx].fd);
 	}
 	free(dev->mem);
-- 
2.1.4
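For readers who want the idea outside the DPDK context, here is a minimal,
self-contained sketch of what the fix relies on: query the hugepage size
backing a descriptor with fstat() and round the mapping length up to that
size before calling mmap(), so that old kernels such as 2.6.32 do not
reject the call with EINVAL. The names map_hugepage_fd() and align_up()
are invented for this illustration only; the patch itself uses DPDK's
get_blk_size() and RTE_ALIGN_CEIL().

#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Round len up to the next multiple of align (align must be a power of two). */
uint64_t
align_up(uint64_t len, uint64_t align)
{
	return (len + align - 1) & ~(align - 1);
}

/*
 * Map a hugetlbfs-backed fd, padding the length to the hugepage size.
 * st_blksize of a hugetlbfs file reports the hugepage size (e.g. 2 MB),
 * which is also what DPDK's get_blk_size() returns.
 */
void *
map_hugepage_fd(int fd, uint64_t len)
{
	struct stat st;
	uint64_t aligned_len;

	if (fstat(fd, &st) < 0)
		return MAP_FAILED;

	aligned_len = align_up(len, (uint64_t)st.st_blksize);

	/*
	 * On old long-term kernels (e.g. 2.6.32, 3.2.72) a file-backed
	 * mapping whose length is not hugepage aligned fails with EINVAL,
	 * so the padded length is passed to mmap() instead of len.
	 */
	return mmap(NULL, (size_t)aligned_len, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
}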