From mboxrd@z Thu Jan  1 00:00:00 1970
From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Tue, 10 Aug 2021 08:00:58 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: core
X-Bugzilla-Severity: normal
X-Bugzilla-Who: changpeng.liu@intel.com
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-URL: http://bugs.dpdk.org/
Content-Type: text/plain; charset="UTF-8"
MIME-Version: 1.0
Subject: [dpdk-dev] [Bug 786] dynamic memory model may cause potential DMA silent error
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

https://bugs.dpdk.org/show_bug.cgi?id=786

            Bug ID: 786
           Summary: dynamic memory model may cause potential DMA silent
                    error
           Product: DPDK
           Version: unspecified
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: core
          Assignee: dev@dpdk.org
          Reporter: changpeng.liu@intel.com
  Target Milestone: ---

We found that in some very rare situations the vfio dynamic memory model
has an issue that may result in the DMA engine not putting data into the
right IO buffer. Here are the tests we ran to identify the issue:

1. Start the application and call rte_zmalloc to allocate IO buffers.
Hotplug one NVMe drive; DPDK then registers the existing memory region
with the kernel vfio driver via the dma_map ioctl. We added a trace
before this ioctl:

DPDK dma_map vaddr: 0x200000200000, iova: 0x200000200000, size:
0x14200000, ret: 0

2. Then we call rte_free to free some memory buffers; DPDK calls
dma_unmap on the vfio driver and releases the related huge files:

DPDK dma_unmap iova: 0x20000a400000, size: 0x0, ret: 0

Here the return value is 0, which means success, but the unmapped size
is 0: the kernel vfio driver did not perform the real unmap, because the
IOVA range isn't the same as the previously mapped one. Newer DPDK
versions now print an error for this case.

3. Then we call rte_zmalloc again. DPDK creates new huge files, remaps
them to the previous virtual address, and calls dma_map to register the
region with the kernel vfio driver:

DPDK dma_map vaddr: 0x20000a400000, iova: 0x20000a400000, size:
0x400000, ret=-1, errno set to EEXIST

but DPDK ignores this errno, so rte_zmalloc returns success. If the
newly allocated memory is then used as an NVMe IO buffer, the DMA engine
may move data into the previously pinned pages, because the kernel vfio
driver never updated its memory map, yet nothing in the IO stack prints
any warning.

We can use the static memory model as a workaround.

-- 
You are receiving this mail because:
You are the assignee for the bug.