Date: Wed, 14 Oct 2015 00:16:29 +0300
From: "Michael S. Tsirkin"
To: "dev@dpdk.org"
Message-ID: <20151014000508-mutt-send-email-mst@redhat.com>
Subject: [dpdk-dev] dpdk/vhost-user and VM migration

Hello!

I am currently looking at how dpdk on the host, accessing VM memory through the vhost-user interface, interacts with VM migration. The issue is that any changes made to VM memory need to be tracked, so that updated pages can be sent from the migration source to the destination.

At the moment there is a proposal for a vhost-user interface extension that adds the ability to do this tracking through shared memory: dpdk would then be responsible for tracking these updates by using atomic operations to set bits (one per page written) in a memory bitmap. This only needs to happen during migration; at other times a branch can skip the logging entirely.

Is this a reasonable approach? Would the overhead of the atomic operations during migration degrade performance to a level where it is no longer useful?
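[Not part of the original mail: a minimal sketch of the bitmap logging described above, assuming 4 KiB pages and one dirty bit per guest-physical page. The names `vhost_log_write`, `log_base` and `log_enabled` are hypothetical, not the actual dpdk/vhost-user API.]

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12   /* assume 4 KiB pages */

/* Hypothetical log state: 'log_base' points at the shared bitmap,
 * 'log_enabled' is set only while migration is in progress. */
static _Atomic uint8_t *log_base;
static int log_enabled;

/* Mark the page containing guest-physical address 'gpa' dirty.
 * Atomic OR, so concurrent writers on other cores cannot lose bits. */
static inline void vhost_log_write(uint64_t gpa)
{
    if (!log_enabled)            /* cheap branch skips logging outside migration */
        return;

    uint64_t page = gpa >> PAGE_SHIFT;
    atomic_fetch_or_explicit(&log_base[page / 8],
                             (uint8_t)(1u << (page % 8)),
                             memory_order_relaxed);
}
```

Relaxed ordering is used here because each bit is written independently; the migration code on the other side would still need a synchronization point before reading the bitmap.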
Please note that these logs are not latency sensitive, so they can be done on a separate core and can be batched.

One alternative I am considering is extending the Linux kernel so it can do this tracking automatically: mark the pages read-only, detect the pagefault and log the write, then make the page writable again. This would mean higher worst-case overhead (pagefaults are expensive) but a lower average one (no extra code after the first fault to a page). I am not sure how feasible this is yet; it would be harder to implement, and it would only apply to newer host kernels.

Any feedback would be appreciated.

--
MST