From: "Xie, Huawei"
To: Yuanhan Liu, "dev@dpdk.org"
Cc: Victor Kaplansky, "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH 0/4 for 2.3] vhost-user live migration support
Date: Wed, 9 Dec 2015 03:41:36 +0000
References: <1449027793-30975-1-git-send-email-yuanhan.liu@linux.intel.com>

On 12/2/2015 11:40 AM, Yuanhan Liu wrote:
> This patch set adds initial vhost-user live migration support.
>
> The major task behind it is to log the pages we touch during
> live migration. So this patch set is basically about adding vhost
> log support, and using it.
>
> Patchset
> ========
> - Patch 1 handles VHOST_USER_SET_LOG_BASE, which tells us where
>   the dirty memory bitmap is.
>
> - Patch 2 introduces a vhost_log_write() helper function to log
>   the pages we are about to change (see the sketch after this list).
>
> - Patch 3 logs the changes we make to the used vring.
>
> - Patch 4 sets the log_shmfd protocol feature bit, which actually
>   enables vhost-user live migration support.
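
For reference, here is a minimal sketch of what such a page-logging helper
can look like, assuming the vhost-user dirty log is a plain bitmap with one
bit per 4 KiB page, placed at the address received via
VHOST_USER_SET_LOG_BASE. The struct and field names below are illustrative,
not the actual code in these patches, and a production version would also
want atomic bit setting plus a write barrier before notifying the master:

    #include <stddef.h>
    #include <stdint.h>

    #define VHOST_LOG_PAGE 4096  /* dirty-log granularity: one bit per page */

    /* Illustrative device state: where the shared dirty bitmap lives. */
    struct vhost_dev {
            uint8_t  *log_base;  /* bitmap set up by VHOST_USER_SET_LOG_BASE */
            uint64_t  log_size;  /* bitmap size in bytes */
    };

    /* Mark every page overlapping [addr, addr + len) as dirty, where
     * addr is a guest physical address. */
    static inline void
    vhost_log_write(struct vhost_dev *dev, uint64_t addr, uint64_t len)
    {
            uint64_t page;

            if (dev->log_base == NULL || len == 0)
                    return;

            for (page = addr / VHOST_LOG_PAGE;
                 page * VHOST_LOG_PAGE < addr + len;
                 page++) {
                    if (page / 8 >= dev->log_size)
                            return;  /* beyond the bitmap, nothing to log */
                    dev->log_base[page / 8] |= (uint8_t)(1u << (page % 8));
            }
    }
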
> A simple test guide (on the same host)
> ======================================
>
> The following test is based on OVS + DPDK. Here is a guide to
> set up OVS + DPDK:
>
>     http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
>
> 1. Start ovs-vswitchd.
>
> 2. Add two OVS vhost-user ports, say vhost0 and vhost1.
>
> 3. Start VM1 and connect it to vhost0. Here is my example:
>
>        $QEMU -enable-kvm -m 1024 -smp 4 \
>            -chardev socket,id=char0,path=/var/run/openvswitch/vhost0 \
>            -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>            -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
>            -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
>            -numa node,memdev=mem -mem-prealloc \
>            -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
>            -hda fc-19-i386.img \
>            -monitor telnet::3333,server,nowait -curses
>
> 4. Run "ping $host" inside VM1.
>
> 5. Start VM2 connected to vhost1, and mark it as the target of the
>    live migration (by adding the -incoming tcp:0:4444 option):
>
>        $QEMU -enable-kvm -m 1024 -smp 4 \
>            -chardev socket,id=char0,path=/var/run/openvswitch/vhost1 \
>            -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>            -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
>            -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
>            -numa node,memdev=mem -mem-prealloc \
>            -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
>            -hda fc-19-i386.img \
>            -monitor telnet::3334,server,nowait -curses \
>            -incoming tcp:0:4444
>
> 6. Connect to the VM1 monitor and start the migration:
>
>        > migrate tcp:0:4444
>
> 7. After a while, you will find that VM1 has been migrated to VM2,
>    and the "ping" command keeps running without interruption.

Is there some formal verification that the migration is truly successful?
At least that the memory we care about in our vhost-user case has been
migrated correctly? For instance, we miss logging guest RX buffers in this
patch set, yet we have no way to notice that from this test.

[...]
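
To make that RX-buffer concern concrete, here is a hedged sketch of the two
guest-memory writes an RX enqueue path performs, reusing the illustrative
vhost_dev / vhost_log_write() sketch from earlier in this mail. All names
here (rx_enqueue_one, desc_gpa, used_len_gpa) are hypothetical, not the
actual DPDK code; the point is that the packet copy in step 1 dirties guest
pages just as the used-ring update in step 2 does, so it needs logging too:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical RX enqueue fragment.  desc_host/desc_gpa are the host
     * virtual and guest physical addresses of a guest RX buffer taken
     * from the avail ring; used_len_host/used_len_gpa likewise address
     * the length field of the used-ring entry being filled in. */
    static void
    rx_enqueue_one(struct vhost_dev *dev,
                   uint8_t *desc_host, uint64_t desc_gpa,
                   const uint8_t *pkt, uint32_t pkt_len,
                   uint32_t *used_len_host, uint64_t used_len_gpa)
    {
            /* 1. Copy packet data into the guest RX buffer.  This write
             *    lands in guest memory, so it must be logged as well;
             *    this is the gap raised above. */
            memcpy(desc_host, pkt, pkt_len);
            vhost_log_write(dev, desc_gpa, pkt_len);

            /* 2. Record the written length in the used ring (the part
             *    that Patch 3 covers). */
            *used_len_host = pkt_len;
            vhost_log_write(dev, used_len_gpa, sizeof(*used_len_host));
    }
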