From: "Xie, Huawei" <huawei.xie@intel.com>
To: Linhaifeng, "dev@dpdk.org"
Cc: "haifeng.lin@intel.com"
Date: Thu, 11 Dec 2014 20:16:54 +0000
Subject: Re: [dpdk-dev] [PATCH RFC v2 08/12] lib/librte_vhost: vhost-user support

> -----Original Message-----
> From: Xie, Huawei
> Sent: Thursday, December 11, 2014 10:13 AM
> To: 'Linhaifeng'; dev@dpdk.org
> Cc: haifeng.lin@intel.com
> Subject: RE: [dpdk-dev] [PATCH RFC v2 08/12] lib/librte_vhost: vhost-user support
>
> > Does this support only one vhost-user port?
>
> Do you mean a vhost server by "port"?
> If so, yes: currently only one vhost server is supported, serving multiple
> virtio devices. As stated in the cover letter, we have a requirement and a
> plan for multiple-server support, though I am not sure it is absolutely
> necessary.
>
> > Can you mmap the region whose gpa is 0? When I run a VM with two NUMA
> > nodes (qemu creates two hugepage files), mmap always fails for the
> > region whose gpa is 0.
>
> The current implementation doesn't assume there is only one hugepage file
> backing the guest memory: it maps every region using the fd of that
> region. Could you please paste your guest VM command line here?
>
> > BTW, can we ensure that the memory regions cover all of the VM's
> > hugepage memory?
>
> I think so, because virtio devices could use any normal guest memory, but
> we needn't ensure that. We only need to map the regions that qemu vhost
> passes to us, which should be enough to translate the GPAs in the vring
> from virtio in the guest; otherwise it is a bug in qemu vhost.
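To make the mapping scheme concrete, here is a minimal sketch of per-region
mapping and GPA translation. This is illustrative only, not the actual
librte_vhost code; the struct layout and names are made up for the example:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    struct region {
            uint64_t gpa;     /* guest physical address of the region */
            uint64_t size;    /* region size in bytes */
            uint64_t offset;  /* region start inside the backing file */
            uint64_t mmap_va; /* where we mapped the backing file */
    };

    /* Map one region through the fd received for it over the vhost-user
     * socket. Mapping from file offset 0 keeps the alignment of the
     * backing hugepage file; the region starts at mmap_va + offset. */
    static int map_region(struct region *r, int fd)
    {
            void *va = mmap(NULL, r->offset + r->size,
                            PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (va == MAP_FAILED)
                    return -1;
            r->mmap_va = (uint64_t)(uintptr_t)va;
            return 0;
    }

    /* Translate a guest physical address taken from the vring into a
     * host virtual address; NULL if it falls outside every region. */
    static void *gpa_to_va(struct region *regs, int nregs, uint64_t gpa)
    {
            int i;

            for (i = 0; i < nregs; i++) {
                    struct region *r = &regs[i];

                    if (gpa >= r->gpa && gpa < r->gpa + r->size)
                            return (void *)(uintptr_t)
                                   (r->mmap_va + r->offset + (gpa - r->gpa));
            }
            return NULL;
    }

Note that mmap_va + offset here is exactly the "ua = base + offset"
relationship visible in the log below.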
I saw your post to the qemu mailing list. Would you mind if I paste it here?

Qemu uses two 1GB hugepage files to map the guest's 2GB of memory, and
indeed we get a "2GB" memory region. The problem is that all 2GB of memory
map to the first hugepage file, node0.MvcPyi. Seems like a bug.

qemu command:

-m 2048 -smp 2,sockets=2,cores=1,threads=1
-object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=1,memdev=ram-node1

memory regions:

gpa = 0xC0000 size = 2146697216 ua = 0x2aaaaacc0000 offset = 786432
gpa = 0x0     size = 655360     ua = 0x2aaaaac00000 offset = 0

hugepage mappings (cat /proc/`pidof qemu`/maps):

2aaaaac00000-2aaaeac00000 rw-s 00000000 00:18 10357788 /dev/hugepages/libvirt/qemu/qemu_back_mem._objects_ram-node0.MvcPyi (deleted)
2aaaeac00000-2aab2ac00000 rw-s 00000000 00:18 10357789 /dev/hugepages/libvirt/qemu/qemu_back_mem._objects_ram-node1.tjAVin (deleted)

The memory size of each region does not match the size of each hugepage
file; is this OK? How does vhost-user mmap all the hugepages?
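To see why the sizes look inconsistent, here is a quick sanity check of the
numbers in the pasted log (my own arithmetic; the constants are copied from
the output above):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t guest_mem = 2ULL << 30;  /* -m 2048 */
            uint64_t file_size = 1ULL << 30;  /* each backend file: size=1024M */
            uint64_t r1_gpa    = 0xC0000;     /* region 1 from the log */
            uint64_t r1_size   = 2146697216ULL;

            /* Region 1 ends exactly at the 2GB top of guest memory... */
            printf("region1 end = 0x%" PRIx64 ", guest top = 0x%" PRIx64 "\n",
                   r1_gpa + r1_size, guest_mem);

            /* ...so it is ~2GB long, while each backing file is only 1GB.
             * Mapping the whole region through the node0 fd alone would
             * overrun that file by: */
            printf("overrun = %" PRIu64 " bytes\n",
                   r1_gpa + r1_size - file_size);
            return 0;
    }

This prints "region1 end = 0x80000000, guest top = 0x80000000" and an
overrun of 1073741824 bytes: the ~2GB region spans both 1GB files in qemu's
address space, so it cannot be reached through a single file's fd, which
looks consistent with the mismatch you describe.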