From: "Xu, Qian Q"
To: Linhaifeng, "Xie, Huawei"
Cc: "liuyongan@huawei.com", "dev@dpdk.org"
Date: Fri, 6 Feb 2015 05:54:12 +0000
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer

Haifeng,

Are you using the latest dpdk branch with the vhost-user patches? I have never hit this issue. When does the vhost sample crash: when you start the VM, or when you run something inside the VM? Is your qemu 2.2? What is your memory configuration? Could you give more details about your steps?

-----Original Message-----
From: Linhaifeng [mailto:haifeng.lin@huawei.com]
Sent: Friday, February 06, 2015 12:02 PM
To: Xu, Qian Q; Xie, Huawei
Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer

On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch VM1 and VM2 with a virtio device. Note: you need to use a qemu version > 2.1 to enable the vhost-user server's feature; old qemu such as 1.5 or 1.6 does not support it.
> Below is my VM1 startup command for your reference; VM2 is similar:
>
> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>
> 5. Then in the VM you can do the same operations as before: send packets from virtio1 to virtio2.
>
> Pls let me know if you have any questions or issues.

Hi Xie & Xu,

When I try to start the VM, vhost-switch crashes:

VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
run_dpdk_vhost.sh: line 19:  1854 Segmentation fault      ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0

-- 
Regards,
Haifeng
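
For context on the log above: the 0xffffffffffffffff printed for region 0 is what MAP_FAILED ((void *)-1) looks like, so the mmap() of the guest memory region that qemu handed over the vhost-user socket did not succeed, and the segmentation fault right after VHOST_USER_SET_VRING_ADDR would be consistent with the ring addresses being translated through that missing mapping. Below is a minimal sketch of the kind of check involved; it is not the actual DPDK vhost library code, and map_guest_region, its parameters, and the offset handling are hypothetical:

    /* Minimal sketch, assuming a vhost-user backend that maps one guest
     * memory region received in a VHOST_USER_SET_MEM_TABLE message.
     * The function name and parameters are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static void *
    map_guest_region(int fd, uint64_t mmap_size, uint64_t mmap_offset)
    {
        void *addr;

        /* The fd arrives from qemu via SCM_RIGHTS; with hugepage-backed
         * guest memory the requested size/offset must satisfy the
         * hugepage alignment constraints, or mmap() fails. */
        addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) {
            /* MAP_FAILED is (void *)-1, i.e. the 0xffffffffffffffff
             * seen in the log above. */
            perror("mmap guest memory region");
            return NULL;    /* caller must not use this region */
        }
        return (uint8_t *)addr + mmap_offset;
    }

If the mapping fails like this, one thing worth checking is whether the hugepage mount used by qemu (/mnt/huge in the command above) really has enough free hugepages for the 2048M memory-backend-file, and whether qemu and the vhost-switch sample agree on the hugepage setup, which is presumably why the memory configuration question was asked earlier in the thread.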