From: "Xu, Qian Q"
To: Linhaifeng, "Xie, Huawei"
Cc: "liuyongan@huawei.com", "dev@dpdk.org"
Date: Wed, 4 Feb 2015 01:38:30 +0000
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer

Haifeng,

1. Get the latest dpdk master branch code and apply huawei's patchset of vhost-user. The first patch is http://dpdk.org/dev/patchwork/patch/2796/, 12 patches in total, dated 1/30/2015.

2. Update config/common_linuxapp and build the samples; see my script below for reference.

cd ./dpdk
export RTE_SDK=$PWD
export RTE_TARGET=x86_64-native-linuxapp-gcc
sed -i 's/CONFIG_RTE_LIBRTE_VHOST=.*$/CONFIG_RTE_LIBRTE_VHOST=y/' ./config/common_linuxapp
make install -j38 T=x86_64-native-linuxapp-gcc
cd $RTE_SDK/lib/librte_vhost
make
cd ./eventfd_link
make
cd $RTE_SDK/examples/vhost
make

3. Launch the vhost-user sample; you will then see a vhost-net socket file under your dpdk folder for socket use. If you meet an error such as "can't setup mempool", you can update one line in examples/vhost/main.c: '#define MAX_QUEUES 512' ---> '#define MAX_QUEUES 128'.
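If you do hit that mempool error, a minimal sketch of making the change and rebuilding the sample is below (assuming the define in your examples/vhost/main.c still reads exactly as quoted above):

cd $RTE_SDK
sed -i 's/#define MAX_QUEUES 512/#define MAX_QUEUES 128/' examples/vhost/main.c
cd examples/vhost && make

My launch script for this step follows: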
#!/bin/sh
modprobe kvm
modprobe kvm_intel
awk '/Hugepagesize/ {print $2}' /proc/meminfo
awk '/HugePages_Total/ { print $2 }' /proc/meminfo
umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G   # 1G or 2M pages, both ok
rm -f /dev/vhost-net
rmmod vhost-net
modprobe fuse
modprobe cuse
rmmod eventfd_link
rmmod igb_uio
cd ./dpdk
pwd
insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
modprobe uio
rmmod rte_kni
rmmod igb_uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:08:00.1
taskset -c 1-3 examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 2
# Make sure --vm2vm is 1 or 2 to make VM-to-VM communication work. --mergeable can be 1 (enable jumbo frames) or 0 (disable jumbo frames).

4. Launch VM1 and VM2 with a virtio device. Note: you need qemu version > 2.1 to enable the vhost-user server's feature; old qemu such as 1.5 and 1.6 didn't support it. Below is my VM1 startup command for your reference; VM2 is similar.

/home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01 -nographic

5. Then in the VM you can do the same operations as before: send packets from virtio1 to virtio2.

Please let me know if you have any questions or issues.

-----Original Message-----
From: Linhaifeng [mailto:haifeng.lin@huawei.com]
Sent: Tuesday, February 03, 2015 5:19 PM
To: Xu, Qian Q; Xie, Huawei
Cc: lilijun; liuyongan@huawei.com
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer

Yes, with the latest code this will not happen.

On 2015/2/3 16:53, Xu, Qian Q wrote:
> If you'd like to use DPDK plus vhost-user's patch, I can send you my steps for setup, do you need it?

Of course! Please! I'd like to use it. Thank you very much!

-- 
Regards,
Haifeng