From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Xu, Qian Q"
To: "Ouyang, Changchun", "dev@dpdk.org"
Date: Wed, 3 Jun 2015 07:50:43 +0000
Subject: Re: [dpdk-dev] [PATCH v5 0/4] Fix vhost enqueue/dequeue issue
Message-ID: <82F45D86ADE5454A95A89742C8D1410E01D521B0@shsmsx102.ccr.corp.intel.com>
References: <1433235064-2773-1-git-send-email-changchun.ouyang@intel.com>
 <1433311341-12087-1-git-send-email-changchun.ouyang@intel.com>
In-Reply-To: <1433311341-12087-1-git-send-email-changchun.ouyang@intel.com>
List-Id: patches and discussions about DPDK

Tested-by: Qian Xu
Signed-off-by: Qian Xu
-Tested commit: 1a1109404e702d3ad1ccc1033df55c59bec1f89a
-Host OS/Kernel: FC21/3.19
-Guest OS/Kernel: FC21/3.19
-NIC: Intel 82599 10G
-Default x86_64-native-linuxapp-gcc configuration
-Total 2 cases, 2 passed.

Test Case 1: test_perf_vhost_one_vm_dpdk_fwd_vhost-user
=======================================================

On host:

1. Start up vhost-switch; vm2vm 0 means only one VM, without VM-to-VM communication::

    taskset -c 18-20 /examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0

2. Start the VM with vhost-user as the backend::

    taskset -c 22-28 \
    /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host \
    -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
    -smp cores=20,sockets=1 -drive file=/home/img/fc21-vm1.img \
    -chardev socket,id=char0,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce=on \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1 \
    -chardev socket,id=char1,path=/home/qxu10/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce=on \
    -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \
    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:09 -nographic

On guest:

3. Ensure the DPDK folder is copied to the guest, with the same config file and build process as on the host.
Then bind the 2 virtio devices to igb_uio and start testpmd; below are the steps for reference::

    .//tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0

    .//x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --rxq=2 --disable-hw-vlan-filter

    testpmd> set fwd mac
    testpmd> start tx_first

4. After typing start tx_first in testpmd, you can see the 2 virtio devices registered in vhost-user with their MACs and VLAN IDs; the log is shown in the host's vhost-sample output.

5. Send traffic for 30 seconds to virtio1 and virtio2, with packet sizes from 64 to 1518 bytes, and check the performance in Mpps. Traffic sent to virtio1 should have virtio1's MAC as the DEST MAC and virtio1's VLAN ID; traffic sent to virtio2 should have virtio2's MAC and VLAN ID. The DEST and SRC IPs increment continuously (e.g. from 192.168.1.1 to 192.168.1.63) so that packets are distributed to different queues via RSS/hash. As the functionality criterion, the received rate should not be zero. As the performance criterion, check the numbers with the developer or the design doc/PRD.

6. Check in the guest testpmd stats display that the packets have gone to different queues.

7. Check the packet data integrity.

Test Case 2: test_perf_virtio_one_vm_linux_fwd_vhost-user
=========================================================

On host:

Same steps as in Test Case 1.

On guest:

1. 
Set up routing on the guest::

    $ systemctl stop firewalld.service
    $ systemctl disable firewalld.service
    $ systemctl stop ip6tables.service
    $ systemctl disable ip6tables.service
    $ systemctl stop iptables.service
    $ systemctl disable iptables.service
    $ systemctl stop NetworkManager.service
    $ systemctl disable NetworkManager.service
    $ echo 1 > /proc/sys/net/ipv4/ip_forward
    $ ip addr add 192.168.1.2/24 dev eth1    # eth1 is virtio1
    $ ip neigh add 192.168.1.1 lladdr 00:00:00:00:0a:0a dev eth1
    $ ip link set dev eth1 up
    $ ip addr add 192.168.2.2/24 dev eth2    # eth2 is virtio2
    $ ip neigh add 192.168.2.1 lladdr 00:00:00:00:00:0a dev eth2
    $ ip link set dev eth2 up

2. Send traffic for 30 seconds to virtio1 and virtio2. According to the script above, traffic sent to virtio1 should have a SRC IP (e.g. 192.168.1.1), a DEST IP (e.g. 192.168.2.1), virtio1's MAC as the DEST MAC, and virtio1's VLAN ID. Traffic sent to virtio2 has the mirrored settings: SRC IP (e.g. 192.168.2.1), DEST IP (e.g. 192.168.1.1), and virtio2's VLAN ID. Set the packet size from 64 to 1518 bytes, including jumbo frames, and check the performance in Mpps. As the functionality criterion, the received rate should not be zero. As the performance criterion, check the numbers with the developer or the design doc/PRD.

3. Check the data integrity of the forwarded packets; ensure no content changes.
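The routing commands above put eth1 on 192.168.1.0/24 and eth2 on 192.168.2.0/24, so a packet received on one virtio interface whose DEST IP sits on the other subnet is forwarded out the other interface. A minimal Python sketch of that connected-route lookup, using only the stdlib ``ipaddress`` module; the ``routes`` table and ``egress_interface`` helper are illustrative models of the kernel's behavior, not part of the test tools:

```python
import ipaddress

# Connected subnets as configured in the guest routing step above.
routes = {
    "eth1": ipaddress.ip_network("192.168.1.0/24"),  # virtio1, addr 192.168.1.2/24
    "eth2": ipaddress.ip_network("192.168.2.0/24"),  # virtio2, addr 192.168.2.2/24
}

def egress_interface(dest_ip: str) -> str:
    """Return the interface whose connected subnet contains dest_ip."""
    addr = ipaddress.ip_address(dest_ip)
    for ifname, net in routes.items():
        if addr in net:
            return ifname
    raise ValueError("no route to " + dest_ip)

# Traffic entering virtio1 with DEST IP 192.168.2.1 is forwarded out eth2,
# and traffic entering virtio2 with DEST IP 192.168.1.1 goes out eth1.
print(egress_interface("192.168.2.1"))  # eth2
print(egress_interface("192.168.1.1"))  # eth1
```

This mirrors why the two traffic streams in step 2 must cross: each stream's DEST IP belongs to the other interface's subnet.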
Thanks
Qian

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ouyang Changchun
Sent: Wednesday, June 03, 2015 2:02 PM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v5 0/4] Fix vhost enqueue/dequeue issue

Fix enqueue/dequeue can't handle chained vring descriptors; remove unnecessary vring descriptor length updating; add support for copying scattered mbuf to vring.

Changchun Ouyang (4):
  lib_vhost: Fix enqueue/dequeue can't handle chained vring descriptors
  lib_vhost: Refine code style
  lib_vhost: Extract function
  lib_vhost: Remove unnecessary vring descriptor length updating

 lib/librte_vhost/vhost_rxtx.c | 201 +++++++++++++++++++++++-------------------
 1 file changed, 111 insertions(+), 90 deletions(-)

--
1.8.4.2
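The first patch in the series fixes enqueue/dequeue for guest buffers that span multiple chained vring descriptors. As a rough illustration of what a descriptor chain is, here is a Python sketch of walking one via the NEXT flag; the ``Desc`` class and ``collect_chain`` helper are hypothetical teaching aids, not DPDK or virtio APIs, though the field names follow the virtio ring descriptor layout:

```python
# Illustrative sketch of walking a chained vring descriptor list, the
# structure the patch set teaches enqueue/dequeue to handle correctly.
VRING_DESC_F_NEXT = 1  # flag: chain continues via the 'next' field

class Desc:
    """Simplified virtio ring descriptor: guest buffer addr/len plus chaining."""
    def __init__(self, addr, length, flags=0, next_idx=0):
        self.addr = addr
        self.len = length
        self.flags = flags
        self.next = next_idx

def collect_chain(table, head):
    """Follow a descriptor chain from 'head', returning (addr, len) segments.

    A correct enqueue/dequeue must copy across *all* segments of the chain,
    not just the head descriptor -- the bug the first patch addresses.
    """
    segments = []
    idx = head
    while True:
        d = table[idx]
        segments.append((d.addr, d.len))
        if not d.flags & VRING_DESC_F_NEXT:
            break
        idx = d.next
    return segments

# A 3-descriptor chain: one logical buffer split across three segments.
table = [
    Desc(0x1000, 128, VRING_DESC_F_NEXT, 1),
    Desc(0x2000, 256, VRING_DESC_F_NEXT, 2),
    Desc(0x3000, 64),
]
print(collect_chain(table, 0))  # three segments, 448 bytes total
```

An implementation that only reads the head descriptor would see a 128-byte buffer and silently truncate the remaining 320 bytes, which is the class of failure the test cases above exercise with large and jumbo-frame packets.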