From: "Xu, Qian Q"
To: "Xu, Qian Q", Thomas Monjalon
Cc: "dev@dpdk.org", "Michael S. Tsirkin"
Date: Fri, 6 Nov 2015 08:24:06 +0000
Subject: Re: [dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

Tested-by: Qian Xu

- Test Commit: c4d404d7c1257465176deb5bb8c84e627d2d5eee
- OS/Kernel: Fedora 21 / 4.1.8
- GCC: gcc (GCC) 4.9.2 20141101 (Red Hat 4.9.2-1)
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Target: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
- Total 1 case, 1 passed, 0 failed. Legacy vhost + virtio-pmd can work well with TSO.

Test Case 1: test_legacy_vhost + virtio-pmd tso
===============================================

On host:
1. Start the VM with legacy vhost as the backend::

    taskset -c 4-6 /home/qxu10/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 \
    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -enable-kvm -m 2048 -smp 4 -cpu host -name dpdk1-vm1 \
    -drive file=/home/img/dpdk1-vm1.img \
    -netdev tap,id=vhost3,ifname=tap_vhost3,vhost=on,script=no \
    -device virtio-net-pci,netdev=vhost3,mac=52:54:00:00:00:01,id=net3 \
    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup \
    -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
    -localtime -nographic

2. Set up the bridge on the host::

    brctl addbr br1
    brctl addif br1 ens260f0    # this interface is 85:00.0, connected to IXIA card 3 port 9
    brctl addif br1 tap0
    brctl addif br1 tap1
    ifconfig ens260f0 up
    ifconfig ens260f0 promisc
    ifconfig tap0 up
    ifconfig tap1 up
    ifconfig tap0 promisc
    ifconfig tap1 promisc
    brctl stp br1 off
    ifconfig br1 up
    brctl show

3. Disable the firewall and NetworkManager on the host::

    systemctl stop firewalld.service
    systemctl disable firewalld.service
    systemctl stop ip6tables.service
    systemctl disable ip6tables.service
    systemctl stop iptables.service
    systemctl disable iptables.service
    systemctl stop NetworkManager.service
    systemctl disable NetworkManager.service

4. Let br1 learn the MAC 02:00:00:00:00:00. In the VM the virtio device runs testpmd,
   which sends packets with destination MAC 02:00:00:00:00:00; once br1 knows that MAC,
   it forwards those packets out through the NIC and back to the traffic generator. So
   send a packet from IXIA with SRC MAC=02:00:00:00:00:00 and DEST MAC=52:54:00:00:00:01
   to let br1 learn the MAC. The MACs known to the bridge can be checked with::

    brctl showmacs br1

    port no  mac addr           is local?  ageing timer
    3        02:00:00:00:00:00  no         6.06
    1        42:fa:45:4d:aa:4d  yes        0.00
    1        42:fa:45:4d:aa:4d  yes        0.00
    1        52:54:00:00:00:01  no         6.06
    2        8e:d7:22:bf:c9:8d  yes        0.00
    2        8e:d7:22:bf:c9:8d  yes        0.00
    3        90:e2:ba:4a:55:1c  yes        0.00
    3        90:e2:ba:4a:55:1c  yes        0.00

On guest:

5. Ensure the dpdk folder is copied to the guest with the same config file and build
   process as on the host. Then bind the virtio device to igb_uio and start testpmd;
   the steps below are for reference::

    .//tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
    .//x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c f -n 4 -- -i --txqflags 0x0f00 --max-pkt-len 9000

    $ >set fwd csum
    $ >tso set 1000 0
    $ >tso set 1000 1
    $ >start

6. Send TCP packets of size 5000 to virtio1. The virtio side receives 1 packet and lets
   vhost do TSO; vhost in turn lets the NIC do TSO, so at IXIA we expect 5 packets of
   about 1K each. Also capture the received packets and check that the checksum is correct.

Result: All the behavior is as expected and the checksum is correct, so the case is PASS.
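For reference, the step-6 send/receive check can also be scripted when no IXIA port is
available. Below is a minimal scapy sketch of that idea; the generator-side interface
name, IP addresses and TCP ports are made-up placeholders (the actual run used IXIA as
described above), and the interface MTU must allow the oversized frame::

    #!/usr/bin/env python
    # Sketch only: send the ~5000-byte TCP packet of step 6 and report what comes back.
    # GEN_IFACE, the IP addresses and the TCP ports are assumptions, not values from
    # the actual run; the MTU of GEN_IFACE must be large enough for the big frame.
    import time
    from scapy.all import Ether, IP, TCP, Raw, AsyncSniffer, sendp

    GEN_IFACE = "p1p1"                     # assumed generator-side interface (faces br1)
    SRC_MAC = "02:00:00:00:00:00"          # MAC that br1 learned in step 4
    DST_MAC = "52:54:00:00:00:01"          # virtio-net device in the VM

    # Capture whatever comes back towards the generator MAC after TSO.
    sniffer = AsyncSniffer(iface=GEN_IFACE,
                           lfilter=lambda p: TCP in p and p[Ether].dst == SRC_MAC)
    sniffer.start()

    # One TCP packet with a 5000-byte payload, as in step 6.
    pkt = (Ether(src=SRC_MAC, dst=DST_MAC) /
           IP(src="1.1.1.1", dst="2.2.2.2") /
           TCP(sport=1024, dport=1025) /
           Raw(b"x" * 5000))
    sendp(pkt, iface=GEN_IFACE, verbose=False)

    time.sleep(3)
    sniffer.stop()
    segs = sniffer.results
    # With "tso set 1000" we expect ~5 segments of roughly 1000 bytes of payload each.
    print("received %d segments, frame sizes: %s" % (len(segs), [len(p) for p in segs]))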
Thanks
Qian

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xu, Qian Q
Sent: Thursday, November 05, 2015 6:45 PM
To: Thomas Monjalon
Cc: dev@dpdk.org; Michael S. Tsirkin
Subject: Re: [dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

OK, I will check it tomorrow.

Another comment is that "Legacy vhost + virtio-pmd" is not the common use case. Firstly, in this case virtio-pmd has no TCP/IP stack, so TSO is not very meaningful; secondly, we can't get a performance benefit from this case compared to "Legacy vhost + legacy virtio". So I'm afraid no customer would like to try this case, given the fake TSO and the poor performance.

Thanks
Qian

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Thursday, November 05, 2015 5:02 PM
To: Xu, Qian Q
Cc: Liu, Jijiang; dev@dpdk.org; Michael S. Tsirkin
Subject: Re: [dpdk-dev] [PATCH v3 6/8] driver/virtio:enqueue vhost TX offload

2015-11-05 08:49, Xu, Qian Q:
> Test Case 1: test_dpdk vhost + virtio-pmd tso
[...]
> Test Case 2: test_dpdk vhost + legacy virtio iperf tso
[...]
> Yes please, I'd like to see a test report showing this virtio running with Linux vhost and without vhost.
> We must check that the checksum is well offloaded and sent packets are valid.
> Thanks

Thanks for doing some tests. I had no doubt it works with DPDK vhost.
Please could you do some tests without vhost and with kernel vhost?
We need to check that the checksum is not missing in such cases.
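As a footnote on the checksum question above: when the capture from the kernel-vhost or
no-vhost runs is saved to a pcap file instead of being checked on IXIA, the "checksum is
not missing/valid" check can be scripted as well. A rough sketch with scapy
("capture.pcap" is just a placeholder file name)::

    #!/usr/bin/env python
    # Sketch only: verify that every TCP packet in a capture carries correct
    # (i.e. not missing) IP and TCP checksums. "capture.pcap" is a placeholder.
    from scapy.all import rdpcap, IP, TCP

    pkts = [p for p in rdpcap("capture.pcap") if IP in p and TCP in p]
    bad = 0
    for p in pkts:
        got_ip, got_tcp = p[IP].chksum, p[TCP].chksum
        ref = p.__class__(bytes(p))        # work on a copy
        del ref[IP].chksum
        del ref[TCP].chksum
        ref = ref.__class__(bytes(ref))    # rebuild so scapy recomputes both checksums
        if got_ip != ref[IP].chksum or got_tcp != ref[TCP].chksum:
            bad += 1
    print("%d of %d TCP packets have a wrong or missing checksum" % (bad, len(pkts)))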