From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 25 Aug 2017 00:01:48 +0000 (UTC)
From: Dharmesh Mehta
To: Stephen Hemminger
Cc: Users <users@dpdk.org>
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

/* Empty vmdq configuration structure. Filled in programmatically. */
static struct rte_eth_conf vmdq_conf_default = {
	.rxmode = {
		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
		.split_hdr_size = 0,
		.header_split   = 0, /**< Header Split disabled */
		.hw_ip_checksum = 0, /**< IP checksum offload disabled */
		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
		/*
		 * Necessary for a 1G NIC such as the I350; this fixes a bug where
		 * ipv4 forwarding in the guest can't forward packets from one
		 * virtio dev to another virtio dev.
		 */
		.hw_vlan_strip  = 1, /**< VLAN strip enabled. */
		.jumbo_frame    = 0, /**< Jumbo Frame Support disabled */
		.hw_strip_crc   = 1, /**< CRC stripped by hardware */
		.enable_scatter = 1, /* required for jumbo frames > 1500 */
		.jumbo_frame    = 1, /* required for jumbo frames > 1500 */
	},
	.txmode = {
		.mq_mode = ETH_MQ_TX_NONE,
	},
	.rx_adv_conf = {
		/*
		 * should be overridden separately in code with
		 * appropriate values
		 */
		.vmdq_rx_conf = {
			.nb_queue_pools = ETH_8_POOLS,
			.enable_default_pool = 0,
			.default_pool = 0,
			.nb_pool_maps = 0,
			.pool_map = {{0, 0},},
		},
	},
};

This is my config. Am I missing something?

From: Stephen Hemminger
To: Dharmesh Mehta
Cc: Users <users@dpdk.org>
Sent: Thursday, August 24, 2017 4:18 PM
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

On Thu, 24 Aug 2017 22:19:27 +0000 (UTC) Dharmesh Mehta wrote:

> Hello,
>
> I am using an Intel I350 NIC and I am not able to receive data of more
> than 1500 bytes in a packet. I tried the igb_uio as well as the
> uio_pci_generic driver, but both fail.
> If I reduce the data to <= 1500 bytes it works, but anything larger than
> 1500 bytes is not received.
> Do I have to tune any config parameter in order to support more than 1500?
> I tried to set the MTU from code using the API rte_eth_dev_set_mtu(port_id, mtu),
> but with no success.
>
> 0000:03:00.0 'I350 Gigabit Network Connection 1521' drv=igb_uio unused=vfio-pci,uio_pci_generic
> 0000:03:00.1 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.2 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
> 0000:03:00.3 'I350 Gigabit Network Connection 1521' drv=uio_pci_generic unused=igb_uio,vfio-pci
>
> Thanks in advance.
> DM.

In order to support >1500 bytes, you need to at least:

    1. set jumbo_frame when setting rxmode
    2. set enable_scatter in rxmode (unless mtu + overhead <= pool size)
    3. make sure pool mbuf size > eth->min_rx_buf_size
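For reference, a minimal sketch of what this checklist looks like with the DPDK
17.05-era rte_eth_conf and mempool API. The port id, queue and descriptor counts
and the pool sizing below are illustrative assumptions; only the 9728-byte frame
length is taken from the thread (it matches the max_rx_pkt_len dumped in the
next message).

/*
 * Sketch only: illustrates the three points above on the 17.05-era API.
 * Port 0, single rx/tx queue, 512 descriptors and the pool sizing are
 * assumed values, not taken from the original application.
 */
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define JUMBO_FRAME_LEN 9728	/* matches the max_rx_pkt_len dumped below */

static int
setup_jumbo_port(uint8_t port)
{
	/* points 1 and 2: jumbo_frame and enable_scatter in rxmode */
	struct rte_eth_conf conf = {
		.rxmode = {
			.max_rx_pkt_len = JUMBO_FRAME_LEN,
			.jumbo_frame    = 1,
			.enable_scatter = 1,	/* a frame may span several mbufs */
			.hw_strip_crc   = 1,
		},
		.txmode = { .mq_mode = ETH_MQ_TX_NONE },
	};
	/* point 3: the per-mbuf data room must exceed the PMD's minimum RX
	 * buffer size; with scatter enabled a standard 2 KB data room is
	 * sufficient even for 9 KB frames. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("jumbo_pool",
			8192, 256, 0, RTE_PKTMBUF_HEADROOM + 2048,
			rte_socket_id());

	if (pool == NULL)
		return -1;
	if (rte_eth_dev_configure(port, 1, 1, &conf) < 0)
		return -1;
	if (rte_eth_rx_queue_setup(port, 0, 512,
			rte_eth_dev_socket_id(port), NULL, pool) < 0)
		return -1;
	if (rte_eth_tx_queue_setup(port, 0, 512,
			rte_eth_dev_socket_id(port), NULL) < 0)
		return -1;
	return rte_eth_dev_start(port);
}

With enable_scatter set, a 9 KB frame arrives as a chain of 2 KB mbufs, so the
pool does not need jumbo-sized buffers; the alternative is a pool whose data
room covers the whole frame.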
From mehtadharmesh@yahoo.com Fri Aug 25 02:49:39 2017
Date: Fri, 25 Aug 2017 00:49:27 +0000 (UTC)
From: Dharmesh Mehta
To: Stephen Hemminger
Cc: Users <users@dpdk.org>
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

Here is a dump of my rx_mode (I am using DPDK 17.05.1):

vmdq_conf_default.rxmode.mq_mode=4
vmdq_conf_default.rxmode.max_rx_pkt_len=9728
vmdq_conf_default.rxmode.split_hdr_size=0
vmdq_conf_default.rxmode.header_split=0
vmdq_conf_default.rxmode.hw_ip_checksum=0
vmdq_conf_default.rxmode.hw_vlan_filter=0
vmdq_conf_default.rxmode.hw_vlan_strip=1
vmdq_conf_default.rxmode.hw_vlan_extend=0
vmdq_conf_default.rxmode.jumbo_frame=1
vmdq_conf_default.rxmode.hw_strip_crc=1
vmdq_conf_default.rxmode.enable_scatter=1
vmdq_conf_default.rxmode.enable_lro=0

But my code is still not able to capture the packets; TX is fine. What other
area of the code should I check?

-DM.

From: Stephen Hemminger
Sent: Thursday, August 24, 2017 4:18 PM
Subject: Re: [dpdk-users] Fails to receive data more than 1500 bytes.

> In order to support >1500 bytes, you need to at least:
>     1. set jumbo_frame when setting rxmode
>     2. set enable_scatter in rxmode (unless mtu + overhead <= pool size)
>     3. make sure pool mbuf size > eth->min_rx_buf_size
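The dump above covers the first two points, so the remaining suspect is the
third one: the mbuf pool handed to rte_eth_rx_queue_setup(). A small sanity
check along these lines can confirm it; "port" and "pool" here are placeholders
for whatever the application actually uses, and it is also worth logging the
return value of rte_eth_rx_queue_setup(), which fails with an error when the
pool's buffers are too small.

/*
 * Sketch of a check for point 3 of the reply above: the data room of the
 * mbuf pool passed to rte_eth_rx_queue_setup() must be larger than the
 * headroom plus the minimum RX buffer size reported by the PMD.
 */
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
check_rx_pool(uint8_t port, struct rte_mempool *pool)
{
	struct rte_eth_dev_info info;
	uint16_t room = rte_pktmbuf_data_room_size(pool);

	rte_eth_dev_info_get(port, &info);
	printf("port %u: min_rx_bufsize=%u, data room=%u, headroom=%u\n",
	       port, info.min_rx_bufsize, room, RTE_PKTMBUF_HEADROOM);

	if (room < RTE_PKTMBUF_HEADROOM + info.min_rx_bufsize)
		printf("  -> mbufs too small, the PMD cannot post RX buffers\n");
}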
From wbahacer@126.com Fri Aug 25 10:41:00 2017
Date: Fri, 25 Aug 2017 16:40:51 +0800
From: Furong <wbahacer@126.com>
To: users@dpdk.org
Subject: [dpdk-users] How to tune configurations for measuring zero packet-loss performance of OVS-DPDK with vhost-user?

Hi!

I've built a testbed to measure the zero packet-loss performance of OVS-DPDK
with vhost-user. Here are the configurations of my testbed:

1. Host machine (Ubuntu 14.04.5, Linux 3.19.0-25):
    a/ hardware: quad-socket Intel Xeon E5-4603v2 @ 2.20GHz (4 cores/socket),
       32GB DDR3 memory, dual-port Intel 82599ES NIC (10Gbps/port, on socket0);
    b/ BIOS settings: power management options disabled (C-states, P-states,
       SpeedStep) and the CPU set to performance mode;
    c/ host OS boot parameters: isolcpus=0-7, nohz_full=0-7, rcu_nocbs=0-7,
       intel_iommu=on, iommu=pt and 16 x 1G hugepages;
    d/ OVS-DPDK:
         1) version: OVS-2.6.1, DPDK-16.07.2 (using the x86_64-ivshmem-linuxapp-gcc target);
         2) configuration: 2 physical ports (dpdk0 and dpdk1, vfio-pci driver) and
            2 vhost-user ports (vhost0, vhost1) were added to the OVS bridge (br0),
            and 1 PMD core (pinned to core 0, on socket0) was used for forwarding.
            The forwarding rules were "in_port=dpdk0,action=output:vhost0" and
            "in_port=vhost1,action=output:dpdk0".
    e/ irq affinity: irqbalance killed and smp_affinity of all irqs set to 0xff00 (cores 8-15);
    f/ RT priority: RT priority changed for ksoftirqd (chrt -fp 2 $tid),
       rcuos (chrt -fp 3 $tid) and rcuob (chrt -fp 2 $tid).
2. VM setting:
    a/ hypervisor: QEMU-2.8.0 and KVM;
    b/ QEMU command:

       qemu-system-x86_64 -enable-kvm -drive file=$IMAGE,if=virtio -cpu host -smp 3 -m 4G -boot c \
            -name $NAME -vnc :$VNC_INDEX -net none \
            -object memory-backend-file,id=mem,size=4G,mem-path=/dev/hugepages,share=on \
            -mem-prealloc -numa node,memdev=mem \
            -chardev socket,id=char1,path=$VHOSTDIR/vhost0 \
            -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
            -device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:14,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,rx_queue_size=1024,indirect_desc=on \
            -chardev socket,id=char2,path=$VHOSTDIR/vhost1 \
            -netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
            -device virtio-net-pci,netdev=net2,mac=52:54:00:00:00:15,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off,rx_queue_size=1024,indirect_desc=on

    c/ Guest OS: Ubuntu 14.04;
    d/ Guest OS boot parameters: isolcpus=0-1, nohz_full=0-1, rcu_nocbs=0-1, and 1 x 1G hugepage;
    e/ irq affinity and RT priority: irqs moved off the isolated vcpus (vcpu0, vcpu1)
       and their RT priority changed;
    f/ Guest forwarding application: examples/l2fwd built on DPDK-16.07.2 (using the
       ivshmem target). l2fwd forwards packets from one port to the other, and each
       port has its own polling thread to receive packets;
    g/ App configuration: the two virtio ports (vhost0, vhost1, using the
       uio_pci_generic driver) were used by l2fwd, and l2fwd had 2 polling threads
       running on vcpu0 and vcpu1 (pinned to physical core1 and core2, on socket0).

3. Traffic generator:
    a/ A Spirent TestCenter with 2 x 10G ports was used to generate traffic;
    b/ 1 flow of 64B packets was generated from one port and sent to dpdk0, and
       packets were received and counted on the other port.

Here are my results:

1. Max throughput (non-zero packet-loss case): 2.03 Gbps
2. Max throughput (zero packet-loss case): 100 ~ 200 Mbps

I also got some information about the packet loss from the packet statistics in
OVS and l2fwd: when the input traffic is larger than 200 Mbps there appear to be
3 packet-loss points -- OVS rx from the physical NIC (RX queue full), OVS tx to
the vhost port (vhost rx queue full) and l2fwd tx to the vhost port (vhost tx
queue full).

I don't know why the difference between the above 2 cases is so large, and I
suspect I have misconfigured my testbed. Could someone share their experience
with me? Thanks a lot!
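One way to narrow down which of the three suspected loss points dominates is to
read the per-port counters from inside the guest while traffic is running. A
minimal sketch along these lines, assuming the stock l2fwd is extended with a
periodic call; the fixed two-port loop and the printf reporting are
illustrative, not part of l2fwd itself.

/*
 * Sketch only: periodically dump per-port counters inside the guest so drops
 * can be correlated with the OVS-side statistics. imissed grows when the RX
 * ring is full, oerrors when TX fails; counters a given PMD does not maintain
 * simply stay at zero.
 */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void
dump_port_stats(void)
{
	uint8_t port;

	for (port = 0; port < 2; port++) {	/* the two virtio ports in the guest */
		struct rte_eth_stats st;

		memset(&st, 0, sizeof(st));
		rte_eth_stats_get(port, &st);
		printf("port %u: rx=%" PRIu64 " tx=%" PRIu64
		       " imissed=%" PRIu64 " ierrors=%" PRIu64
		       " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
		       port, st.ipackets, st.opackets,
		       st.imissed, st.ierrors, st.oerrors, st.rx_nombuf);
	}
}

imissed growing on a virtio port points at the guest polling threads not
keeping up, while drops that show up only in the host-side statistics point at
the single PMD core; if the OVS build provides it, ovs-appctl
dpif-netdev/pmd-stats-show gives the matching host-side view.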