From: Patrick Mahan
Date: Tue, 28 May 2013 12:15:35 -0700
To: Damien Millescamps
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] Best example for showing throughput?
In-Reply-To: <51A12618.3040509@6wind.com>

On May 25, 2013, at 1:59 PM, Damien Millescamps wrote:

> On 05/25/2013 09:23 PM, Damien Millescamps wrote:
>> Hi Patrick,
>>
>> If you are using both ports of the same Niantic at the same time, then
>> you won't be able to reach line rate on both ports.
>
> For a better explanation, you can refer to this post from Alexander
> Duyck from Intel on the linux network mailing list:
>
> http://permalink.gmane.org/gmane.linux.network/207295
>

Interesting article.

Okay, I attempted to find the bus rate of this card this morning (the output of lspci is below). This shows me it is capable of 5 GT/s raw, which works out to 4 Gbps/lane with 8 lanes enabled, and should, theoretically, give 32 Gbps.
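The arithmetic above can be sketched as follows. This is a minimal check, assuming PCIe Gen2's 8b/10b line coding (10 line bits carry 8 data bits), which is what turns 5 GT/s raw into 4 Gbps of data per lane:

```python
# PCIe Gen2 x8 bandwidth sketch (assumes 8b/10b encoding).
RAW_RATE = 5e9        # raw transfer rate per lane, transfers/s (Gen2)
ENCODING = 8 / 10     # 8b/10b: 8 data bits per 10 line bits
LANES = 8             # x8 link width, as reported by lspci

per_lane_bps = RAW_RATE * ENCODING         # 4e9 bits/s per lane
link_bps = per_lane_bps * LANES            # 32e9 bits/s for the link
print(per_lane_bps / 1e9, link_bps / 1e9)  # 4.0 32.0
```

Note this 32 Gbps is the theoretical data-layer ceiling before any TLP/DLLP protocol overhead, which is exactly where the per-packet cost discussed below comes in.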
Reading the article, the suggestion is that, due to the smaller packets, the overhead contributes more than 50% of the PCIe bus traffic. Given that I'm seeing a forwarding rate of only about 12.282 Mpps with ~17% drops in only one direction (RSS enabled, 7 queues enabled with 6 queues used), is the overhead cost almost 70%?

Can this ever do line rate? Under what conditions? It has been my experience that the industry standard is to test throughput using these 64-byte packets.

Output of 'lspci -vvv -s 03:00.0':

03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
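For reference, the 64-byte numbers quoted above can be checked with a short sketch. This assumes the standard Ethernet wire overhead of an 8-byte preamble plus 12 bytes of inter-frame gap per frame, which is how the usual 14.88 Mpps figure for 10GbE minimum-size packets is derived:

```python
# Line-rate check for 64-byte frames on 10GbE (assumed 8B preamble + 12B IFG).
LINK_BPS = 10e9       # 10 Gbps link
FRAME = 64            # minimum Ethernet frame size, bytes
PREAMBLE, IFG = 8, 12 # per-frame wire overhead, bytes

wire_bits = (FRAME + PREAMBLE + IFG) * 8       # 672 bits per frame on the wire
line_rate_mpps = LINK_BPS / wire_bits / 1e6
observed_mpps = 12.282                          # figure quoted above

print(round(line_rate_mpps, 3))                 # 14.881
print(round(observed_mpps / line_rate_mpps, 3)) # 0.825, i.e. ~82.5% of line rate
```

So the observed 12.282 Mpps is roughly 82.5% of the theoretical 14.88 Mpps line rate for minimum-size packets; the remaining gap is what the PCIe descriptor and TLP overhead discussed in the linked post has to account for.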

Regards,

> --
> Damien Millescamps