From: Patrick Mahan
Date: Wed, 29 May 2013 11:24:51 -0700
To: Damien Millescamps
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] Best example for showing throughput?

On May 29, 2013, at 7:07 AM, Damien Millescamps wrote:

> On 05/28/2013 09:15 PM, Patrick Mahan wrote:
>> So the overhead cost is almost 70%?
>>
>> Can this ever do line rate? Under what conditions? It has been my experience that the industry standard is to test throughput using these 64-byte packets.
>
> This overhead can actually be explained considering the PCIe 2.1 [1] standard and the 82599 specifications [2].
>

Damien,

Thanks very much for this explanation of the overhead costs associated with the 64-byte packet size (and the references). I have just recently started looking at what it takes to do 10GE using off-the-shelf components, and having the PCIe overhead explained so clearly helps hugely!

Patrick

> To sum up, for each packet the adapter needs to first send a read request for a 16-byte packet descriptor (cf. [2]), to which it will receive a read answer. Then the adapter must issue either a read or a write request to the packet's physical address for the size of the packet.
> The frame format for a PCIe read or write request is composed of a start of frame, a sequence number, a header, the data, an LCRC and an end of frame (cf. [1]). The overhead we are talking about here is more than 16 bytes per PCIe message. In addition, the PCIe physical layer uses a 10-bits-per-byte (8b/10b) encoding, adding further overhead.
> Now if you apply this to a 64-byte packet, you should notice that the overhead is well above 70% (4 messages plus the descriptor and data size, times the 10b/8b encoding, which should come to around 83% if I didn't miss anything).
>
> However, if we end up with a more limited overhead it is because the 82599 implements thresholds that let it batch descriptor reads and write-backs (cf. [2], WTHRESH for example), reducing the overhead to a little more than 70% with the default DPDK parameters.
>
> You can achieve line rate for 64-byte packets on each port independently. When using both ports simultaneously, you can achieve line rate with packet sizes above 64 bytes. In the post to which I redirected you, Alexander talked about 256-byte packets.
> But if you take the time to compute the total throughput needed on the PCIe bus as a function of the packet size, you will probably end up with a minimum packet size lower than 256 bytes to achieve line rate simultaneously on both ports.
>
> [1] http://www.pcisig.com/members/downloads/specifications/pciexpress/PCI_Express_Base_r2_1_04Mar09.pdf
> [2] http://www.intel.com/content/dam/doc/datasheet/82599-10-gbe-controller-datasheet.pdf
>
> --
> Damien Millescamps
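
In case anyone wants to plug their own numbers into this, here is a rough Python sketch of the per-packet arithmetic described in the quoted explanation. The framing size per TLP (24 bytes), the TLP count per packet (4) and the "two descriptors per packet" figure are assumptions picked for illustration, not datasheet values, so the percentage it prints moves around with those choices (and drops once descriptor batching kicks in, as Damien notes):

# pcie_overhead.py -- back-of-the-envelope cost of moving one small packet
# across the PCIe bus, following the message sequence described above.
# The per-TLP framing size and the TLP count per packet are assumptions
# chosen for illustration, not values taken from the 82599 datasheet.

ENCODING = 10.0 / 8.0   # PCIe gen2 physical layer: 8b/10b, 10 bits per byte
TLP_FRAMING = 24        # assumed framing bytes per TLP (start of frame, sequence
                        # number, header, LCRC, end of frame) -- "more than 16 bytes"
DESCRIPTOR = 16         # an 82599 RX/TX descriptor is 16 bytes

def wire_bytes(pkt_size, tlps=4, descriptors=2):
    """Estimated byte-times on the PCIe link for one Ethernet packet.

    tlps        -- TLPs exchanged per packet (descriptor fetch request and
                   completion, packet data transfer, descriptor write-back)
    descriptors -- descriptors moved per packet (one fetched, one written back)
    """
    data = pkt_size + descriptors * DESCRIPTOR
    return (data + tlps * TLP_FRAMING) * ENCODING

if __name__ == "__main__":
    pkt = 64
    total = wire_bytes(pkt)
    print(f"~{total:.0f} byte-times on the PCIe link per {pkt}-byte packet")
    print(f"overhead: {1 - pkt / total:.0%}")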
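
And a sketch of the back-of-the-envelope computation suggested in the last quoted paragraph: how much PCIe bandwidth two ports at line rate need as a function of packet size, compared with what a PCIe 2.0 x8 slot can deliver per direction. The per-packet cost model (4 TLPs, two 16-byte descriptors, no batching) is again an assumption; even with these pessimistic numbers it breaks even below 256 bytes, and batching lowers the threshold further:

# line_rate_threshold.py -- rough estimate of the smallest packet size at
# which both 10GbE ports of an 82599 can run at line rate through a
# PCIe 2.0 x8 slot.  The per-packet cost model is an assumption.

LANES = 8
LANE_RAW_BPS = 5.0e9                      # 5 GT/s per lane (PCIe 2.x)
# Usable payload bytes per second per direction, once 8b/10b is paid for.
PCIE_BYTES_PER_SEC = LANES * LANE_RAW_BPS * 0.8 / 8   # = 4e9 B/s

ETH_OVERHEAD = 20                         # preamble + SFD + inter-frame gap
LINE_RATE_BPS = 10e9                      # per port
PORTS = 2

def pcie_bytes(pkt_size, tlps=4, descriptors=2, tlp_framing=24, desc_size=16):
    """Unencoded bytes moved across PCIe per packet (the 8b/10b cost is
    already accounted for in PCIE_BYTES_PER_SEC above)."""
    return pkt_size + descriptors * desc_size + tlps * tlp_framing

def packets_per_sec(pkt_size):
    """Line-rate packet rate of one 10GbE port for a given frame size."""
    return LINE_RATE_BPS / ((pkt_size + ETH_OVERHEAD) * 8)

for size in range(64, 513):
    needed = PORTS * packets_per_sec(size) * pcie_bytes(size)
    if needed <= PCIE_BYTES_PER_SEC:
        print(f"both ports sustainable at line rate from ~{size}-byte packets")
        print(f"({needed / 1e9:.2f} GB/s needed, "
              f"{PCIE_BYTES_PER_SEC / 1e9:.2f} GB/s available)")
        break
else:
    print("not sustainable below 512-byte packets under these assumptions")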