From: Patrick Mahan
Date: Fri, 24 May 2013 11:51:09 -0700
To: Thomas Monjalon
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Best example for showing throughput?

On May 24, 2013, at 8:45 AM, Thomas Monjalon wrote:

> Adding other questions about the packet generator:
>
> 24/05/2013 16:41, Thomas Monjalon:
>> 24/05/2013 16:11, Patrick Mahan:
>>> Intel Xeon E5-2690 (8 physical cores, 16 virtual)
>>
>> How many CPU sockets do you have?
>>
>>> 64 Gbyte DDR3 memory
>>> Intel 82599EB-SPF dual port 10GE interface
>>> CentOS 6.4 (2.6.32-358.6.1.el6.x86_64)
>>> The 82599 is in a 16x PCI-e slot.
>>
>> Check the datasheet of your motherboard.
>> Are you sure it is wired as 16x PCI-e?
>> Is it connected to the right NUMA node?
>>
>>> I have it attached to an IXIA box.
>
> Which packet size are you sending with your packet generator?
> In the case of 64-byte packets (with Ethernet CRC), (64+20)*8 = 672 bits.
> So line rate is 10000/672 = 14.88 Mpps.
> This bandwidth should be supported by your 82599 NIC.

Yes, the Ixia is sending the standard 64-byte packet. The stats show a send
rate of 14.880 Mpps.

> Are you sending and receiving on the 2 ports at the same time?
> Forwarding in the two directions is equivalent to doubling the bandwidth.
> Maybe 14.88*2 = 29.76 Mpps is too much for your hardware.

Yes, I am running traffic both ways. Interestingly, the number of drops
seems consistent in both directions. This makes sense since testpmd is
spinning off a thread to read from each input queue.

> You could also try with 2 ports on 2 different NICs.

Hmmm, not sure if I can lay hands on another 82599 card. This one is a
loaner.

Thanks,

Patrick

>
>>> I have been running the app 'testpmd' in iofwd mode with 2K rx/tx
>>> descriptors and 512 burst/mbcache. I have been varying the number of
>>> queues and, unfortunately, I am not seeing full line rate.
>>
>> What is your command line?
>>
>>> I am seeing about 20-24% drops on the receive side. The number of
>>> queues doesn't seem to matter.
>>
>> If queues are polled by different cores, it should matter.
>>
>>> Question 1: Is 'testpmd' the best application for this type of
>>> testing? If not, which program? Or do I need to roll my own?
>>
>> testpmd is the right application for performance benchmarking.
>> It is also possible to use the l2fwd/l3fwd examples, but you should
>> keep testpmd.
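[For reference, a testpmd invocation matching the parameters Patrick
describes above (io forwarding, 2K rx/tx descriptors, 512 burst/mbcache)
might look roughly like the sketch below. The coremask, memory-channel
count, portmask, and queue/core counts are illustrative placeholders, not
values taken from this thread, and exact option names may vary between
DPDK releases:

    ./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --nb-cores=4 \
        --rxq=4 --txq=4 --rxd=2048 --txd=2048 \
        --burst=512 --mbcache=512

    testpmd> set fwd io
    testpmd> start
    testpmd> show port stats all

With -i (interactive mode), the forwarding mode is selected at the
testpmd> prompt, and per-port stats show how many packets were dropped
on the receive side.]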
>>
>>> Question 2: I have blacklisted the Intel i350 ports on the motherboard
>>> and am using ssh to access the platform. Could this be affecting the
>>> test?
>>
>> You mean the i350 is used for ssh? It shouldn't significantly affect
>> your test.
>
> --
> Thomas
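[As a sanity check on the arithmetic quoted above, a minimal C program
reproducing the 14.88 Mpps and 29.76 Mpps figures. The 20 bytes of
per-frame overhead are the 8-byte preamble plus the 12-byte inter-frame
gap; the 64-byte minimum frame already includes the CRC:

    #include <stdio.h>

    int main(void)
    {
        const double link_bps    = 10e9; /* 10 Gbit/s line rate */
        const double frame_bytes = 64;   /* minimum Ethernet frame, CRC included */
        const double overhead    = 20;   /* 8B preamble + 12B inter-frame gap */

        /* packets per second = bits per second / bits per frame on the wire */
        double pps = link_bps / ((frame_bytes + overhead) * 8);
        printf("64B line rate: %.2f Mpps per direction\n", pps / 1e6);
        printf("Both directions: %.2f Mpps total\n", 2 * pps / 1e6);
        return 0;
    }

This prints 14.88 and 29.76, matching the numbers in the thread.]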