From: "Wiles, Keith"
To: Mohammad Malihi
Cc: users@dpdk.org
Date: Wed, 23 Nov 2016 14:56:23 +0000
Subject: Re: [dpdk-users] High Packet missed rate

> On Nov 23, 2016, at 5:51 AM, Mohammad Malihi wrote:
>
> Hi
> I'm new to DPDK and I am seeing a high packet drop ratio when sending
> traffic from pktgen to l2fwd at rates greater than 1.5 Gb/s.
> To benchmark DPDK forwarding at a 10 Gb/s rate, I set up the very
> simple test environment depicted below:
>
> ---------------------------------        ---------------------------------
> |Server 1 port 0 (Intel 82599ES)| -----> |Server 2 port 0 (Intel 82599ES)|
> ---------------------------------        ---------------------------------
>                                                         |
>                                                   (bridge via
>                                                    l2fwd app)
>                                                         |
> ---------------------------------        ---------------------------------
> |Server 1 port 1 (Intel 82599ES)| <----- |Server 2 port 1 (Intel 82599ES)|
> ---------------------------------        ---------------------------------
>
> Sending 64-byte packets at a 10 Gb/s rate from server 1 is done by
> running pktgen with the following parameters:
>   -c 0x07 -n 12 -- -P -m "1.0, 2.1"
> and the following commands in interactive mode:
>   set 0 rate 100
>   set 0 size 64
>   start 0
>
> On the other side (server 2), the l2fwd app forwards the packets with
> the parameters:
>   -c 0x07 -n 12 -- -p 0x03 -q 1 -T 10
> (core 0 receives packets from port 0 and core 1 sends them out port 1)

What is the core mapping here? Are the two cores on different sockets?
You can use the python script in the tools directory to print the info
out. We should not see drops at that rate unless the packets are moving
between sockets.
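For reference, that script is tools/cpu_layout.py on current trees (the
path may differ on your release). Running it on server 2 shows which
physical socket each core belongs to; the output looks roughly like
this (illustrative values, not from your machines):

  $ python tools/cpu_layout.py
          Socket 0        Socket 1
          --------        --------
  Core 0  [0, 20]         [10, 30]
  Core 1  [1, 21]         [11, 31]
  ...

If cores 0 and 1 (and the slot the 82599 sits in) are not all on the
same socket, every packet crosses the QPI link, which by itself can
explain drops well below line rate.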

>
> Hardware specifications (the same on both servers):
> Processors : 2 Intel Xeon 2690 v2 (each of them has 20 cores)
> NIC : Intel 82599ES 10 GbE with 2 interfaces, in an x8 PCIe Gen 2
>   slot (5 GT/s)
> Memory : 264115028 KB
>
> Hugepages on each side total 128 GB (64 GB of 1 GB hugepages per NUMA
> node), and all ports and the used cores (0, 1) are on the same NUMA
> node.
>
> I've made some modifications to the l2fwd app to show the packet drop
> count by calling rte_eth_stats_get() in the print_stats() function and
> reading the imissed member of rte_eth_stats.
>
> The results on screen show that at rates greater than 1.5 Gb/s,
> packets are dropped by the hardware.
> I wonder why packets are missed at rates below 10 Gb/s.
>
> Thanks in advance

Regards,
Keith
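
P.S. Reading imissed through rte_eth_stats_get() is the right way to
spot hardware-level drops. A minimal sketch of that kind of check looks
like this (illustrative only, not your exact patch; print_drop_stats is
a made-up helper name):

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>

  /* Print RX drop counters for one port. imissed counts packets the
   * NIC dropped because no RX descriptors were free (RX queue full);
   * rx_nombuf counts mbuf allocation failures inside the PMD. */
  static void
  print_drop_stats(uint8_t portid)
  {
          struct rte_eth_stats stats;

          rte_eth_stats_get(portid, &stats);
          printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
                 " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
                 (unsigned int)portid, stats.ipackets, stats.imissed,
                 stats.ierrors, stats.rx_nombuf);
  }

If imissed grows while rx_nombuf stays at zero, the RX core is not
draining the ring fast enough, which again points at core/socket
placement rather than a mempool problem.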