From: "husainee" <husainee.plumber@nevisnetworks.com>
Date: Wed, 9 Sep 2015 15:17:04 +0530
To: "Dumitrescu, Cristian", dev@dpdk.org
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.
Message-ID: <55F00018.2020100@nevisnetworks.com>
In-Reply-To: <3EB4FA525960D640B5BDFFD6A3D89126478B8A04@IRSMSX108.ger.corp.intel.com>

Hi Cristian,

Please find the config file attached. I am sending packets from port 0 and receiving on port 1.

By random packet drops I mean that the number of packets dropped is not the same on every run. Here are some results:

Frame rate 1488095.2 fps, 64-byte packets (100% of 1000 Mbps):
  Run 1 - 0.0098% dropped (20-22 million packets)
  Run 2 - 0.021%  dropped (20-22 million packets)
  Run 3 - 0.0091% dropped (20-22 million packets)

Frame rate 744047.62 fps, 64-byte packets (50% of 1000 Mbps):
  Run 1 - 0.0047% dropped (20-22 million packets)
  Run 2 - 0.0040% dropped (20-22 million packets)
  Run 3 - 0.0040% dropped (20-22 million packets)

Frame rate 148809.52 fps, 64-byte packets (10% of 1000 Mbps):
  Run 1 - 0 dropped (20-22 million packets)
  Run 2 - 0 dropped (20-22 million packets)
  Run 3 - 0 dropped (20-22 million packets)

These are the hardware NIC setting differences between ip_pipeline and l2fwd:

  parameter                ip_pipeline   l2fwd
  jumbo frame              1             0
  hw_ip_checksum           1             0
  rx_conf.wthresh          4             0
  rx_conf.rx_free_thresh   64            32
  tx_conf.pthresh          36            32
  burst size               64            32

We tried making the ip_pipeline settings the same as l2fwd, but the results did not change. I have not tried a 10GbE NIC; I do not have 10GbE test equipment.
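For reference, 1488095.2 fps is simply 64-byte line rate on 1 GbE: 10^9 / ((64 + 20) * 8) = 1,488,095 frames/s, the extra 20 bytes being preamble + inter-frame gap.

To make "settings the same as l2fwd" concrete, here is a rough sketch using the plain ethdev calls. It is illustrative only, not the code from either application: the port id, descriptor counts (128/512), the mbuf pool and the RX pthresh/hthresh values are placeholders, and the burst-size difference (64 vs 32) lives in the rte_eth_rx_burst()/rte_eth_tx_burst() calls rather than in this init path.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Illustrative sketch: align the fields from the table above with the
 * l2fwd values. Everything not in the table is a placeholder. */
static int
configure_port_like_l2fwd(uint8_t port_id, struct rte_mempool *pool)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			.jumbo_frame    = 0,  /* ip_pipeline had 1 */
			.hw_ip_checksum = 0,  /* ip_pipeline had 1 */
		},
	};
	struct rte_eth_rxconf rx_conf = {
		.rx_thresh = { .pthresh = 8, .hthresh = 8, .wthresh = 0 }, /* wthresh 4 -> 0 */
		.rx_free_thresh = 32,                                      /* 64 -> 32       */
	};
	struct rte_eth_txconf tx_conf = {
		.tx_thresh = { .pthresh = 32, .hthresh = 0, .wthresh = 0 }, /* pthresh 36 -> 32 */
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
	if (ret < 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 128,
			rte_eth_dev_socket_id(port_id), &rx_conf, pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), &tx_conf);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}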
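Since the drop percentage varies from run to run, it may also help to narrow down where the packets are actually lost before tuning further. A minimal sketch follows (the helper name is made up; assume it is called periodically, e.g. from the master loop) that dumps the ethdev counters so NIC RX-ring overruns (imissed) can be told apart from mbuf exhaustion (rx_nombuf):

#include <stdio.h>
#include <string.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical diagnostic helper: print the per-port counters that usually
 * explain "random" drops. imissed = frames the NIC dropped because the RX
 * ring was full, rx_nombuf = RX failures due to mbuf allocation, ierrors =
 * frames the NIC flagged as bad. */
static void
check_port_drops(uint8_t port_id)
{
	struct rte_eth_stats stats;

	memset(&stats, 0, sizeof(stats));
	rte_eth_stats_get(port_id, &stats);

	printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
	       " imissed=%" PRIu64 " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       (unsigned)port_id, stats.ipackets, stats.opackets,
	       stats.imissed, stats.ierrors, stats.rx_nombuf);
}

If imissed grows on the input port, the RX core is occasionally not draining the i350 ring fast enough at 1.488 Mpps, which would fit the run-to-run variation; if these counters stay at zero on both ports, the loss is more likely happening between the RX and TX threads.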
regards,
husainee

On 09/08/2015 06:32 PM, Dumitrescu, Cristian wrote:
> Hi Husainee,
>
> Can you please explain what you mean by random packet drops? What percentage of the input packets gets dropped, does it happen on every run, does the number of dropped packets vary from run to run, etc.?
>
> Are you also able to reproduce this issue with other NICs, e.g. a 10GbE NIC?
>
> Can you share your config file?
>
> Can you please double-check the low-level NIC settings between the two applications, i.e. the settings in the structures link_params_default, default_hwq_in_params and default_hwq_out_params in the ip_pipeline file config_parse.c vs. their equivalents in l2fwd? The only thing I can think of right now is that one of the low-level threshold values for the Ethernet link may not be tuned for your 1GbE NIC.
>
> Regards,
> Cristian
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of husainee
>> Sent: Tuesday, September 8, 2015 7:56 AM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] Random packet drops with ip_pipeline on R730.
>>
>> Hi
>>
>> I am using a Dell R730 with dual sockets. The processor in each socket is an
>> Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz (6 cores).
>> The CPU layout has socket 0 with cores 0,2,4,6,8,10 and socket 1 with
>> cores 1,3,5,7,9,11.
>> The NIC is an i350.
>>
>> Cores 2-11 are isolated using the isolcpus kernel parameter. We are
>> running the ip_pipeline application with only the Master, RX and TX threads
>> (Flow and Route have been removed from the cfg file). The threads are run as
>> follows:
>>
>> - Master on CPU core 2
>> - RX on CPU core 4
>> - TX on CPU core 6
>>
>> 64-byte packets are sent from an Ixia at different speeds, but we are
>> seeing random packet drops. The same exercise was done on cores 3, 5, 7 and
>> the results are the same.
>>
>> We tried the l2fwd app and it works fine with no packet drops.
>>
>> Hugepages are 1024 x 2M per socket.
>>
>> Can anyone suggest what could be the reason for these random packet
>> drops?
>>
>> regards
>> husainee

[Attachment: ip_pipeline.cfg (the standard Intel BSD license header at the top of the file is omitted here)]

; Core configuration
[core 0]
type = MASTER
queues in  = 5 -1 -1 -1 -1 -1 -1 -1
queues out = 4 -1 -1  -1 -1 -1 -1 -1

[core 1]
type = RX
queues in  = -1 -1 -1 -1 -1 -1 -1 4
queues out =  0  1  2  3 -1 -1 -1 5

;[core 2]
;type = FC
;queues in  =  0  1  2  3 -1 -1 -1 9
;queues out =  4  5  6  7 -1 -1 -1 11


[core 2]
type = TX
queues in  =  1 0 2 3 -1 -1 -1 -1
queues out = -1 -1 -1 -1 -1 -1 -1 -1
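A note on my reading of the attached config (the exact queue semantics depend on the ip_pipeline version): with the FC section commented out, the MASTER core exchanges messages with the RX core over queues 4/5, while the RX core's output queues 0-3 feed the TX core's input queues, so the packet path is NIC -> RX thread -> software queues -> TX thread -> NIC. If the counters in the sketch above show no NIC-level drops, the software queues between the RX and TX threads would be the next place to look.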