From: "Hu, Xuekun"
To: Alexander Kiselev, Shawn Lewis
Cc: "users@dpdk.org"
Date: Thu, 14 Apr 2016 17:49:00 +0000
Subject: Re: [dpdk-users] Lcore impact
Are the two lcores on one processor or on two? And what is the memory footprint of the system-call threads? If the footprint is large (bigger than the LLC size) and the two lcores share a processor, the system-call thread could impact the packet-processing thread.

-----Original Message-----
From: users [mailto:users-bounces@dpdk.org] On Behalf Of Alexander Kiselev
Sent: Friday, April 15, 2016 1:19 AM
To: Shawn Lewis
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Lcore impact

I've already seen this document and have used these tricks many times. But this time I am sending data locally over localhost. There are not even any NICs bound to Linux on my machine, so there are no NIC interrupts I could pin to a CPU. So what do you propose?

> On 14 Apr 2016, at 20:06, Shawn Lewis wrote:
>
> You have to work with the IRQ balancer as well.
>
> http://www.intel.com/content/dam/doc/application-note/82575-82576-82598-82599-ethernet-controllers-interrupts-appl-note.pdf
>
> That is just an example document which discusses this (not much of it is DPDK-related)...
> But the OS will attempt to balance the interrupts when you actually want to remove or pin them down...
>
>> On Thu, Apr 14, 2016 at 1:02 PM, Alexander Kiselev wrote:
>>
>>> On 14 Apr 2016, at 19:35, Shawn Lewis wrote:
>>>
>>> Lots of things...
>>>
>>> One: just because you have a process running on an lcore does not mean that's all that runs on it. Unless you have told the kernel at boot NOT to use those specific cores, those cores will be used for many OS-related things.
>>
>> Generally yes, but unless I start sending data to the socket there is no packet loss. I did about 10 test runs in a row and everything was OK. And there is no other application running on that test machine that uses CPU cores.
>>
>> So the question is: why do these socket operations influence the other lcore?
>>
>>> IRQBalance.
>>> System OS operations.
>>> Other applications.
>>>
>>> So by doing file I/O you are generating interrupts, and where those interrupts get serviced is up to the IRQ balancer. So it could be any one of your cores.
>>
>> That is a good point. I could use the CPU affinity feature to bind the interrupt handler to a core not used in my test. But I send data locally over localhost. Is it possible to use CPU affinity in that case?
>>
>>>> On Thu, Apr 14, 2016 at 12:31 PM, Alexander Kiselev wrote:
>>>> Could someone give me any hints about what could cause performance issues in a situation where one lcore doing a lot of Linux system calls (read/write on a socket) slows down another lcore doing packet forwarding? In my test the forwarding lcore doesn't share any memory structures with the other lcore that sends test data to the socket. Both lcores are pinned to different processor cores, so theoretically they shouldn't have any impact on each other. But they do: once one lcore starts sending data to the socket, the other lcore starts dropping packets. Why?
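To make the IRQ-balancing and core-isolation suggestions in the thread concrete, here is a minimal shell sketch. It is illustrative only: the IRQ number 24 is hypothetical, the core numbers are arbitrary, and on a loopback-only setup like Alexander's there may be no NIC IRQ to pin at all, in which case only the LLC check and the isolcpus part apply.

```shell
# How big is the LLC the two lcores may be sharing? (Hu's question)
getconf LEVEL3_CACHE_SIZE

# Stop irqbalance so manually set affinities are not rewritten
sudo systemctl stop irqbalance

# Pin a (hypothetical) IRQ 24 to CPU 0 only, keeping it off the lcores
echo 1 | sudo tee /proc/irq/24/smp_affinity

# To keep the scheduler off the forwarding cores entirely, boot with
# the kernel parameter:
#   isolcpus=2,3
# and then run the DPDK lcores there, e.g. with the EAL option: -l 2,3
```

The isolcpus route addresses Shawn's point directly: without it, the kernel is free to schedule its own work on the cores the lcores are pinned to.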
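On the CPU-affinity question for the localhost case: even without NIC interrupts, the thread doing the socket writes can itself be pinned, which keeps its system-call work (and, typically, the loopback softirq processing it triggers on the same CPU) away from the forwarding cores. A minimal Linux/glibc sketch, not taken from the thread (CPU 0 is an arbitrary choice; pick a core the forwarding lcores do not use):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to a single CPU so its syscalls stay
 * off the packet-forwarding cores. */
static int pin_self_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    if (pin_self_to_cpu(0) != 0) {
        perror("pthread_setaffinity_np");
        return 1;
    }

    /* Read the mask back to confirm the pinning took effect. */
    cpu_set_t check;
    CPU_ZERO(&check);
    pthread_getaffinity_np(pthread_self(), sizeof(check), &check);
    printf("pinned=%d\n", CPU_ISSET(0, &check) ? 1 : 0);
    return 0;
}
```

In a DPDK application the lcores themselves are already pinned by the EAL, so this would only be needed for the extra, non-EAL thread doing the socket I/O.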