From: Alexander Kiselev <kiselev99@gmail.com>
To: "Hu, Xuekun"
Cc: Shawn Lewis, users@dpdk.org
Date: Thu, 14 Apr 2016 22:31:01 +0300
Subject: Re: [dpdk-users] Lcore impact

2016-04-14 20:49 GMT+03:00 Hu, Xuekun:
> Do the two lcores belong to one processor, or to two? What is the
> memory footprint of the system-call threads? If the memory footprint
> is big (larger than the LLC size) and the two lcores are on the same
> processor, then it could have an impact on the packet-processing
> thread.

Those two lcores belong to one processor; it's a single-processor
machine. Both lcores allocate a lot of memory and use the full DPDK
arsenal: LPM, mempools, hashes, and so on. But during the test the core
doing the socket data transfer uses only a small 16k buffer for
sending, and sending is all it does; it doesn't touch any of the other
allocated memory structures. The processing core, in turn, uses
rte_lpm, which is big, but in my test it holds only about 10 routes, so
I think the amount of "hot" memory is not very large. Still, I can't
say whether it's bigger than the L3 cache or not. Should I use a
profiler to see whether the socket operations cause a lot of cache
misses on the processing lcore? Is there a tool that lets me do that?
perf, maybe?
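For example (just a sketch: the core id 2 and the exact event names are
placeholders, since they depend on where the lcore is pinned and on the
CPU model), I could count last-level-cache events on the processing
lcore's core while the socket sender runs:

    # Count LLC events only on core 2 (where the processing lcore is
    # pinned), system-wide on that core, over a 10-second window.
    perf stat -a -C 2 -e LLC-loads,LLC-load-misses sleep 10

If LLC-load-misses jump once the other lcore starts writing to the
socket, that would point to cache contention; "perf list" shows which
cache events the CPU actually exposes.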
> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Alexander Kiselev
> Sent: Friday, April 15, 2016 1:19 AM
> To: Shawn Lewis
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Lcore impact
>
> I've already seen this document and have used these tricks many times.
> But this time I'm sending the data locally over localhost. There are
> not even any NICs bound to Linux on my machine, so there are no NIC
> interrupts I can pin to a CPU. So what do you propose?
>
> > On 14 Apr 2016, at 20:06, Shawn Lewis wrote:
> >
> > You have to work with IRQBalancer as well:
> >
> > http://www.intel.com/content/dam/doc/application-note/82575-82576-82598-82599-ethernet-controllers-interrupts-appl-note.pdf
> >
> > That's just an example document that discusses this (it's not so much
> > DPDK-related), but the OS will attempt to balance the interrupts when
> > you actually want to remove them or pin them down.
> >
> >> On Thu, Apr 14, 2016 at 1:02 PM, Alexander Kiselev <kiselev99@gmail.com> wrote:
> >>
> >>> On 14 Apr 2016, at 19:35, Shawn Lewis wrote:
> >>>
> >>> Lots of things...
> >>>
> >>> For one, just because you have a process running on an lcore does
> >>> not mean that's all that runs on it. Unless you have told the
> >>> kernel at boot NOT to use those specific cores, they will be used
> >>> for many OS-related things.
> >>
> >> Generally yes, but unless I start sending data to the socket there
> >> is no packet loss. I did about 10 test runs in a row and everything
> >> was fine, and there is no other application on that test machine
> >> using the CPU cores.
> >>
> >> So the question is: why do these socket operations influence the
> >> other lcore?
> >>
> >>> IRQBalance
> >>> System OS operations.
> >>> Other applications.
> >>>
> >>> So by doing file I/O you are generating interrupts, and where those
> >>> interrupts get serviced is up to IRQBalancer. It could be any one
> >>> of your cores.
> >>
> >> That is a good point. I could use the CPU affinity feature to bind
> >> the interrupt handler to a core not used in my test. But I send the
> >> data locally over localhost. Is it possible to use CPU affinity in
> >> that case?
> >>
> >>>> On Thu, Apr 14, 2016 at 12:31 PM, Alexander Kiselev <kiselev99@gmail.com> wrote:
> >>>>
> >>>> Could someone give me any hints about what could cause performance
> >>>> issues in a situation where one lcore doing a lot of Linux system
> >>>> calls (read/write on a socket) slows down another lcore doing
> >>>> packet forwarding? In my test the forwarding lcore doesn't share
> >>>> any memory structures with the lcore that sends the test data to
> >>>> the socket, and the two lcores are pinned to different processor
> >>>> cores. So theoretically they shouldn't have any impact on each
> >>>> other, but they do: once one lcore starts sending data to the
> >>>> socket, the other lcore starts dropping packets. Why?

--
Best regards,
Alexander Kiselev
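P.S. On the interrupt-pinning point in the thread above, here is the
kind of thing I understand is meant (a sketch only: the IRQ number 24
and the CPU masks are placeholders, not values from my machine):

    # Stop the balancer first so it doesn't rewrite the affinity masks.
    systemctl stop irqbalance

    # Let IRQ 24 be serviced only by cores 0 and 1 (hex mask 0x3),
    # keeping it off the DPDK lcores.
    echo 3 > /proc/irq/24/smp_affinity

    # For localhost traffic there is no NIC interrupt, but the loopback
    # receive work can be steered in the same spirit with RPS:
    echo 3 > /sys/class/net/lo/queues/rx-0/rps_cpus

I haven't verified the RPS knob for the localhost case; it's just the
mechanism I would try first.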