From: Juan Pablo L.
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: users@dpdk.org
Subject: Re: lcores clarification
Date: Wed, 19 Oct 2022 14:31:01 +0000
Stephen, I understand, thank you!!

From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Tuesday, October 18, 2022 11:20 PM
To: Juan Pablo L. <jpablolorenzetti@hotmail.com>
Cc: users@dpdk.org <users@dpdk.org>
Subject: Re: lcores clarification
 
On Tue, 18 Oct 2022 23:04:48 +0000
Juan Pablo L. <jpablolorenzetti@hotmail.com> wrote:

> Hi Stephen, thank you for your advice. I understand that, like you said, the same applies to other things, not only DPDK, but I was asking about running different lcores on a CPU .. I understand that an lcore is a thread (a pthread in this case) running on a CPU .. I also understand that many lcores can run on the same CPU (right?) and said lcores have separate per-lcore variables, environment, etc., so one lcore is a single thread spinning in a while(resource is busy) loop, NOT sharing anything with the other lcores running on the same CPU; that is, each lcore has its own ring, for example, waiting for packets coming from different sockets, etc. ... and the lcore is NOT running threads inside it or anything like that ...
>
> I have to admit that I am still trying to understand this whole lcore, CPU, scheduling, etc. in DPDK ... thank you for your input!
> ________________________________
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, October 18, 2022 10:41 PM
> To: Juan Pablo L. <jpablolorenzetti@hotmail.com>
> Cc: users@dpdk.org <users@dpdk.org>
> Subject: Re: lcores clarification
>
> On Tue, 18 Oct 2022 22:06:04 +0000
> Juan Pablo L. <jpablolorenzetti@hotmail.com> wrote:
>
> > Hello guys, I have a doubt that I have not been able to clarify with the docs, so maybe you can help me, please ...
> >
> > I have a process that I want to run on a set of CPUs (performing the same task) but I want to run all of them at the same time, so my question is:
> >
> > if I configure something like "--lcores='2@(5-7)'", will that make my lcore with id 2 run simultaneously (at the same time, in parallel) on CPUs 5, 6, 7, or will it run my lcore id 2 only once at any given time on either CPU 5, 6 or 7?
> >
> > would it be better if I configure different lcore ids for the same task and assign each to an individual CPU? e.g. "--lcores='2@5,3@6,4@7'", even though lcore ids 2, 3, 4 are really the same job?
> >
> > I think my question is just a matter of what is the best way to configure EAL to accomplish what I want, that is, to run the same process on all available CPUs concurrently. Even though, I believe, option 2 seems to be the obvious way, I just want to make sure I am not doing something that DPDK could already do for me ....
> >
> > A bonus question, if it's not too much: since lcores are really pthreads (in Linux) and I have isolated the CPUs from the kernel scheduler, my guess is that if I have different lcores (performing the same or different tasks) running on the same CPU, DPDK will be doing the scheduling, right? ..
> >
> > thank you very much!!!!
> > 
>
> If you have multiple threads running on the same lcore, you better not use locks or rings,
> or mbufs, or most of DPDK, because these features all work like:
>   while (resource is busy)
>      spin wait
>
> So if one thread holds a resource and gets preempted, then when another thread runs
> and tries to acquire the same resource, that thread will spin until the scheduler
> decides to run the other thread.
>
> This is not unique to DPDK; the same problem is described in the pthread_spinlock
> man page.
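
To make the busy-wait concrete, below is a minimal sketch of the pattern described above using DPDK's rte_spinlock API; the worker function, lock variable, counter and stop flag are hypothetical. rte_spinlock_lock() spins on the lock word instead of sleeping, so if two lcores share one CPU and the lock holder is preempted, the other lcore burns its whole timeslice spinning.

#include <stdbool.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_spinlock.h>

/* Hypothetical shared resource protected by a DPDK spinlock. */
static rte_spinlock_t res_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t shared_counter;
static volatile bool stop;

/* Hypothetical lcore worker; imagine two lcores mapped to the same CPU
 * both running this loop. */
static int
worker(void *arg __rte_unused)
{
	while (!stop) {
		/* rte_spinlock_lock() busy-waits: it loops until the lock is
		 * free rather than blocking in the kernel. If the lcore that
		 * currently holds the lock has been preempted, this lcore
		 * spins for its entire timeslice before the holder can run
		 * again and call rte_spinlock_unlock(). */
		rte_spinlock_lock(&res_lock);
		shared_counter++;
		rte_spinlock_unlock(&res_lock);
	}
	return 0;
}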

Running multiple DPDK lcores on the same CPU risks the scheduler hitting you and
will hurt performance.

Running a DPDK lcore with an affinity cpuset that spans multiple CPUs will
not be faster. It will just cause the scheduler to move that thread among CPUs.

The fastest is to do the default and pin the lcore to a single CPU,
and use CPU isolation so that no other process runs on that CPU.

Sure you can do lots of weird and wonderful things to rearrange lcores
but you're just making the application unstable.
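
For reference, here is a minimal sketch of the setup discussed above: one lcore id per isolated CPU, with the same worker launched on each. The command line in the comment and the worker function are illustrative (assuming CPUs 4-7 were removed from the general scheduler with the isolcpus= kernel boot parameter); only the EAL calls themselves are real DPDK APIs.

#include <stdio.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>

/* Hypothetical worker: the same job, launched once per worker lcore,
 * each lcore pinned by EAL to exactly one CPU. */
static int
same_job(void *arg __rte_unused)
{
	printf("lcore %u pinned to its own CPU\n", rte_lcore_id());
	/* ... poll this lcore's dedicated ring / RX queue here ... */
	return 0;
}

int
main(int argc, char **argv)
{
	/* Launched as, e.g.:
	 *   ./app --lcores='1@4,2@5,3@6,4@7'
	 * so lcore ids 2, 3 and 4 each map to exactly one CPU (5, 6, 7),
	 * rather than '2@(5-7)', which creates a single lcore 2 whose
	 * thread the scheduler may move among CPUs 5-7. */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	unsigned int lcore_id;

	/* Run the same function on every worker lcore; the main lcore
	 * (id 1 in the example mapping) stays free for control work. */
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_remote_launch(same_job, NULL, lcore_id);

	rte_eal_mp_wait_lcore();
	rte_eal_cleanup();
	return 0;
}

With a mapping like that, each worker thread owns one isolated CPU, so nothing in the earlier spinlock sketch ever spins behind a preempted lock holder.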
