From: Juan Pablo L.
To: Stephen Hemminger
Cc: users@dpdk.org
Subject: Re: lcores clarification
Date: Tue, 18 Oct 2022 23:04:48 +0000
List-Id: DPDK usage discussions
Hi Stephen, thank you for your advice. I understand that, as you said, the same applies to other things, not only DPDK, but I was asking about running different lcores on one CPU. I understand that an lcore is a thread (a pthread in this case) running on a CPU.
I also understand that many lcores can run on the same CPU (right?), and that those lcores have separate per-lcore variables, environment, etc. So one lcore is a single thread, NOT sharing anything with the other lcores running on the same CPU; that is, each lcore has its own ring, for example, waiting for packets coming from different sockets, etc., and an lcore is NOT running threads inside it or anything like that.

I have to admit that I am still trying to understand this whole lcore/CPU/scheduling picture in DPDK ... thank you for your input!

________________________________
From: Stephen Hemminger
Sent: Tuesday, October 18, 2022 10:41 PM
To: Juan Pablo L.
Cc: users@dpdk.org
Subject: Re: lcores clarification

On Tue, 18 Oct 2022 22:06:04 +0000
Juan Pablo L. wrote:

> Hello guys, I have a doubt that I have not been able to clarify with the docs, so maybe you can help me please ...
>
> I have a process that I want to run on a set of CPUs (performing the same task), and I want to run all of them at the same time, so my question is:
>
> If I configure something like "--lcores='2@(5-7)'", will that make my lcore with id 2 run simultaneously (at the same time, in parallel) on CPUs 5, 6, and 7, or will it run my lcore id 2 only once at any given time, on either CPU 5, 6, or 7?
>
> Would it be better to configure a different lcore id for each and assign it to an individual CPU, e.g. "--lcores='2@5,3@6,4@7'", even though lcore ids 2, 3, and 4 are really the same job?
>
> I think my question is just a matter of what the best way is to configure EAL to accomplish what I want, that is, to run the same process on all available CPUs concurrently. Option 2 seems to be the obvious way; I just want to make sure I am not doing something that DPDK could already do for me ....
>
> A bonus question, if it's not too much: since lcores are really pthreads (on Linux) and I have isolated the CPUs from the kernel scheduler, my guess is that if I have different lcores (performing the same or different tasks) running on the same CPU, DPDK will be doing the scheduling, right?
>
> thank you very much!!!!
>

If you have multiple threads running on the same lcore, you had better not use locks, or rings, or mbufs, or most of DPDK, because these features all work like:

  while (resource is busy)
     spin wait

So if one thread holds a resource and gets preempted, then when another thread runs and tries to acquire the same resource, that thread will spin until the scheduler decides to run the first thread again.

This is not unique to DPDK; the same problem is described in the pthread_spin_lock man page.