DPDK patches and discussions
From: "Mattias Rönnblom" <hofors@lysator.liu.se>
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	"Sevincer, Abdullah" <abdullah.sevincer@intel.com>,
	Stephen Hemminger <stephen@networkplumber.org>,
	"thomas@monjalon.net" <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Tyler Retzlaff <roretzla@linux.microsoft.com>, nd <nd@arm.com>
Subject: Re: quick thread in DLB2
Date: Thu, 14 Sep 2023 10:09:10 +0200	[thread overview]
Message-ID: <7a343f4c-bc49-4f35-6518-24c52b09aad8@lysator.liu.se> (raw)
In-Reply-To: <DBAPR08MB581412CB8A5D96821D2B962798F0A@DBAPR08MB5814.eurprd08.prod.outlook.com>

On 2023-09-13 22:56, Honnappa Nagarahalli wrote:
> 
> 
>> -----Original Message-----
>> From: Mattias Rönnblom <hofors@lysator.liu.se>
>> Sent: Wednesday, September 13, 2023 10:48 AM
>> To: Sevincer, Abdullah <abdullah.sevincer@intel.com>; Stephen Hemminger
>> <stephen@networkplumber.org>; thomas@monjalon.net
>> Cc: dev@dpdk.org; Tyler Retzlaff <roretzla@linux.microsoft.com>
>> Subject: Re: quick thread in DLB2
>>
>> On 2023-09-11 16:28, Sevincer, Abdullah wrote:
>>> Mattias,
>>> Yes that’s correct.
>>>
>>>
>>
>> Is there no cleaner and more robust way to achieve the same result?
>> For example, by accessing /proc, or better, a DPDK abstraction of the same.
> There are similar issues in other areas. For example, CPUs with large core counts have a larger interconnect. The SLC-to-CPU distance starts to matter and the memory latency increases. The distance between cores on the interconnect also impacts lock behavior. We probably need a common mechanism/library to export such details.

To make DSW (and other work schedulers) work better on systems with SMT, 
it would be useful to know which lcores are hardware thread siblings.

Topology information related to CPU core capacity in heterogeneous systems 
(e.g., big.LITTLE) could be used for similar purposes.

The list goes on, but one wouldn't need to address all use cases in the 
v1 API.

Something like hwloc(7), but DPDK native.

> Not sure how much of this would be a security risk.
> 

What do you have in mind? A DPDK library has no more privileges than the 
application running on top of it. From a security point of view, as far 
as I can see, what we are talking about here is mere convenience and 
portability.

>>
>>> -----Original Message-----
>>> From: Mattias Rönnblom <hofors@lysator.liu.se>
>>> Sent: Friday, September 8, 2023 12:28 AM
>>> To: Sevincer, Abdullah <abdullah.sevincer@intel.com>; Stephen
>>> Hemminger <stephen@networkplumber.org>; Thomas Monjalon
>>> <thomas@monjalon.net>
>>> Cc: dev@dpdk.org; Tyler Retzlaff <roretzla@linux.microsoft.com>
>>> Subject: Re: quick thread in DLB2
>>>
>>> On 2023-09-08 00:09, Sevincer, Abdullah wrote:
>>>> Hi Stephen,
>>>> It is probing ports for the best CPU. Yes, it collects cycles. We may
>>>> rework it in the future.
>>>
>>> Best, in what sense? Is this some kind of topology exploration? One DLB
>>> port being closer to (cheaper to access for) certain cores?
>>>
>>>> Open to suggestions.
>>>>
>>>> -----Original Message-----
>>>> From: Stephen Hemminger <stephen@networkplumber.org>
>>>> Sent: Wednesday, September 6, 2023 12:45 PM
>>>> To: Thomas Monjalon <thomas@monjalon.net>
>>>> Cc: Sevincer, Abdullah <abdullah.sevincer@intel.com>; dev@dpdk.org;
>>>> Tyler Retzlaff <roretzla@linux.microsoft.com>
>>>> Subject: Re: quick thread in DLB2
>>>>
>>>> On Fri, 01 Sep 2023 16:08:48 +0200
>>>> Thomas Monjalon <thomas@monjalon.net> wrote:
>>>>
>>>>> Hello Abdullah,
>>>>>
>>>>> In the DLB2 code, I see a thread is created for a single operation:
>>>>> In drivers/event/dlb2/pf/base/dlb2_resource.c
>>>>> pthread_create(&pthread, NULL, &dlb2_pp_profile_func,
>>>>> &dlb2_thread_data[i]); and just after:
>>>>> pthread_join(pthread, NULL);
>>>>>
>>>>> Can we avoid creating this thread?
>>>>> I guess no, because it must spawn on a specific CPU.
>>>>>
>>>>>
>>>>
>>>> The per thread data seems to break lots of expectations in EAL.
>>>> It all seems to be about capturing the number of cycles on different cores.
>>>> Looks like a mess.


Thread overview: 10+ messages
2023-09-01 14:08 Thomas Monjalon
2023-09-04 19:13 ` Sevincer, Abdullah
2023-09-06 19:45 ` Stephen Hemminger
2023-09-07 22:09   ` Sevincer, Abdullah
2023-09-07 23:37     ` Stephen Hemminger
2023-09-08  7:28     ` Mattias Rönnblom
2023-09-11 14:28       ` Sevincer, Abdullah
2023-09-13 15:48         ` Mattias Rönnblom
2023-09-13 20:56           ` Honnappa Nagarahalli
2023-09-14  8:09             ` Mattias Rönnblom [this message]
