DPDK patches and discussions
From: "Varghese, Vipin" <vipin.varghese@amd.com>
To: "Mattias Rönnblom" <hofors@lysator.liu.se>,
	ferruh.yigit@amd.com, dev@dpdk.org
Subject: Re: [RFC 0/2] introduce LLC aware functions
Date: Mon, 2 Sep 2024 06:09:21 +0530	[thread overview]
Message-ID: <d0eb351b-250b-4d09-8bfb-8a69ea14aab6@amd.com> (raw)
In-Reply-To: <45f26104-ad6c-4e42-8446-d8b51ac3f2dd@lysator.liu.se>

<snipped>

Thank you Mattias for the comments and questions; let me try to 
explain below.

> We shouldn't have a separate CPU/cache hierarchy API instead?

Based on the intention to bring in CPU lcores which share the same L3 
(for better cache hits and a less noisy neighbor), the current API 
focuses on the Last Level Cache. But if the suggestion is `there are 
SoCs where the L2 cache is also shared, and the new API should 
provision for that`, I am also comfortable with the thought.

>
> Could potentially be built on the 'hwloc' library.

There are 3 reasons we did not explore this path on AMD SoCs:

1. Depending on the hwloc version and the kernel version, certain SoC 
hierarchies are not available.

2. CPU NUMA and IO (memory & PCIe) NUMA are independent on AMD EPYC 
SoCs.

3. It adds an extra library dependency that has to be made available 
for this to work.


Hence we have tried to use the Linux-documented generic layer of 
`sysfs CPU cache`.
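
For reference, a minimal sketch of the kind of sysfs read this relies 
on (illustrative only; it assumes the common Linux layout where 
`index3` is the L3 cache, which should be confirmed via the `level` 
and `type` files on a given SoC):

    #include <stdio.h>

    /* Sketch: read which CPUs share the L3 (cache index3) with `cpu`
     * from the Linux sysfs CPU-cache interface. On success, `buf`
     * holds a cpulist string such as "0-7" or "0-7,128-135". */
    static int
    get_l3_shared_cpu_list(unsigned int cpu, char *buf, size_t len)
    {
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%u/cache/index3/shared_cpu_list",
                 cpu);
        f = fopen(path, "r");
        if (f == NULL)
            return -1;
        if (fgets(buf, (int)len, f) == NULL) {
            fclose(f);
            return -1;
        }
        fclose(f);
        return 0;
    }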

I will try to explore hwloc more and check whether other libraries 
within DPDK leverage the same.
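
For context, enumerating shared-L3 groups with hwloc would look 
roughly like the sketch below (written against the hwloc 2.x API; 
untested on the SoC hierarchies mentioned above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <hwloc.h>

    /* Sketch: print the cpuset of every L3 cache object hwloc
     * reports for the machine it runs on. */
    static void
    dump_l3_groups(void)
    {
        hwloc_topology_t topo;
        int i, n;

        hwloc_topology_init(&topo);
        hwloc_topology_load(topo);

        n = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_L3CACHE);
        for (i = 0; i < n; i++) {
            hwloc_obj_t obj =
                hwloc_get_obj_by_type(topo, HWLOC_OBJ_L3CACHE,
                                      (unsigned int)i);
            char *s;

            hwloc_bitmap_asprintf(&s, obj->cpuset);
            printf("L3 #%d: cpus %s\n", i, s);
            free(s);
        }
        hwloc_topology_destroy(topo);
    }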

>
> I much agree cache/core topology may be of interest of the application
> (or a work scheduler, like a DPDK event device), but it's not limited to
> LLC. It may well be worthwhile to care about which cores shares L2
> cache, for example. Not sure the RTE_LCORE_FOREACH_* approach scales.

Yes, totally understood; on some SoCs, multiple lcores share the same 
L2 cache.


Can we rework the API to be rte_get_cache_<function>, where the user 
argument is the desired index? (A hypothetical prototype is sketched 
after the list below.)

1. index-1: SMT threads

2. index-2: threads sharing the same L2 cache

3. index-3: threads sharing the same L3 cache

4. index-MAX: threads sharing the last level cache
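
To make the shape concrete, a hypothetical prototype (name and 
signature are illustrative only, not a final proposal):

    /* Hypothetical sketch: fill `lcores` with up to `n` lcore ids
     * that share the cache at `level` with `lcore_id`; returns the
     * number found. level 1 = SMT siblings, 2 = L2, 3 = L3,
     * UINT_MAX = last level cache. */
    unsigned int
    rte_get_cache_lcores(unsigned int level, unsigned int lcore_id,
                         unsigned int lcores[], unsigned int n);

A caller wanting the L3 group of lcore 5 would then do something like 
`cnt = rte_get_cache_lcores(3, 5, ids, RTE_DIM(ids));`.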

>
>> < Function: Purpose >
>> ---------------------
>>   - rte_get_llc_first_lcores: Retrieves all the first lcores in the 
>> shared LLC.
>>   - rte_get_llc_lcore: Retrieves all lcores that share the LLC.
>>   - rte_get_llc_n_lcore: Retrieves the first n or skips the first n 
>> lcores in the shared LLC.
>>
>> < MACRO: Purpose >
>> ------------------
>> RTE_LCORE_FOREACH_LLC_FIRST: iterates through all first lcore from 
>> each LLC.
>> RTE_LCORE_FOREACH_LLC_FIRST_WORKER: iterates through all first worker 
>> lcore from each LLC.
>> RTE_LCORE_FOREACH_LLC_WORKER: iterates lcores from LLC based on hint 
>> (lcore id).
>> RTE_LCORE_FOREACH_LLC_SKIP_FIRST_WORKER: iterates lcores from LLC 
>> while skipping first worker.
>> RTE_LCORE_FOREACH_LLC_FIRST_N_WORKER: iterates through `n` lcores 
>> from each LLC.
>> RTE_LCORE_FOREACH_LLC_SKIP_N_WORKER: skips the first `n` lcores, 
>> then iterates through the remaining lcores in each LLC.
>>
While the MACROs are simple wrappers invoking the appropriate API, 
can this be worked out in this fashion?
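
For illustration, a wrapper in the spirit of the existing 
RTE_LCORE_FOREACH could look like the sketch below 
(rte_get_next_llc_lcore() is an assumed helper here, not part of the 
RFC):

    /* Hypothetical sketch: iterate the worker lcores sharing the LLC
     * with `hint`, mirroring how RTE_LCORE_FOREACH wraps
     * rte_get_next_lcore(). */
    #define RTE_LCORE_FOREACH_LLC_WORKER(i, hint)                      \
        for ((i) = rte_get_next_llc_lcore((hint), (unsigned int)-1, 1);\
             (i) < RTE_MAX_LCORE;                                      \
             (i) = rte_get_next_llc_lcore((hint), (i), 1))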

<snipped>
