
<snipped>
> >
> > For the naming, would "rte_get_next_sibling_core" (or lcore if you
> > prefer) be a clearer name than just adding "ex" on to the end of the
> > existing function?
> >
> > Looking logically, I'm not sure about the BOOST_ENABLED and
> > BOOST_DISABLED flags you propose - in a system with multiple possible
> > standard and boost frequencies what would those correspond to? What's
> > also missing is a define for getting actual NUMA siblings i.e. those
> > sharing common memory but not an L3 or anything else.
> >
> > My suggestion would be to have the function take just an integer-type e.g.
> > uint16_t parameter which defines the memory/cache hierarchy level to
> > use, 0 being lowest, 1 next, and so on. Different systems may have
> > different numbers of cache levels so let's just make it a zero-based
> > index of levels, rather than giving explicit defines (except for
> > memory which should probably always be last). The zero-level will be for
> "closest neighbour"
> > whatever that happens to be, with as many levels as is necessary to
> > express the topology, e.g. without SMT, but with 3 cache levels, level
> > 0 would be an L2 neighbour, level 1 an L3 neighbour. If the L3 was
> > split within a memory NUMA node, then level 2 would give the NUMA
> > siblings. We'd just need an API to return the max number of levels along with
> > the iterator.
>
> Sounds like a neat idea to me.
 
Hi Konstantin, I have tried my best to address Bruce's comments. Let me try to recap:
  1. we want a vendor-agnostic API which allows end users to get the list of lcores
  2. this can be based on L1 (SMT), L2, L3, NUMA or TURBO (as of now)
  3. instead of creating multiple different APIs, we would like to add a single API, `rte_get_next_lcore_extnd`, which can be controlled with `flags`
  4. the flag can be a single value or a combination (like L3|TURBO_ENABLED or NUMA|TURBO_ENABLED).
  5. as per my current idea, ease of use can be expanded via macros rather than additional APIs (see the sketch below).
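
A rough sketch of what I have in mind (flag names, values and the exact
signature are placeholders for discussion, not final):

  /* hypothetical domain-selection flags, one bit each */
  #define RTE_LCORE_EXTND_FLAG_L1_SMT  RTE_BIT32(0) /* SMT / shared L1       */
  #define RTE_LCORE_EXTND_FLAG_L2      RTE_BIT32(1) /* shared L2             */
  #define RTE_LCORE_EXTND_FLAG_L3      RTE_BIT32(2) /* shared L3             */
  #define RTE_LCORE_EXTND_FLAG_NUMA    RTE_BIT32(3) /* same memory NUMA node */
  #define RTE_LCORE_EXTND_FLAG_TURBO   RTE_BIT32(4) /* turbo/boost capable   */

  /* single extended iterator, behaviour selected by the flag mask */
  unsigned int rte_get_next_lcore_extnd(unsigned int i, uint32_t flags);

  /* ease of use comes from macro wrappers rather than extra APIs, e.g. */
  #define RTE_LCORE_FOREACH_L3_TURBO(i)                              \
          for ((i) = rte_get_next_lcore_extnd(-1,                    \
                          RTE_LCORE_EXTND_FLAG_L3 |                  \
                          RTE_LCORE_EXTND_FLAG_TURBO);               \
               (i) < RTE_MAX_LCORE;                                  \
               (i) = rte_get_next_lcore_extnd((i),                   \
                          RTE_LCORE_EXTND_FLAG_L3 |                  \
                          RTE_LCORE_EXTND_FLAG_TURBO))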
 
I hope this justifies why we should have one extended API and wrap things with macros.
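
For comparison, my reading of Bruce's level-based suggestion above would be
roughly the following (again only a sketch, the names are placeholders):

  /* how many topology levels this system exposes */
  unsigned int rte_lcore_topology_levels(void);

  /* next lcore sharing the given hierarchy level with lcore i;
   * level 0 = closest neighbour (e.g. L2), last level = memory/NUMA */
  unsigned int rte_get_next_sibling_lcore(unsigned int i, uint16_t level);

Either shape can be wrapped in FOREACH-style macros, so to me the main
difference is whether the selector is a bitmask or a zero-based level index.
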
May I set up a tech discussion call, or add this to an existing one, with Mattias and Honappa too?