From: "Varghese, Vipin" <Vipin.Varghese@amd.com>
To: "Morten Brørup" <mb@smartsharesystems.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"roretzla@linux.microsoft.com" <roretzla@linux.microsoft.com>,
"bruce.richardson@intel.com" <bruce.richardson@intel.com>,
"john.mcnamara@intel.com" <john.mcnamara@intel.com>,
"dmitry.kozliuk@gmail.com" <dmitry.kozliuk@gmail.com>,
"jerinj@marvell.com" <jerinj@marvell.com>,
"David Christensen" <drc@linux.ibm.com>,
"Wathsala Vithanage" <wathsala.vithanage@arm.com>,
"Min Zhou" <zhoumin@loongson.cn>,
"Stanislaw Kardach" <stanislaw.kardach@gmail.com>,
"konstantin.ananyev@huawei.com" <konstantin.ananyev@huawei.com>
Cc: "ruifeng.wang@arm.com" <ruifeng.wang@arm.com>,
"mattias.ronnblom@ericsson.com" <mattias.ronnblom@ericsson.com>,
"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
"stephen@networkplumber.org" <stephen@networkplumber.org>,
"Yigit, Ferruh" <Ferruh.Yigit@amd.com>,
"honnappa.nagarahalli@arm.com" <honnappa.nagarahalli@arm.com>
Subject: RE: [RFC v3 1/3] eal/lcore: add topology based functions
Date: Tue, 5 Nov 2024 02:17:49 +0000 [thread overview]
Message-ID: <PH7PR12MB8596BF09963460CEAE17582E82522@PH7PR12MB8596.namprd12.prod.outlook.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35E9F872@smartserver.smartshare.dk>
Snipped
> > >
> > > I recall from the Cache Stashing community call... There is some
> > > ACPI
> > function to
> > > get the (opaque) "location IDs" of various parts in the system, to
> > > be
> > used for setting
> > > the Cache Stashing hints.
> > > Is there only one "ACPI location ID" (I don't know the correct name)
> > shared by the
> > > L3 cache and the v-cache in AMD EPYC, or do they have each their own?
> >
> > At least on AMD EPYC, the stashing ID updated to either MSI-X table or
> > Device Specific Mode is core-id.
>
> Are you saying that on AMD EPYC only L2 caches have a Stashing ID, so no other
> CPU caches can be stashed into?
On AMD EPYC Zen 4 (limited to certain OPNs) and Zen 5, cache stashing is done at the L2 cache level.
This is done by passing the core-id as the steering tag (an opaque value).
> If yes, then it's a non-issue for Cache Stashing, since it doesn't need to care about
> L3 cache or v-cache.
Yes, that is what I am trying to convey: irrespective of the platform (Arm, PowerPC, RISC-V, Intel or AMD), cache
stashing as specified by the PCI-SIG is left to each vendor's implementation. On AMD EPYC this is currently based
on the core-id (which is the hint to the platform to place the data into the L2 cache).
Other platforms might support stashing into L1, L2 or L3, and for those I agree that a cache id might be used as the steering tag.
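To make that distinction concrete, a minimal illustrative sketch in C follows; the helper name `epyc_steering_tag_from_lcore` is hypothetical and not part of any DPDK or vendor API, it only shows that on AMD EPYC the opaque steering tag resolves to the core-id, whereas other vendors may expect an L1/L2/L3 cache id instead.

#include <rte_lcore.h>

/*
 * Illustrative only: on AMD EPYC (Zen 4/Zen 5) the opaque steering tag
 * used for cache stashing is the core-id, i.e. a hint to stash into that
 * core's L2. Other vendors may expect a cache id here, obtained from
 * their own topology/firmware interfaces.
 */
static inline int
epyc_steering_tag_from_lcore(unsigned int lcore_id)
{
	/* translate the DPDK logical core index to the CPU (core) id and
	 * use it directly as the opaque steering tag */
	return rte_lcore_to_cpu_id(lcore_id);
}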
>
> >
> >
> > > If they are not exposed as one ID, but two separate IDs, the
> > > Topology
> > API might
> > > need to reflect this, so it can be used in the Cache Stashing API.
> >
> > I have different view on the same and had shared this with Ajit
> > (Broadcom) and others. To my understanding, use of rte_ethdev API used
> > for caching hints should be inline to rte_lcore. Depending upon the
> > platform (ARM's specific implementation), the lcore gets translated to
> > L2 or L3 cache ID within the PMD.
>
> The rte_ethdev API for cache stashing provides a higher level of abstraction, yes.
>
> But the layer below it - the Stashing API used by the PMDs to obtain Stashing ID
> from "location ID" - could use the "location ID" structure type defined by the
> Topology library's lower layer.
Yes, the lcore-id is consumed from the user via the ethdev API, and the PMD translates it internally.
Based on my current understanding this can be done in two ways:
1. the translation is done using hwloc library API calls, or
2. the rte_topology structure is used: during probing, the cache ids for L1, L2, L3 and L4 are probed as well.
To achieve the second option, one has to add `unsigned int cache_id` to `struct core_domain_mapping`.
This allows the `rte_eal_topology_init` probe to store the cache_id; a sketch is shown below.
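As a rough sketch of option 2 (with option 1's hwloc lookup reused at probe time): `struct core_domain_mapping` and `rte_eal_topology_init` are named in this thread, but the existing fields shown, the helper `probe_l2_cache_id`, and the particular hwloc calls are assumptions for illustration only.

#include <hwloc.h>

/* Hypothetical shape of the mapping; only the added cache_id field
 * follows the proposal above, the other fields are assumed. */
struct core_domain_mapping {
	unsigned int core_id;    /* assumed existing field */
	unsigned int domain_id;  /* assumed existing field */
	unsigned int cache_id;   /* proposed addition */
};

/* One possible way rte_eal_topology_init() could fill cache_id. */
static int
probe_l2_cache_id(hwloc_topology_t topo, unsigned int cpu_id,
		  struct core_domain_mapping *map)
{
	hwloc_obj_t pu, l2;

	pu = hwloc_get_pu_obj_by_os_index(topo, cpu_id);
	if (pu == NULL)
		return -1;

	l2 = hwloc_get_ancestor_obj_by_type(topo, HWLOC_OBJ_L2CACHE, pu);
	if (l2 == NULL)
		return -1;

	/* storing the logical index is an arbitrary choice here; the OS
	 * index or a vendor-specific id may be more appropriate */
	map->cache_id = l2->logical_index;
	return 0;
}

The same lookup with HWLOC_OBJ_L1CACHE/HWLOC_OBJ_L3CACHE (and HWLOC_OBJ_L4CACHE where present) would give the other cache levels.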
>
> >
> > Note: the current patch introduces of Topology aware grouping, which
> > helps to run application better or tiles or chiplets sharing same
> > L2|L3 or IO domain.
>
> Both libraries (Topology and Cache Stashing) need to have detailed information
> about the hardware, although they use the information for two different purposes.
>
> Maybe they could share a common lower layer for the system topology, perhaps
> just a few header files. Or maybe the Cache Stashing library should depend on the
> Topology library as its "lower layer" to provide the hardware information it needs.
>
> I'm not saying that it must be so. I'm only saying that I suppose these two libraries
> have a lot in common, and they could try to take advantage of this, to provide a
> more uniform API at their lower layers.
Yes, I agree; my only point here is that we should first get rte_topology into the DPDK ecosystem.
Since `struct core_domain_mapping` can easily be updated, the cache-id extension can be done as a next step.
Note: assuming we can merge this for release 25.03 and we all concur on the `cache stashing API`, we can get it tested on AMD EPYC and other platforms alike.
I will share v4 later today.