From: "Varghese, Vipin" <Vipin.Varghese@amd.com>
To: David Marchand <david.marchand@redhat.com>,
Michael Piszczek <mpiszczek@ddn.com>,
"Yigit, Ferruh" <Ferruh.Yigit@amd.com>,
"Namburu, Chandu-babu" <chandu@amd.com>,
"Modali, Bhagyada" <Bhagyada.Modali@amd.com>,
"Uttarwar, Sunil Prakashrao" <SunilPrakashrao.Uttarwar@amd.com>,
"Boyer, Andrew" <Andrew.Boyer@amd.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH v2] pci: read amd iommu virtual address width
Date: Mon, 10 Oct 2022 13:12:44 +0000
Message-ID: <MN2PR12MB30856F0297E228F9D754B2C982209@MN2PR12MB3085.namprd12.prod.outlook.com>
In-Reply-To: <CAJFAV8xSqKGsUvVR7h5wj9AcWojrt1GagTCjxsPpH5JTStevxg@mail.gmail.com>
[Public]
Hi Michael,
Thank you for looking into the change. Please find my comments inline below.
<snipped>
> >
> > Add code to read the virtual address width for AMD processors.
> >
> > Signed-off-by: Michael Piszczek <mpiszczek@ddn.com>
> > ---
> >  drivers/bus/pci/linux/pci.c | 43 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 43 insertions(+)
> >
> > diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
> > index e521459870..0c6d79ca74 100644
> > --- a/drivers/bus/pci/linux/pci.c
> > +++ b/drivers/bus/pci/linux/pci.c
> > @@ -4,6 +4,7 @@
> >
> > #include <string.h>
> > #include <dirent.h>
> > +#include <sys/stat.h>
> >
> > #include <rte_log.h>
> > #include <rte_bus.h>
> > @@ -492,6 +493,38 @@ rte_pci_scan(void)
> >  }
> >
> > #if defined(RTE_ARCH_X86)
> > +
> > +static uint64_t
> > +pci_device_amd_iommu_support_va(const struct rte_pci_addr *addr)
> > +{
> > +#define RD_AMD_CAP_VASIZE_SHIFT 15
> > +#define RD_AMD_CAP_VASIZE_MASK (0x7F << RD_AMD_CAP_VASIZE_SHIFT)
> > +
> > + char filename[PATH_MAX];
> > + FILE *fp;
> > + uint64_t amd_cap_reg = 0;
> > +
> > + snprintf(filename, sizeof(filename),
> > + "%s/" PCI_PRI_FMT "/iommu/amd-iommu/cap",
> > + rte_pci_get_sysfs_path(), addr->domain, addr->bus, addr->devid,
> > + addr->function);
> > +
> > + fp = fopen(filename, "r");
> > + if (fp == NULL)
> > + return 0;
> > +
> > + /* We have an Amd IOMMU */
> > + if (fscanf(fp, "%" PRIx64, &amd_cap_reg) != 1) {
> > + RTE_LOG(ERR, EAL, "%s(): can't read %s\n", __func__, filename);
> > + fclose(fp);
> > + return 0;
> > + }
> > +
> > + fclose(fp);
> > +
> > +	return ((amd_cap_reg & RD_AMD_CAP_VASIZE_MASK) >>
> > +			RD_AMD_CAP_VASIZE_SHIFT) + 1;
> > +}
> > +
> > bool
> > pci_device_iommu_support_va(const struct rte_pci_device *dev)
> > {
> > @@ -501,6 +534,16 @@ pci_device_iommu_support_va(const struct rte_pci_device *dev)
> > char filename[PATH_MAX];
> > FILE *fp;
> > uint64_t mgaw, vtd_cap_reg = 0;
> > + struct stat s;
> > +
> > + if (stat("/sys/class/iommu/ivhd2/amd-iommu", &s) == 0) {
AMD EPYC can be configured into various logical NUMA divisions (NPS) for both 1P and 2P systems on Naples, Rome and Milan. This leads to various combinations, such as 1, 2 or 4 NUMA domains for a single socket, and the set of IOMMU units exposed under sysfs varies accordingly. Hence hard-coding the check on `ivhd2` could fail even for a valid NPS configuration.
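For illustration, the unit names are enumerated at boot and are not fixed (the listing below is hypothetical; actual names and counts depend on the socket count and NPS setting):
```
# ls /sys/class/iommu/
ivhd0  ivhd1  ivhd2  ivhd3
```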
> > + mgaw = pci_device_amd_iommu_support_va(addr);
Instead of checking for the availability of a fixed device node, one can exercise the per-device iommu capability entry directly. For example:
```
# ls /sys/bus/pci/devices/0000\:41\:00.0/iommu/amd-iommu/cap -l
-r--r--r-- 1 root root 4096 Oct 10 06:10 /sys/bus/pci/devices/0000:41:00.0/iommu/amd-iommu/cap
```
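Reading that file returns the raw capability register value that the patch parses (same hypothetical device as above; output omitted, as the value is platform specific):
```
# cat /sys/bus/pci/devices/0000:41:00.0/iommu/amd-iommu/cap
```
Bits 21:15 of that value (the `RD_AMD_CAP_VASIZE_*` field in the patch) encode the supported virtual address width minus one.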
> > + if (mgaw > 0) {
> > + rte_mem_set_dma_mask(mgaw);
> > + return true;
> > + }
> > + return false;
> > + }
> >
> > snprintf(filename, sizeof(filename),
> > "%s/" PCI_PRI_FMT "/iommu/intel-iommu/cap",
Hence my suggestion would be to retain the same code flow as the original function, with an added check for `amd-iommu` alongside `intel-iommu` for x86 devices. The additional function `pci_device_amd_iommu_support_va` can then be avoided; a sketch follows below.
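Something along these lines (untested sketch only: it reuses the `VTD_CAP_MGAW_*` macros from the existing function and the `RD_AMD_CAP_VASIZE_*` values from the patch above, assumes the usual pci.c includes, and the helper name `pci_device_iommu_mgaw` is mine):
```c
/* Sketch: one path that probes the per-device intel-iommu cap file
 * first and falls back to amd-iommu, instead of a separate AMD helper. */
static uint64_t
pci_device_iommu_mgaw(const struct rte_pci_addr *addr)
{
	char filename[PATH_MAX];
	FILE *fp;
	uint64_t cap_reg = 0;
	bool is_amd = false;

	snprintf(filename, sizeof(filename),
		"%s/" PCI_PRI_FMT "/iommu/intel-iommu/cap",
		rte_pci_get_sysfs_path(), addr->domain, addr->bus,
		addr->devid, addr->function);
	fp = fopen(filename, "r");
	if (fp == NULL) {
		/* No Intel IOMMU entry for this device; try AMD. */
		snprintf(filename, sizeof(filename),
			"%s/" PCI_PRI_FMT "/iommu/amd-iommu/cap",
			rte_pci_get_sysfs_path(), addr->domain, addr->bus,
			addr->devid, addr->function);
		fp = fopen(filename, "r");
		is_amd = true;
	}
	if (fp == NULL)
		return 0; /* no per-device IOMMU entry at all */

	if (fscanf(fp, "%" PRIx64, &cap_reg) != 1) {
		RTE_LOG(ERR, EAL, "%s(): can't read %s\n", __func__, filename);
		fclose(fp);
		return 0;
	}
	fclose(fp);

	/* Both vendors encode the address width as (field value + 1). */
	if (is_amd)
		return ((cap_reg & RD_AMD_CAP_VASIZE_MASK) >>
				RD_AMD_CAP_VASIZE_SHIFT) + 1;
	return ((cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
}
```
The caller can then apply the same `rte_mem_set_dma_mask(mgaw)` handling regardless of vendor.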
> > --
> > 2.34.1
> >
>
> Sorry, I picked you guys with @amd.com mail addresses.
> If there is someone with knowledge of this piece of AMD hw available, can
> this patch be reviewed?
> Thanks.
>
>
> --
> David Marchand