From: John Alexander <John.Alexander@datapath.co.uk>
To: Elena Agostini <eagostini@nvidia.com>,
"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH v2] gpudev: pin GPU memory
Date: Tue, 4 Jan 2022 17:28:58 +0000 [thread overview]
Message-ID: <DB6PR0902MB2070F16AA72A6C1C331A2E09B44A9@DB6PR0902MB2070.eurprd09.prod.outlook.com> (raw)
In-Reply-To: <DM6PR12MB41075D97594DB4EEA4A63160CD4A9@DM6PR12MB4107.namprd12.prod.outlook.com>
What happens when the Nvidia GPU driver kernel callback occurs to invalidate the pinned GPU memory region? Doesn't the NIC need to cease all DMA transfers to/from that region before the kernel callback can complete?
From: Elena Agostini <eagostini@nvidia.com>
Sent: 04 January 2022 13:55
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: dev@dpdk.org
Subject: Re: [PATCH v2] gpudev: pin GPU memory
> 04/01/2022 03:41, eagostini@nvidia.com:
> > From: Elena Agostini <eagostini@nvidia.com>
> >
> > Enable the possibility to make a GPU memory area accessible from
> > the CPU.
> >
> > GPU memory has to be allocated via rte_gpu_mem_alloc().
> >
> > This patch allows the gpudev library to pin, through the GPU driver,
> > a chunk of GPU memory and to return a memory pointer usable
> > by the CPU to access the GPU memory area.
> >
> > Signed-off-by: Elena Agostini <eagostini@nvidia.com>
> [...]
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Pin a chunk of GPU memory to make it accessible from the CPU
>
> You should define what means "pin" exactly.
> Which properties should we expect?
>
Thanks for reviewing, this is the kind of discussion I wanted to have.
Maybe "pin" is too GDRCopy-oriented.
Here I want to make a GPU memory buffer visible from the CPU. In the case
of NVIDIA, this means the GPU memory address has to be pinned (the virtual
address doesn't change) and DMA-mapped.
Maybe the name should be something like rte_gpu_mem_to_cpu(), which is more
descriptive and generic.
> > + * using the memory pointer returned by the function.
>
> Which function should return the pointer?
> rte_gpu_mem_pin is returning an int.
An oversight; I will fix it.
>
>
> > + * GPU memory has to be allocated via rte_gpu_mem_alloc().
>
> Why pinning is not done by rte_gpu_mem_alloc()?
> Should it be a flag?
rte_gpu_mem_alloc() allocates memory on the GPU, which doesn't necessarily
have to be shared (pinned) to make it visible from the CPU.
>
> > + *
> > + * @param dev_id
> > + * Device ID requiring pinned memory.
> > + * @param size
> > + * Number of bytes to pin.
> > + * Requesting 0 will do nothing.
> > + * @param ptr
> > + * Pointer to the GPU memory area to be pinned.
> > + * NULL is a no-op accepted value.
> > +
> > + * @return
> > + * A pointer to the pinned GPU memory usable by the CPU, otherwise NULL and rte_errno is set:
> > + * - ENODEV if invalid dev_id
> > + * - EINVAL if reserved flags
>
> Which reserved flags?
>
> > + * - ENOTSUP if operation not supported by the driver
> > + * - E2BIG if size is higher than limit
> > + * - ENOMEM if out of space
>
> Is out of space relevant for pinning?
Yes, let me add it
>
> > + * - EPERM if driver error
> > + */
> > +__rte_experimental
> > +int rte_gpu_mem_pin(int16_t dev_id, size_t size, void *ptr);
Thread overview: 16+ messages
2022-01-04 2:34 [PATCH v1] " eagostini
2022-01-04 2:41 ` [PATCH v2] " eagostini
2022-01-04 12:51 ` Thomas Monjalon
2022-01-04 13:55 ` Elena Agostini
2022-01-04 17:28 ` John Alexander [this message]
2022-01-04 17:53 ` Elena Agostini
2022-01-08 0:04 ` [PATCH v3] gpudev: expose " eagostini
2022-01-27 3:47 ` [PATCH v4 1/2] gpudev: expose GPU memory to CPU eagostini
2022-01-27 3:47 ` [PATCH v4 2/2] app/test-gpudev: test cpu_map/cpu_unmap functions eagostini
2022-01-27 6:55 ` [PATCH v4 1/2] gpudev: expose GPU memory to CPU Wang, Haiyue
2022-02-10 10:38 ` Elena Agostini
2022-02-11 4:46 ` Wang, Haiyue
2022-01-27 3:50 ` [PATCH v5 " eagostini
2022-01-27 3:50 ` [PATCH v5 2/2] app/test-gpudev: test cpu_map/cpu_unmap functions eagostini
2022-02-10 15:12 ` [PATCH v5 1/2] gpudev: expose GPU memory to CPU Thomas Monjalon
2022-01-04 2:39 [PATCH v2] gpudev: pin GPU memory eagostini