From: Thomas Monjalon <thomas@monjalon.net>
To: Elena Agostini <eagostini@nvidia.com>
Cc: dev@dpdk.org
Subject: Re: [PATCH v2] gpudev: pin GPU memory
Date: Tue, 04 Jan 2022 13:51:47 +0100 [thread overview]
Message-ID: <12925392.uLZWGnKmhe@thomas> (raw)
In-Reply-To: <20220104024100.14318-1-eagostini@nvidia.com>
04/01/2022 03:41, eagostini@nvidia.com:
> From: Elena Agostini <eagostini@nvidia.com>
>
> Enable the possibility to make a GPU memory area accessible from
> the CPU.
>
> GPU memory has to be allocated via rte_gpu_mem_alloc().
>
> This patch allows the gpudev library to pin, through the GPU driver,
> a chunk of GPU memory and to return a memory pointer usable
> by the CPU to access the GPU memory area.
>
> Signed-off-by: Elena Agostini <eagostini@nvidia.com>
[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Pin a chunk of GPU memory to make it accessible from the CPU
You should define exactly what "pin" means here.
Which properties should we expect from pinned memory?
> + * using the memory pointer returned by the function.
Which function returns this pointer?
rte_gpu_mem_pin() returns an int, not a pointer.
> + * GPU memory has to be allocated via rte_gpu_mem_alloc().
Why is pinning not done by rte_gpu_mem_alloc()?
Should it be a flag of that function instead?
> + *
> + * @param dev_id
> + * Device ID requiring pinned memory.
> + * @param size
> + * Number of bytes to pin.
> + * Requesting 0 will do nothing.
> + * @param ptr
> + * Pointer to the GPU memory area to be pinned.
> + * NULL is a no-op accepted value.
> + *
> + * @return
> + * A pointer to the pinned GPU memory usable by the CPU, otherwise NULL and rte_errno is set:
> + * - ENODEV if invalid dev_id
> + * - EINVAL if reserved flags
Which reserved flags?
> + * - ENOTSUP if operation not supported by the driver
> + * - E2BIG if size is higher than limit
> + * - ENOMEM if out of space
Is out of space relevant for pinning?
> + * - EPERM if driver error
> + */
> +__rte_experimental
> +int rte_gpu_mem_pin(int16_t dev_id, size_t size, void *ptr);
Thread overview: 16+ messages
2022-01-04 2:34 [PATCH v1] " eagostini
2022-01-04 2:41 ` [PATCH v2] " eagostini
2022-01-04 12:51 ` Thomas Monjalon [this message]
2022-01-04 13:55 ` Elena Agostini
2022-01-04 17:28 ` John Alexander
2022-01-04 17:53 ` Elena Agostini
2022-01-08 0:04 ` [PATCH v3] gpudev: expose " eagostini
2022-01-27 3:47 ` [PATCH v4 1/2] gpudev: expose GPU memory to CPU eagostini
2022-01-27 3:47 ` [PATCH v4 2/2] app/test-gpudev: test cpu_map/cpu_unmap functions eagostini
2022-01-27 6:55 ` [PATCH v4 1/2] gpudev: expose GPU memory to CPU Wang, Haiyue
2022-02-10 10:38 ` Elena Agostini
2022-02-11 4:46 ` Wang, Haiyue
2022-01-27 3:50 ` [PATCH v5 " eagostini
2022-01-27 3:50 ` [PATCH v5 2/2] app/test-gpudev: test cpu_map/cpu_unmap functions eagostini
2022-02-10 15:12 ` [PATCH v5 1/2] gpudev: expose GPU memory to CPU Thomas Monjalon
2022-01-04 2:39 [PATCH v2] gpudev: pin GPU memory eagostini