From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	Elena Agostini <eagostini@nvidia.com>
Subject: [PATCH 2/2] gpudev: add malloc annotations to rte_gpu_mem_alloc
Date: Thu, 17 Oct 2024 15:58:04 -0700
Message-ID: <20241017225844.235401-3-stephen@networkplumber.org>
In-Reply-To: <20241017225844.235401-1-stephen@networkplumber.org>

Add function attributes so that the compiler can detect, at compile
time, use-after-free of GPU memory and calls to the wrong free
function on pointers returned by rte_gpu_mem_alloc().
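
These macros wrap compiler function attributes. A rough sketch of the
intended expansion on toolchains that support them (GCC 11+ or a recent
clang), assuming definitions along the lines of those in rte_common.h:

    __rte_malloc          -> __attribute__((malloc))
    __rte_dealloc(f, n)   -> __attribute__((malloc(f, n)))
    __rte_alloc_size(n)   -> __attribute__((alloc_size(n)))
    __rte_alloc_align(n)  -> __attribute__((alloc_align(n)))

Registering rte_gpu_mem_free() as the deallocator (with the pointer in
argument 2) lets the compiler check that memory returned by
rte_gpu_mem_alloc() is released only through rte_gpu_mem_free() and is
not referenced after it has been freed.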

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/gpudev/rte_gpudev.h | 46 +++++++++++++++++++++--------------------
 1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/lib/gpudev/rte_gpudev.h b/lib/gpudev/rte_gpudev.h
index 0a94a6abc4..7a35cf85d1 100644
--- a/lib/gpudev/rte_gpudev.h
+++ b/lib/gpudev/rte_gpudev.h
@@ -357,6 +357,28 @@ int rte_gpu_callback_unregister(int16_t dev_id, enum rte_gpu_event event,
 __rte_experimental
 int rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info);
 
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Deallocate a chunk of memory allocated with rte_gpu_mem_alloc().
+ *
+ * @param dev_id
+ *   Reference device ID.
+ * @param ptr
+ *   Pointer to the memory area to be deallocated.
+ *   NULL is a no-op accepted value.
+ *
+ * @return
+ *   0 on success, -rte_errno otherwise:
+ *   - ENODEV if invalid dev_id
+ *   - ENOTSUP if operation not supported by the driver
+ *   - EPERM if driver error
+ */
+__rte_experimental
+int rte_gpu_mem_free(int16_t dev_id, void *ptr);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -385,28 +407,8 @@ int rte_gpu_info_get(int16_t dev_id, struct rte_gpu_info *info);
  */
 __rte_experimental
 void *rte_gpu_mem_alloc(int16_t dev_id, size_t size, unsigned int align)
-__rte_alloc_size(2);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice.
- *
- * Deallocate a chunk of memory allocated with rte_gpu_mem_alloc().
- *
- * @param dev_id
- *   Reference device ID.
- * @param ptr
- *   Pointer to the memory area to be deallocated.
- *   NULL is a no-op accepted value.
- *
- * @return
- *   0 on success, -rte_errno otherwise:
- *   - ENODEV if invalid dev_id
- *   - ENOTSUP if operation not supported by the driver
- *   - EPERM if driver error
- */
-__rte_experimental
-int rte_gpu_mem_free(int16_t dev_id, void *ptr);
+	__rte_alloc_size(2) __rte_alloc_align(3)
+	__rte_malloc __rte_dealloc(rte_gpu_mem_free, 2);
 
 /**
  * @warning
-- 
2.45.2
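
For illustration, a minimal sketch of the misuse these annotations let
the compiler flag, assuming the annotated prototypes above; the function
gpu_mem_misuse() and its dev_id/len parameters are hypothetical and not
part of the patch:

    #include <stdlib.h>
    #include <rte_gpudev.h>

    static void gpu_mem_misuse(int16_t dev_id, size_t len)
    {
            /* buf carries the attributes from rte_gpu_mem_alloc() */
            char *buf = rte_gpu_mem_alloc(dev_id, len, 0);

            if (buf == NULL)
                    return;

            /* deliberate misuse: libc free() is not the registered
             * deallocator, so recent GCC warns (-Wmismatched-dealloc) */
            free(buf);

            rte_gpu_mem_free(dev_id, buf);

            /* deliberate misuse: access after the registered deallocator
             * ran; the deallocator association lets the compiler or
             * analyzer flag this (e.g. GCC's -Wuse-after-free or
             * -fanalyzer) */
            buf[0] = 0;
    }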


Thread overview: 3+ messages
2024-10-17 22:58 [PATCH 0/2] gpudev: annotate memory allocation Stephen Hemminger
2024-10-17 22:58 ` [PATCH 1/2] test-gpudev: avoid use-after-free and free-non-heap warnings Stephen Hemminger
2024-10-17 22:58 ` Stephen Hemminger [this message]
