DPDK patches and discussions
* [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg
@ 2020-07-15  3:59 Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  3:59 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, for the MLX5 PMD, once millions of flows are created, their
memory consumption is also very large. A system with limited memory must
then reserve most of its memory as hugepage memory in advance to serve
the flows, and other applications lose any chance to use that reserved
memory. Since most of the time the system does not hold many flows, the
reserved hugepage memory is largely wasted.

With the new sys_mem_en devarg set to true, the PMD allocates memory
from the system by default, using the newly added mlx5 memory management
functions. Only when the MLX5_MEM_RTE flag is set is the memory
allocated from rte; otherwise it is allocated from the system.

In this case, a system with limited memory no longer needs to reserve
most of its memory as hugepages. Reserving only the memory needed for
datapath objects, allocated with the explicit flag, is enough; the rest
is allocated from the system. A system with plenty of memory need not
care about the devarg, and memory always comes from rte hugepages.

One restriction: for a DPDK application with multiple PCI devices, if
the sys_mem_en devargs differ between the devices, sys_mem_en takes its
value from the first device's devargs only, and a warning message is
printed.

Suanming Mou (7):
  common/mlx5: add mlx5 memory management functions
  net/mlx5: add allocate memory from system devarg
  net/mlx5: convert control path memory to unified malloc
  common/mlx5: convert control path memory to unified malloc
  common/mlx5: convert data path objects to unified malloc
  net/mlx5: convert configuration objects to unified malloc
  net/mlx5: convert Rx/Tx queue objects to unified malloc

 doc/guides/nics/mlx5.rst                        |   7 +
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/linux/mlx5_glue.c           |  13 +-
 drivers/common/mlx5/linux/mlx5_nl.c             |   5 +-
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_common.c               |  10 +-
 drivers/common/mlx5/mlx5_common_mp.c            |   7 +-
 drivers/common/mlx5/mlx5_common_mr.c            |  31 ++--
 drivers/common/mlx5/mlx5_devx_cmds.c            |  75 +++++----
 drivers/common/mlx5/mlx5_malloc.c               | 214 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  93 ++++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   5 +
 drivers/net/mlx5/linux/mlx5_ethdev_os.c         |   8 +-
 drivers/net/mlx5/linux/mlx5_os.c                |  28 ++--
 drivers/net/mlx5/mlx5.c                         | 108 ++++++------
 drivers/net/mlx5/mlx5.h                         |   1 +
 drivers/net/mlx5/mlx5_ethdev.c                  |  15 +-
 drivers/net/mlx5/mlx5_flow.c                    |  45 ++---
 drivers/net/mlx5/mlx5_flow_dv.c                 |  46 ++---
 drivers/net/mlx5/mlx5_flow_meter.c              |  11 +-
 drivers/net/mlx5/mlx5_flow_verbs.c              |   8 +-
 drivers/net/mlx5/mlx5_mp.c                      |   3 +-
 drivers/net/mlx5/mlx5_rss.c                     |  13 +-
 drivers/net/mlx5/mlx5_rxq.c                     |  74 ++++----
 drivers/net/mlx5/mlx5_txq.c                     |  44 +++--
 drivers/net/mlx5/mlx5_utils.c                   |  60 ++++---
 drivers/net/mlx5/mlx5_utils.h                   |   2 +-
 drivers/net/mlx5/mlx5_vlan.c                    |   8 +-
 28 files changed, 660 insertions(+), 276 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

-- 
1.8.3.1



* [dpdk-dev] [PATCH 1/7] common/mlx5: add mlx5 memory management functions
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
@ 2020-07-15  3:59 ` Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  3:59 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Add the internal mlx5 memory management functions:

mlx5_malloc_mem_select();
mlx5_malloc();
mlx5_realloc();
mlx5_free();

These unified functions allow users to manage memory from either the
system or rte memory.

In this way, a system with limited memory, which cannot reserve lots of
rte hugepage memory in advance, can allocate the less critical control
path objects from system memory, based on the sys_mem_en configuration.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_malloc.c               | 214 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  93 ++++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   5 +
 5 files changed, 314 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

diff --git a/drivers/common/mlx5/Makefile b/drivers/common/mlx5/Makefile
index f6c762b..239d681 100644
--- a/drivers/common/mlx5/Makefile
+++ b/drivers/common/mlx5/Makefile
@@ -21,6 +21,7 @@ SRCS-y += linux/mlx5_nl.c
 SRCS-y += linux/mlx5_common_verbs.c
 SRCS-y += mlx5_common_mp.c
 SRCS-y += mlx5_common_mr.c
+SRCS-y += mlx5_malloc.c
 ifeq ($(CONFIG_RTE_IBVERBS_LINK_DLOPEN),y)
 INSTALL-y-lib += $(LIB_GLUE)
 endif
diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index ba43714..70e2c1c 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -13,6 +13,7 @@ sources += files(
 	'mlx5_common.c',
 	'mlx5_common_mp.c',
 	'mlx5_common_mr.c',
+	'mlx5_malloc.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c
new file mode 100644
index 0000000..990e428
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.c
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include <errno.h>
+#include <rte_malloc.h>
+#include <malloc.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+
+struct mlx5_sys_mem {
+	uint32_t init:1; /* Memory allocator initialized. */
+	uint32_t enable:1; /* System memory select. */
+	uint32_t reserve:30; /* Reserve. */
+	struct rte_memseg_list *last_msl;
+	/* last allocated rte memory memseg list. */
+};
+
+/* Initialized as not enabled by default. */
+static struct mlx5_sys_mem mlx5_sys_mem = {
+	.init = 0,
+	.enable = 0,
+};
+
+/**
+ * Check if the address belongs to memory seg list.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ * @param msl
+ *   Memory seg list.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_check_msl(void *addr, struct rte_memseg_list *msl)
+{
+	void *start, *end;
+
+	if (!msl)
+		return false;
+	start = msl->base_va;
+	end = RTE_PTR_ADD(start, msl->len);
+	if (addr >= start && addr < end)
+		return true;
+	return false;
+}
+
+/**
+ * Update the msl if memory belongs to new msl.
+ *
+ * @param addr
+ *   Memory address.
+ */
+static void
+mlx5_mem_update_msl(void *addr)
+{
+	/*
+	 * Update the cached msl if the new addr comes from a msl
+	 * different from the cached one.
+	 */
+	if (addr && !mlx5_mem_check_msl(addr, mlx5_sys_mem.last_msl))
+		mlx5_sys_mem.last_msl = rte_mem_virt2memseg_list(addr);
+}
+
+/**
+ * Check if the address belongs to rte memory.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_is_rte(void *addr)
+{
+	/*
+	 * Check if the last cached msl matches. If not, fall back to the
+	 * slow path to check whether the memory belongs to rte memory.
+	 */
+	if (mlx5_mem_check_msl(addr, mlx5_sys_mem.last_msl) ||
+	    rte_mem_virt2memseg_list(addr))
+		return true;
+	return false;
+}
+
+/**
+ * Allocate memory with alignment.
+ *
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param zero
+ *   Clear the allocated memory or not.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+static void *
+mlx5_alloc_align(size_t size, unsigned int align, unsigned int zero)
+{
+	void *buf;
+	buf = memalign(align, size);
+	if (!buf) {
+		DRV_LOG(ERR, "Couldn't allocate buf.\n");
+		return NULL;
+	}
+	if (zero)
+		memset(buf, 0, size);
+	return buf;
+}
+
+void *
+mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
+{
+	void *addr;
+	bool rte_mem;
+
+	/*
+	 * If neither system memory nor rte memory is required, allocate
+	 * memory according to mlx5_sys_mem.enable.
+	 */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	if (rte_mem) {
+		if (flags & MLX5_MEM_ZERO)
+			addr = rte_zmalloc_socket(NULL, size, align, socket);
+		else
+			addr = rte_malloc_socket(NULL, size, align, socket);
+		mlx5_mem_update_msl(addr);
+		return addr;
+	}
+	/* The memory will be allocated from system. */
+	if (align)
+		return mlx5_alloc_align(size, align, !!(flags & MLX5_MEM_ZERO));
+	else if (flags & MLX5_MEM_ZERO)
+		return calloc(1, size);
+	return malloc(size);
+}
+
+void *
+mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+	     int socket)
+{
+	void *new_addr;
+	bool rte_mem;
+
+	/* Allocate directly if old memory address is NULL. */
+	if (!addr)
+		return mlx5_malloc(flags, size, align, socket);
+	/* Get the memory type. */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	/* Check if old memory and to be allocated memory are the same type. */
+	if (rte_mem != mlx5_mem_is_rte(addr)) {
+		DRV_LOG(ERR, "Couldn't reallocate to different memory type.");
+		return NULL;
+	}
+	/* Allocate memory from rte memory. */
+	if (rte_mem) {
+		new_addr = rte_realloc_socket(addr, size, align, socket);
+		mlx5_mem_update_msl(new_addr);
+		return new_addr;
+	}
+	/* Align is not supported for system memory. */
+	if (align) {
+		DRV_LOG(ERR, "Couldn't reallocate with alignment");
+		return NULL;
+	}
+	return realloc(addr, size);
+}
+
+void
+mlx5_free(void *addr)
+{
+	if (!mlx5_mem_is_rte(addr))
+		free(addr);
+	else
+		rte_free(addr);
+}
+
+void
+mlx5_malloc_mem_select(uint32_t sys_mem_en)
+{
+	/*
+	 * The initialization should be called only once and all devices
+	 * should use the same memory type. Otherwise, when new device is
+	 * being attached with some different memory allocation configuration,
+	 * the memory may misbehave or a failure may be raised.
+	 */
+	if (!mlx5_sys_mem.init) {
+		if (sys_mem_en)
+			mlx5_sys_mem.enable = 1;
+		mlx5_sys_mem.init = 1;
+		DRV_LOG(INFO, "%s is selected.", sys_mem_en ? "SYS_MEM" : "RTE_MEM");
+	} else if (mlx5_sys_mem.enable != sys_mem_en) {
+		DRV_LOG(WARNING, "%s is already selected.",
+			mlx5_sys_mem.enable ? "SYS_MEM" : "RTE_MEM");
+	}
+}
diff --git a/drivers/common/mlx5/mlx5_malloc.h b/drivers/common/mlx5/mlx5_malloc.h
new file mode 100644
index 0000000..2feb37c
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef MLX5_MALLOC_H_
+#define MLX5_MALLOC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum mlx5_mem_flags {
+	MLX5_MEM_ANY = 0,
+	/* Memory will be allocated depending on sys_mem_en. */
+	MLX5_MEM_SYS = 1 << 0,
+	/* Memory should be allocated from system. */
+	MLX5_MEM_RTE = 1 << 1,
+	/* Memory should be allocated from rte hugepage. */
+	MLX5_MEM_ZERO = 1 << 2,
+	/* Memory should be cleared to zero. */
+};
+
+/**
+ * Select the PMD memory allocate preference.
+ *
+ * Once sys_mem_en is set, memory is allocated from the system by
+ * default; an explicit flag is required to request memory from rte
+ * hugepage memory.
+ *
+ * @param sys_mem_en
+ *   Use system memory or not.
+ */
+__rte_internal
+void mlx5_malloc_mem_select(uint32_t sys_mem_en);
+
+/**
+ * Memory allocate function.
+ *
+ * @param flags
+ *   The bits as enum mlx5_mem_flags defined.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket the memory should be allocated on.
+ *   Valid only when allocating the memory from rte hugepage memory.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+__rte_internal
+void *mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket);
+
+/**
+ * Memory reallocate function.
+ *
+ * @param addr
+ *   The memory to be reallocated.
+ * @param flags
+ *   The bits as enum mlx5_mem_flags defined.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket the memory should be allocated on.
+ *   Valid only when allocating the memory from rte hugepage memory.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+__rte_internal
+void *mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+		   int socket);
+
+/**
+ * Memory free function.
+ *
+ * @param addr
+ *   The memory address to be freed.
+ */
+__rte_internal
+void mlx5_free(void *addr);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..d66946d 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -81,5 +81,10 @@ INTERNAL {
 	mlx5_release_dbr;
 
 	mlx5_translate_port_name;
+
+	mlx5_malloc_mem_select;
+	mlx5_malloc;
+	mlx5_realloc;
+	mlx5_free;
 };
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/7] net/mlx5: add allocate memory from system devarg
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
@ 2020-07-15  3:59 ` Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  3:59 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, for the MLX5 PMD, once millions of flows are created, their
memory consumption is also very large. A system with limited memory must
then reserve most of its memory as hugepage memory in advance to serve
the flows, and other applications lose any chance to use that reserved
memory. Since most of the time the system does not hold many flows, the
reserved hugepage memory is largely wasted.

With the new sys_mem_en devarg set to true, the PMD allocates memory
from the system by default, using the newly added mlx5 memory management
functions. Only when the MLX5_MEM_RTE flag is set is the memory
allocated from rte; otherwise it is allocated from the system.

In this case, a system with limited memory no longer needs to reserve
most of its memory as hugepages. Reserving only the memory needed for
datapath objects, allocated with the explicit flag, is enough; the rest
is allocated from the system. A system with plenty of memory need not
care about the devarg, and memory always comes from rte hugepages.

One restriction: for a DPDK application with multiple PCI devices, if
the sys_mem_en devargs differ between the devices, sys_mem_en takes its
value from the first device's devargs only, and a warning message is
printed.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/nics/mlx5.rst         | 7 +++++++
 drivers/net/mlx5/linux/mlx5_os.c | 2 ++
 drivers/net/mlx5/mlx5.c          | 6 ++++++
 drivers/net/mlx5/mlx5.h          | 1 +
 4 files changed, 16 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4b6d8fb..d86b5c7 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -879,6 +879,13 @@ Driver options
 
   By default, the PMD will set this value to 0.
 
+- ``sys_mem_en`` parameter [int]
+
+  A nonzero value makes the PMD memory management functions allocate
+  memory from the system by default, unless the rte memory flag is
+  explicitly set.
+
+  By default, the PMD will set this value to 0.
+
 .. _mlx5_firmware_config:
 
 Firmware configuration
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2dc57b2..d5acef0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -43,6 +43,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -495,6 +496,7 @@
 			strerror(rte_errno));
 		goto error;
 	}
+	mlx5_malloc_mem_select(config.sys_mem_en);
 	sh = mlx5_alloc_shared_dev_ctx(spawn, &config);
 	if (!sh)
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0c654ed..9b17266 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -167,6 +167,9 @@
 /* Flow memory reclaim mode. */
 #define MLX5_RECLAIM_MEM "reclaim_mem_mode"
 
+/* The default memory allocator used by the PMD. */
+#define MLX5_SYS_MEM_EN "sys_mem_en"
+
 static const char *MZ_MLX5_PMD_SHARED_DATA = "mlx5_pmd_shared_data";
 
 /* Shared memory between primary and secondary processes. */
@@ -1374,6 +1377,8 @@ struct mlx5_dev_ctx_shared *
 			return -rte_errno;
 		}
 		config->reclaim_mode = tmp;
+	} else if (strcmp(MLX5_SYS_MEM_EN, key) == 0) {
+		config->sys_mem_en = !!tmp;
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -1430,6 +1435,7 @@ struct mlx5_dev_ctx_shared *
 		MLX5_CLASS_ARG_NAME,
 		MLX5_HP_BUF_SIZE,
 		MLX5_RECLAIM_MEM,
+		MLX5_SYS_MEM_EN,
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 46e66eb..967f5d8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -216,6 +216,7 @@ struct mlx5_dev_config {
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	unsigned int reclaim_mode:2; /* Memory reclaim mode. */
+	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int stride_num_n; /* Number of strides. */
-- 
1.8.3.1



* [dpdk-dev] [PATCH 3/7] net/mlx5: convert control path memory to unified malloc
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
@ 2020-07-15  3:59 ` Suanming Mou
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 4/7] common/mlx5: " Suanming Mou
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  3:59 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit converts the control path memory allocations to the
unified malloc function.

The objects changed are:

1. hlist;
2. rss key;
3. vlan vmwa;
4. indexed pool;
5. fdir objects;
6. meter profile;
7. flow counter pool;
8. hrxq and indirect table;
9. flow object cache resources;
10. temporary resources in flow create;

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c            | 88 ++++++++++++++++++++------------------
 drivers/net/mlx5/mlx5_ethdev.c     | 15 ++++---
 drivers/net/mlx5/mlx5_flow.c       | 45 +++++++++++--------
 drivers/net/mlx5/mlx5_flow_dv.c    | 46 +++++++++++---------
 drivers/net/mlx5/mlx5_flow_meter.c | 11 ++---
 drivers/net/mlx5/mlx5_flow_verbs.c |  8 ++--
 drivers/net/mlx5/mlx5_mp.c         |  3 +-
 drivers/net/mlx5/mlx5_rss.c        | 13 ++++--
 drivers/net/mlx5/mlx5_rxq.c        | 37 +++++++++-------
 drivers/net/mlx5/mlx5_utils.c      | 60 +++++++++++++++-----------
 drivers/net/mlx5/mlx5_utils.h      |  2 +-
 drivers/net/mlx5/mlx5_vlan.c       |  8 ++--
 12 files changed, 190 insertions(+), 146 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9b17266..ba86c68 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -40,6 +40,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_os.h>
 #include <mlx5_common_mp.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -194,8 +195,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_encap_decap_ipool",
 	},
 	{
@@ -205,8 +206,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_push_vlan_ipool",
 	},
 	{
@@ -216,8 +217,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_tag_ipool",
 	},
 	{
@@ -227,8 +228,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_port_id_ipool",
 	},
 	{
@@ -238,8 +239,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_jump_ipool",
 	},
 #endif
@@ -250,8 +251,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_meter_ipool",
 	},
 	{
@@ -261,8 +262,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_mcp_ipool",
 	},
 	{
@@ -272,8 +273,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_hrxq_ipool",
 	},
 	{
@@ -287,8 +288,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
 	},
 	{
@@ -296,8 +297,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.trunk_size = 4096,
 		.need_lock = 1,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "rte_flow_ipool",
 	},
 };
@@ -323,15 +324,16 @@ struct mlx5_flow_id_pool *
 	struct mlx5_flow_id_pool *pool;
 	void *mem;
 
-	pool = rte_zmalloc("id pool allocation", sizeof(*pool),
-			   RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!pool) {
 		DRV_LOG(ERR, "can't allocate id pool");
 		rte_errno  = ENOMEM;
 		return NULL;
 	}
-	mem = rte_zmalloc("", MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
-			  RTE_CACHE_LINE_SIZE);
+	mem = mlx5_malloc(MLX5_MEM_ZERO,
+			  MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
+			  RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!mem) {
 		DRV_LOG(ERR, "can't allocate mem for id pool");
 		rte_errno  = ENOMEM;
@@ -344,7 +346,7 @@ struct mlx5_flow_id_pool *
 	pool->max_id = max_id;
 	return pool;
 error:
-	rte_free(pool);
+	mlx5_free(pool);
 	return NULL;
 }
 
@@ -357,8 +359,8 @@ struct mlx5_flow_id_pool *
 void
 mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool)
 {
-	rte_free(pool->free_arr);
-	rte_free(pool);
+	mlx5_free(pool->free_arr);
+	mlx5_free(pool);
 }
 
 /**
@@ -410,14 +412,15 @@ struct mlx5_flow_id_pool *
 		size = pool->curr - pool->free_arr;
 		size2 = size * MLX5_ID_GENERATION_ARRAY_FACTOR;
 		MLX5_ASSERT(size2 > size);
-		mem = rte_malloc("", size2 * sizeof(uint32_t), 0);
+		mem = mlx5_malloc(0, size2 * sizeof(uint32_t), 0,
+				  SOCKET_ID_ANY);
 		if (!mem) {
 			DRV_LOG(ERR, "can't allocate mem for id pool");
 			rte_errno  = ENOMEM;
 			return -rte_errno;
 		}
 		memcpy(mem, pool->free_arr, size * sizeof(uint32_t));
-		rte_free(pool->free_arr);
+		mlx5_free(pool->free_arr);
 		pool->free_arr = mem;
 		pool->curr = pool->free_arr + size;
 		pool->last = pool->free_arr + size2;
@@ -486,7 +489,7 @@ struct mlx5_flow_id_pool *
 	LIST_REMOVE(mng, next);
 	claim_zero(mlx5_devx_cmd_destroy(mng->dm));
 	claim_zero(mlx5_glue->devx_umem_dereg(mng->umem));
-	rte_free(mem);
+	mlx5_free(mem);
 }
 
 /**
@@ -534,10 +537,10 @@ struct mlx5_flow_id_pool *
 						    (pool, j)->dcs));
 			}
 			TAILQ_REMOVE(&sh->cmng.ccont[i].pool_list, pool, next);
-			rte_free(pool);
+			mlx5_free(pool);
 			pool = TAILQ_FIRST(&sh->cmng.ccont[i].pool_list);
 		}
-		rte_free(sh->cmng.ccont[i].pools);
+		mlx5_free(sh->cmng.ccont[i].pools);
 	}
 	mng = LIST_FIRST(&sh->cmng.mem_mngs);
 	while (mng) {
@@ -860,7 +863,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 1;
 	pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64);
@@ -869,7 +872,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 0;
 	table_key.domain = 1;
@@ -879,7 +882,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	mlx5_hlist_destroy(sh->flow_tbls, NULL, NULL);
 }
@@ -923,8 +926,9 @@ struct mlx5_dev_ctx_shared *
 			.direction = 0,
 		}
 	};
-	struct mlx5_flow_tbl_data_entry *tbl_data = rte_zmalloc(NULL,
-							  sizeof(*tbl_data), 0);
+	struct mlx5_flow_tbl_data_entry *tbl_data = mlx5_malloc(MLX5_MEM_ZERO,
+							  sizeof(*tbl_data), 0,
+							  SOCKET_ID_ANY);
 
 	if (!tbl_data) {
 		err = ENOMEM;
@@ -937,7 +941,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_init(&tbl_data->tbl.refcnt);
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -950,7 +955,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 0;
 	table_key.domain = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -1181,9 +1187,9 @@ struct mlx5_dev_ctx_shared *
 	mlx5_mprq_free_mp(dev);
 	mlx5_os_free_shared_dr(priv);
 	if (priv->rss_conf.rss_key != NULL)
-		rte_free(priv->rss_conf.rss_key);
+		mlx5_free(priv->rss_conf.rss_key);
 	if (priv->reta_idx != NULL)
-		rte_free(priv->reta_idx);
+		mlx5_free(priv->reta_idx);
 	if (priv->config.vf)
 		mlx5_nl_mac_addr_flush(priv->nl_socket_route, mlx5_ifindex(dev),
 				       dev->data->mac_addrs,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 6b4efcd..cefb450 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -21,6 +21,8 @@
 #include <rte_rwlock.h>
 #include <rte_cycles.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
 
@@ -75,8 +77,8 @@
 		return -rte_errno;
 	}
 	priv->rss_conf.rss_key =
-		rte_realloc(priv->rss_conf.rss_key,
-			    MLX5_RSS_HASH_KEY_LEN, 0);
+		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
+			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -142,7 +144,8 @@
 
 	if (priv->skip_default_rss_reta)
 		return ret;
-	rss_queue_arr = rte_malloc("", rxqs_n * sizeof(unsigned int), 0);
+	rss_queue_arr = mlx5_malloc(0, rxqs_n * sizeof(unsigned int), 0,
+				    SOCKET_ID_ANY);
 	if (!rss_queue_arr) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS queue list (%u)",
 			dev->data->port_id, rxqs_n);
@@ -163,7 +166,7 @@
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rss_queue_n);
 		rte_errno = EINVAL;
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return -rte_errno;
 	}
 	DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
@@ -179,7 +182,7 @@
 				rss_queue_n));
 	ret = mlx5_rss_reta_index_resize(dev, reta_idx_n);
 	if (ret) {
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return ret;
 	}
 	/*
@@ -192,7 +195,7 @@
 		if (++j == rss_queue_n)
 			j = 0;
 	}
-	rte_free(rss_queue_arr);
+	mlx5_free(rss_queue_arr);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..cce6ce5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -32,6 +32,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -4010,7 +4011,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct rte_flow_action_set_tag) +
 			   sizeof(struct rte_flow_action_jump);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4046,7 +4048,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		 */
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct mlx5_flow_action_copy_mreg);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4140,7 +4143,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 	 * by flow_drv_destroy.
 	 */
 	flow_qrss_free_id(dev, qrss_id);
-	rte_free(ext_actions);
+	mlx5_free(ext_actions);
 	return ret;
 }
 
@@ -4205,7 +4208,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 #define METER_SUFFIX_ITEM 4
 		item_size = sizeof(struct rte_flow_item) * METER_SUFFIX_ITEM +
 			    sizeof(struct mlx5_rte_flow_item_tag) * 2;
-		sfx_actions = rte_zmalloc(__func__, (act_size + item_size), 0);
+		sfx_actions = mlx5_malloc(MLX5_MEM_ZERO, (act_size + item_size),
+					  0, SOCKET_ID_ANY);
 		if (!sfx_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4244,7 +4248,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 					 external, flow_idx, error);
 exit:
 	if (sfx_actions)
-		rte_free(sfx_actions);
+		mlx5_free(sfx_actions);
 	return ret;
 }
 
@@ -4658,8 +4662,8 @@ struct rte_flow *
 		}
 		if (priv_fdir_flow) {
 			LIST_REMOVE(priv_fdir_flow, next);
-			rte_free(priv_fdir_flow->fdir);
-			rte_free(priv_fdir_flow);
+			mlx5_free(priv_fdir_flow->fdir);
+			mlx5_free(priv_fdir_flow);
 		}
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx);
@@ -4799,11 +4803,12 @@ struct rte_flow *
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (!priv->inter_flows) {
-		priv->inter_flows = rte_calloc(__func__, 1,
+		priv->inter_flows = mlx5_malloc(MLX5_MEM_ZERO,
 				    MLX5_NUM_MAX_DEV_FLOWS *
 				    sizeof(struct mlx5_flow) +
 				    (sizeof(struct mlx5_flow_rss_desc) +
-				    sizeof(uint16_t) * UINT16_MAX) * 2, 0);
+				    sizeof(uint16_t) * UINT16_MAX) * 2, 0,
+				    SOCKET_ID_ANY);
 		if (!priv->inter_flows) {
 			DRV_LOG(ERR, "can't allocate intermediate memory.");
 			return;
@@ -4827,7 +4832,7 @@ struct rte_flow *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	rte_free(priv->inter_flows);
+	mlx5_free(priv->inter_flows);
 	priv->inter_flows = NULL;
 }
 
@@ -5467,7 +5472,8 @@ struct rte_flow *
 	uint32_t flow_idx;
 	int ret;
 
-	fdir_flow = rte_zmalloc(__func__, sizeof(*fdir_flow), 0);
+	fdir_flow = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*fdir_flow), 0,
+				SOCKET_ID_ANY);
 	if (!fdir_flow) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -5480,8 +5486,9 @@ struct rte_flow *
 		rte_errno = EEXIST;
 		goto error;
 	}
-	priv_fdir_flow = rte_zmalloc(__func__, sizeof(struct mlx5_fdir_flow),
-				     0);
+	priv_fdir_flow = mlx5_malloc(MLX5_MEM_ZERO,
+				     sizeof(struct mlx5_fdir_flow),
+				     0, SOCKET_ID_ANY);
 	if (!priv_fdir_flow) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -5500,8 +5507,8 @@ struct rte_flow *
 		dev->data->port_id, (void *)flow);
 	return 0;
 error:
-	rte_free(priv_fdir_flow);
-	rte_free(fdir_flow);
+	mlx5_free(priv_fdir_flow);
+	mlx5_free(fdir_flow);
 	return -rte_errno;
 }
 
@@ -5541,8 +5548,8 @@ struct rte_flow *
 	LIST_REMOVE(priv_fdir_flow, next);
 	flow_idx = priv_fdir_flow->rix_flow;
 	flow_list_destroy(dev, &priv->flows, flow_idx);
-	rte_free(priv_fdir_flow->fdir);
-	rte_free(priv_fdir_flow);
+	mlx5_free(priv_fdir_flow->fdir);
+	mlx5_free(priv_fdir_flow);
 	DRV_LOG(DEBUG, "port %u deleted FDIR flow %u",
 		dev->data->port_id, flow_idx);
 	return 0;
@@ -5587,8 +5594,8 @@ struct rte_flow *
 		priv_fdir_flow = LIST_FIRST(&priv->fdir_flows);
 		LIST_REMOVE(priv_fdir_flow, next);
 		flow_list_destroy(dev, &priv->flows, priv_fdir_flow->rix_flow);
-		rte_free(priv_fdir_flow->fdir);
-		rte_free(priv_fdir_flow);
+		mlx5_free(priv_fdir_flow->fdir);
+		mlx5_free(priv_fdir_flow);
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..7c121d6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -32,6 +32,7 @@
 
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -2615,7 +2616,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, domain, cache_resource,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2772,7 +2773,7 @@ struct field_modify_info modify_tcp[] = {
 				(priv->sh->fdb_domain, resource->port_id,
 				 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2851,7 +2852,7 @@ struct field_modify_info modify_tcp[] = {
 					(domain, resource->vlan_tag,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4024,8 +4025,9 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new modify-header resource. */
-	cache_resource = rte_calloc(__func__, 1,
-				    sizeof(*cache_resource) + actions_len, 0);
+	cache_resource = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(*cache_resource) + actions_len, 0,
+				    SOCKET_ID_ANY);
 	if (!cache_resource)
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -4036,7 +4038,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, ns, cache_resource,
 					 actions_len, &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4175,7 +4177,8 @@ struct field_modify_info modify_tcp[] = {
 			MLX5_COUNTERS_PER_POOL +
 			sizeof(struct mlx5_counter_stats_raw)) * raws_n +
 			sizeof(struct mlx5_counter_stats_mem_mng);
-	uint8_t *mem = rte_calloc(__func__, 1, size, sysconf(_SC_PAGESIZE));
+	uint8_t *mem = mlx5_malloc(MLX5_MEM_ZERO, size, sysconf(_SC_PAGESIZE),
+				  SOCKET_ID_ANY);
 	int i;
 
 	if (!mem) {
@@ -4188,7 +4191,7 @@ struct field_modify_info modify_tcp[] = {
 						 IBV_ACCESS_LOCAL_WRITE);
 	if (!mem_mng->umem) {
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mkey_attr.addr = (uintptr_t)mem;
@@ -4207,7 +4210,7 @@ struct field_modify_info modify_tcp[] = {
 	if (!mem_mng->dm) {
 		mlx5_glue->devx_umem_dereg(mem_mng->umem);
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mem_mng->raws = (struct mlx5_counter_stats_raw *)(mem + size);
@@ -4244,7 +4247,7 @@ struct field_modify_info modify_tcp[] = {
 	void *old_pools = cont->pools;
 	uint32_t resize = cont->n + MLX5_CNT_CONTAINER_RESIZE;
 	uint32_t mem_size = sizeof(struct mlx5_flow_counter_pool *) * resize;
-	void *pools = rte_calloc(__func__, 1, mem_size, 0);
+	void *pools = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY);
 
 	if (!pools) {
 		rte_errno = ENOMEM;
@@ -4263,7 +4266,7 @@ struct field_modify_info modify_tcp[] = {
 		mem_mng = flow_dv_create_counter_stat_mem_mng(dev,
 			  MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES);
 		if (!mem_mng) {
-			rte_free(pools);
+			mlx5_free(pools);
 			return -ENOMEM;
 		}
 		for (i = 0; i < MLX5_MAX_PENDING_QUERIES; ++i)
@@ -4278,7 +4281,7 @@ struct field_modify_info modify_tcp[] = {
 	cont->pools = pools;
 	rte_spinlock_unlock(&cont->resize_sl);
 	if (old_pools)
-		rte_free(old_pools);
+		mlx5_free(old_pools);
 	return 0;
 }
 
@@ -4367,7 +4370,7 @@ struct field_modify_info modify_tcp[] = {
 	size += MLX5_COUNTERS_PER_POOL * CNT_SIZE;
 	size += (batch ? 0 : MLX5_COUNTERS_PER_POOL * CNTEXT_SIZE);
 	size += (!age ? 0 : MLX5_COUNTERS_PER_POOL * AGE_SIZE);
-	pool = rte_calloc(__func__, 1, size, 0);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -7467,7 +7470,8 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new matcher. */
-	cache_matcher = rte_calloc(__func__, 1, sizeof(*cache_matcher), 0);
+	cache_matcher = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*cache_matcher), 0,
+				    SOCKET_ID_ANY);
 	if (!cache_matcher) {
 		flow_dv_tbl_resource_release(dev, tbl);
 		return rte_flow_error_set(error, ENOMEM,
@@ -7483,7 +7487,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &cache_matcher->matcher_object);
 	if (ret) {
-		rte_free(cache_matcher);
+		mlx5_free(cache_matcher);
 #ifdef HAVE_MLX5DV_DR
 		flow_dv_tbl_resource_release(dev, tbl);
 #endif
@@ -7558,7 +7562,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_action_tag(tag_be24,
 						  &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -7567,7 +7571,7 @@ struct field_modify_info modify_tcp[] = {
 	rte_atomic32_inc(&cache_resource->refcnt);
 	if (mlx5_hlist_insert(sh->tag_table, &cache_resource->entry)) {
 		mlx5_flow_os_destroy_flow_action(cache_resource->action);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, EEXIST,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot insert tag");
@@ -8769,7 +8773,7 @@ struct field_modify_info modify_tcp[] = {
 		LIST_REMOVE(matcher, next);
 		/* table ref-- in release interface. */
 		flow_dv_tbl_resource_release(dev, matcher->tbl);
-		rte_free(matcher);
+		mlx5_free(matcher);
 		DRV_LOG(DEBUG, "port %u matcher %p: removed",
 			dev->data->port_id, (void *)matcher);
 		return 0;
@@ -8911,7 +8915,7 @@ struct field_modify_info modify_tcp[] = {
 		claim_zero(mlx5_flow_os_destroy_flow_action
 						(cache_resource->action));
 		LIST_REMOVE(cache_resource, next);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		DRV_LOG(DEBUG, "modify-header resource %p: removed",
 			(void *)cache_resource);
 		return 0;
@@ -9284,7 +9288,7 @@ struct field_modify_info modify_tcp[] = {
 		flow_dv_tbl_resource_release(dev, mtd->transfer.sfx_tbl);
 	if (mtd->drop_actn)
 		claim_zero(mlx5_flow_os_destroy_flow_action(mtd->drop_actn));
-	rte_free(mtd);
+	mlx5_free(mtd);
 	return 0;
 }
 
@@ -9417,7 +9421,7 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
-	mtb = rte_calloc(__func__, 1, sizeof(*mtb), 0);
+	mtb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mtb), 0, SOCKET_ID_ANY);
 	if (!mtb) {
 		DRV_LOG(ERR, "Failed to allocate memory for meter.");
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 86c334b..bf34687 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -10,6 +10,7 @@
 #include <rte_mtr_driver.h>
 
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
@@ -356,8 +357,8 @@
 	if (ret)
 		return ret;
 	/* Meter profile memory allocation. */
-	fmp = rte_calloc(__func__, 1, sizeof(struct mlx5_flow_meter_profile),
-			 RTE_CACHE_LINE_SIZE);
+	fmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flow_meter_profile),
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (fmp == NULL)
 		return -rte_mtr_error_set(error, ENOMEM,
 					  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
@@ -374,7 +375,7 @@
 	TAILQ_INSERT_TAIL(fmps, fmp, next);
 	return 0;
 error:
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return ret;
 }
 
@@ -417,7 +418,7 @@
 					  NULL, "Meter profile is in use.");
 	/* Remove from list. */
 	TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return 0;
 }
 
@@ -1286,7 +1287,7 @@ struct mlx5_flow_meter *
 		MLX5_ASSERT(!fmp->ref_cnt);
 		/* Remove from list. */
 		TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-		rte_free(fmp);
+		mlx5_free(fmp);
 	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 781c97f..72106b4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -28,6 +28,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -188,14 +189,15 @@
 			/* Resize the container pool array. */
 			size = sizeof(struct mlx5_flow_counter_pool *) *
 				     (n_valid + MLX5_CNT_CONTAINER_RESIZE);
-			pools = rte_zmalloc(__func__, size, 0);
+			pools = mlx5_malloc(MLX5_MEM_ZERO, size, 0,
+					    SOCKET_ID_ANY);
 			if (!pools)
 				return 0;
 			if (n_valid) {
 				memcpy(pools, cont->pools,
 				       sizeof(struct mlx5_flow_counter_pool *) *
 				       n_valid);
-				rte_free(cont->pools);
+				mlx5_free(cont->pools);
 			}
 			cont->pools = pools;
 			cont->n += MLX5_CNT_CONTAINER_RESIZE;
@@ -203,7 +205,7 @@
 		/* Allocate memory for new pool*/
 		size = sizeof(*pool) + (sizeof(*cnt_ext) + sizeof(*cnt)) *
 		       MLX5_COUNTERS_PER_POOL;
-		pool = rte_calloc(__func__, 1, size, 0);
+		pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 		if (!pool)
 			return 0;
 		pool->type |= CNT_POOL_TYPE_EXT;
diff --git a/drivers/net/mlx5/mlx5_mp.c b/drivers/net/mlx5/mlx5_mp.c
index a2b5c40..cf6e33b 100644
--- a/drivers/net/mlx5/mlx5_mp.c
+++ b/drivers/net/mlx5/mlx5_mp.c
@@ -12,6 +12,7 @@
 
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -181,7 +182,7 @@
 		}
 	}
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index 653b069..a49edbc 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -21,6 +21,8 @@
 #include <rte_malloc.h>
 #include <rte_ethdev_driver.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_defs.h"
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -57,8 +59,10 @@
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
-		priv->rss_conf.rss_key = rte_realloc(priv->rss_conf.rss_key,
-						     rss_conf->rss_key_len, 0);
+		priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+						      MLX5_MEM_RTE,
+						      rss_conf->rss_key_len,
+						      0, SOCKET_ID_ANY);
 		if (!priv->rss_conf.rss_key) {
 			rte_errno = ENOMEM;
 			return -rte_errno;
@@ -131,8 +135,9 @@
 	if (priv->reta_idx_n == reta_size)
 		return 0;
 
-	mem = rte_realloc(priv->reta_idx,
-			  reta_size * sizeof((*priv->reta_idx)[0]), 0);
+	mem = mlx5_realloc(priv->reta_idx, MLX5_MEM_RTE,
+			   reta_size * sizeof((*priv->reta_idx)[0]), 0,
+			   SOCKET_ID_ANY);
 	if (!mem) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b436f06..c8e3a82 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -31,6 +31,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -734,7 +735,9 @@
 	if (!dev->data->dev_conf.intr_conf.rxq)
 		return 0;
 	mlx5_rx_intr_vec_disable(dev);
-	intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
+	intr_handle->intr_vec = mlx5_malloc(0,
+				n * sizeof(intr_handle->intr_vec[0]),
+				0, SOCKET_ID_ANY);
 	if (intr_handle->intr_vec == NULL) {
 		DRV_LOG(ERR,
 			"port %u failed to allocate memory for interrupt"
@@ -831,7 +834,7 @@
 free:
 	rte_intr_free_epoll_fd(intr_handle);
 	if (intr_handle->intr_vec)
-		free(intr_handle->intr_vec);
+		mlx5_free(intr_handle->intr_vec);
 	intr_handle->nb_efd = 0;
 	intr_handle->intr_vec = NULL;
 }
@@ -2187,8 +2190,8 @@ enum mlx5_rxq_type
 	struct mlx5_ind_table_obj *ind_tbl;
 	unsigned int i = 0, j = 0, k = 0;
 
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) +
-			     queues_n * sizeof(uint16_t), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+			      queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2231,8 +2234,9 @@ enum mlx5_rxq_type
 			      log2above(queues_n) :
 			      log2above(priv->config.ind_table_max_size));
 
-		rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
-				      rqt_n * sizeof(uint32_t), 0);
+		rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
+				      rqt_n * sizeof(uint32_t), 0,
+				      SOCKET_ID_ANY);
 		if (!rqt_attr) {
 			DRV_LOG(ERR, "port %u cannot allocate RQT resources",
 				dev->data->port_id);
@@ -2254,7 +2258,7 @@ enum mlx5_rxq_type
 			rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
 		ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
 							rqt_attr);
-		rte_free(rqt_attr);
+		mlx5_free(rqt_attr);
 		if (!ind_tbl->rqt) {
 			DRV_LOG(ERR, "port %u cannot create DevX RQT",
 				dev->data->port_id);
@@ -2269,7 +2273,7 @@ enum mlx5_rxq_type
 error:
 	for (j = 0; j < i; j++)
 		mlx5_rxq_release(dev, ind_tbl->queues[j]);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	DEBUG("port %u cannot create indirection table", dev->data->port_id);
 	return NULL;
 }
@@ -2339,7 +2343,7 @@ enum mlx5_rxq_type
 		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
 	if (!rte_atomic32_read(&ind_tbl->refcnt)) {
 		LIST_REMOVE(ind_tbl, next);
-		rte_free(ind_tbl);
+		mlx5_free(ind_tbl);
 		return 0;
 	}
 	return 1;
@@ -2761,7 +2765,7 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	rxq = rte_calloc(__func__, 1, sizeof(*rxq), 0);
+	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
 	if (!rxq) {
 		DEBUG("port %u cannot allocate drop Rx queue memory",
 		      dev->data->port_id);
@@ -2799,7 +2803,7 @@ enum mlx5_rxq_type
 		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
 	if (rxq->cq)
 		claim_zero(mlx5_glue->destroy_cq(rxq->cq));
-	rte_free(rxq);
+	mlx5_free(rxq);
 	priv->drop_queue.rxq = NULL;
 }
 
@@ -2837,7 +2841,8 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
+			      SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -2863,7 +2868,7 @@ enum mlx5_rxq_type
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
 	mlx5_rxq_obj_drop_release(dev);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	priv->drop_queue.hrxq->ind_table = NULL;
 }
 
@@ -2888,7 +2893,7 @@ struct mlx5_hrxq *
 		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
 		return priv->drop_queue.hrxq;
 	}
-	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq), 0);
+	hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
 	if (!hrxq) {
 		DRV_LOG(WARNING,
 			"port %u cannot allocate memory for drop queue",
@@ -2945,7 +2950,7 @@ struct mlx5_hrxq *
 		mlx5_ind_table_obj_drop_release(dev);
 	if (hrxq) {
 		priv->drop_queue.hrxq = NULL;
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 	}
 	return NULL;
 }
@@ -2968,7 +2973,7 @@ struct mlx5_hrxq *
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
 		mlx5_ind_table_obj_drop_release(dev);
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 		priv->drop_queue.hrxq = NULL;
 	}
 }
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index bf67192..25e8b27 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -5,6 +5,8 @@
 #include <rte_malloc.h>
 #include <rte_hash_crc.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_utils.h"
 
 struct mlx5_hlist *
@@ -27,7 +29,8 @@ struct mlx5_hlist *
 	alloc_size = sizeof(struct mlx5_hlist) +
 		     sizeof(struct mlx5_hlist_head) * act_size;
 	/* Using zmalloc, then no need to initialize the heads. */
-	h = rte_zmalloc(name, alloc_size, RTE_CACHE_LINE_SIZE);
+	h = mlx5_malloc(MLX5_MEM_ZERO, alloc_size, RTE_CACHE_LINE_SIZE,
+			SOCKET_ID_ANY);
 	if (!h) {
 		DRV_LOG(ERR, "No memory for hash list %s creation",
 			name ? name : "None");
@@ -112,10 +115,10 @@ struct mlx5_hlist_entry *
 			if (cb)
 				cb(entry, ctx);
 			else
-				rte_free(entry);
+				mlx5_free(entry);
 		}
 	}
-	rte_free(h);
+	mlx5_free(h);
 }
 
 static inline void
@@ -193,16 +196,17 @@ struct mlx5_indexed_pool *
 	    (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) ||
 	    ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32))))
 		return NULL;
-	pool = rte_zmalloc("mlx5_ipool", sizeof(*pool) + cfg->grow_trunk *
-				sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool) + cfg->grow_trunk *
+			   sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE,
+			   SOCKET_ID_ANY);
 	if (!pool)
 		return NULL;
 	pool->cfg = *cfg;
 	if (!pool->cfg.trunk_size)
 		pool->cfg.trunk_size = MLX5_IPOOL_DEFAULT_TRUNK_SIZE;
 	if (!cfg->malloc && !cfg->free) {
-		pool->cfg.malloc = rte_malloc_socket;
-		pool->cfg.free = rte_free;
+		pool->cfg.malloc = mlx5_malloc;
+		pool->cfg.free = mlx5_free;
 	}
 	pool->free_list = TRUNK_INVALID;
 	if (pool->cfg.need_lock)
@@ -237,10 +241,9 @@ struct mlx5_indexed_pool *
 		int n_grow = pool->n_trunk_valid ? pool->n_trunk :
 			     RTE_CACHE_LINE_SIZE / sizeof(void *);
 
-		p = pool->cfg.malloc(pool->cfg.type,
-				 (pool->n_trunk_valid + n_grow) *
-				 sizeof(struct mlx5_indexed_trunk *),
-				 RTE_CACHE_LINE_SIZE, rte_socket_id());
+		p = pool->cfg.malloc(0, (pool->n_trunk_valid + n_grow) *
+				     sizeof(struct mlx5_indexed_trunk *),
+				     RTE_CACHE_LINE_SIZE, rte_socket_id());
 		if (!p)
 			return -ENOMEM;
 		if (pool->trunks)
@@ -268,7 +271,7 @@ struct mlx5_indexed_pool *
 	/* rte_bitmap requires memory cacheline aligned. */
 	trunk_size += RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size);
 	trunk_size += bmp_size;
-	trunk = pool->cfg.malloc(pool->cfg.type, trunk_size,
+	trunk = pool->cfg.malloc(0, trunk_size,
 				 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!trunk)
 		return -ENOMEM;
@@ -464,7 +467,7 @@ struct mlx5_indexed_pool *
 	if (!pool->trunks)
 		pool->cfg.free(pool->trunks);
 	mlx5_ipool_unlock(pool);
-	rte_free(pool);
+	mlx5_free(pool);
 	return 0;
 }
 
@@ -493,15 +496,16 @@ struct mlx5_l3t_tbl *
 		.grow_shift = 1,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 	};
 
 	if (type >= MLX5_L3T_TYPE_MAX) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
-	tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_tbl), 1);
+	tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_l3t_tbl), 1,
+			  SOCKET_ID_ANY);
 	if (!tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -532,7 +536,7 @@ struct mlx5_l3t_tbl *
 	tbl->eip = mlx5_ipool_create(&l3t_ip_cfg);
 	if (!tbl->eip) {
 		rte_errno = ENOMEM;
-		rte_free(tbl);
+		mlx5_free(tbl);
 		tbl = NULL;
 	}
 	return tbl;
@@ -565,17 +569,17 @@ struct mlx5_l3t_tbl *
 					break;
 			}
 			MLX5_ASSERT(!m_tbl->ref_cnt);
-			rte_free(g_tbl->tbl[i]);
+			mlx5_free(g_tbl->tbl[i]);
 			g_tbl->tbl[i] = 0;
 			if (!(--g_tbl->ref_cnt))
 				break;
 		}
 		MLX5_ASSERT(!g_tbl->ref_cnt);
-		rte_free(tbl->tbl);
+		mlx5_free(tbl->tbl);
 		tbl->tbl = 0;
 	}
 	mlx5_ipool_destroy(tbl->eip);
-	rte_free(tbl);
+	mlx5_free(tbl);
 }
 
 uint32_t
@@ -667,11 +671,11 @@ struct mlx5_l3t_tbl *
 		m_tbl->tbl[(idx >> MLX5_L3T_MT_OFFSET) & MLX5_L3T_MT_MASK] =
 									NULL;
 		if (!(--m_tbl->ref_cnt)) {
-			rte_free(m_tbl);
+			mlx5_free(m_tbl);
 			g_tbl->tbl
 			[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK] = NULL;
 			if (!(--g_tbl->ref_cnt)) {
-				rte_free(g_tbl);
+				mlx5_free(g_tbl);
 				tbl->tbl = 0;
 			}
 		}
@@ -693,8 +697,10 @@ struct mlx5_l3t_tbl *
 	/* Check the global table, create it if empty. */
 	g_tbl = tbl->tbl;
 	if (!g_tbl) {
-		g_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1);
+		g_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!g_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
@@ -707,8 +713,10 @@ struct mlx5_l3t_tbl *
 	 */
 	m_tbl = g_tbl->tbl[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK];
 	if (!m_tbl) {
-		m_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1);
+		m_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!m_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index c4b9063..562b9b1 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -193,7 +193,7 @@ struct mlx5_indexed_pool_config {
 	/* Lock is needed for multiple thread usage. */
 	uint32_t release_mem_en:1; /* Rlease trunk when it is free. */
 	const char *type; /* Memory allocate type name. */
-	void *(*malloc)(const char *type, size_t size, unsigned int align,
+	void *(*malloc)(uint32_t flags, size_t size, unsigned int align,
 			int socket);
 	/* User defined memory allocator. */
 	void (*free)(void *addr); /* User defined memory release. */
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index f65e416..4308b71 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -33,6 +33,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_nl.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
@@ -288,7 +289,8 @@ struct mlx5_nl_vlan_vmwa_context *
 		 */
 		return NULL;
 	}
-	vmwa = rte_zmalloc(__func__, sizeof(*vmwa), sizeof(uint32_t));
+	vmwa = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*vmwa), sizeof(uint32_t),
+			   SOCKET_ID_ANY);
 	if (!vmwa) {
 		DRV_LOG(WARNING,
 			"Can not allocate memory"
@@ -300,7 +302,7 @@ struct mlx5_nl_vlan_vmwa_context *
 		DRV_LOG(WARNING,
 			"Can not create Netlink socket"
 			" for VLAN workaround context");
-		rte_free(vmwa);
+		mlx5_free(vmwa);
 		return NULL;
 	}
 	vmwa->vf_ifindex = ifindex;
@@ -323,5 +325,5 @@ void mlx5_vlan_vmwa_exit(struct mlx5_nl_vlan_vmwa_context *vmwa)
 	}
 	if (vmwa->nl_socket >= 0)
 		close(vmwa->nl_socket);
-	rte_free(vmwa);
+	mlx5_free(vmwa);
 }
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH 4/7] common/mlx5: convert control path memory to unified malloc
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (2 preceding siblings ...)
  2020-07-15  3:59 ` [dpdk-dev] [PATCH 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
@ 2020-07-15  4:00 ` Suanming Mou
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 5/7] common/mlx5: convert data path objects " Suanming Mou
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  4:00 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the control path object memory from the unified
malloc function.

These objects are all used during instance initialization, so the change
does not affect the data path.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/linux/mlx5_glue.c | 13 +++---
 drivers/common/mlx5/linux/mlx5_nl.c   |  5 ++-
 drivers/common/mlx5/mlx5_common_mp.c  |  7 ++--
 drivers/common/mlx5/mlx5_devx_cmds.c  | 75 +++++++++++++++++++----------------
 4 files changed, 55 insertions(+), 45 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 395519d..48d2808 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -184,7 +184,7 @@
 		res = ibv_destroy_flow_action(attr->action);
 		break;
 	}
-	free(action);
+	mlx5_free(action);
 	return res;
 #endif
 #else
@@ -617,7 +617,7 @@
 	struct mlx5dv_flow_action_attr *action;
 
 	(void)offset;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX;
@@ -641,7 +641,7 @@
 #else
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_DEST_IBV_QP;
@@ -686,7 +686,7 @@
 
 	(void)domain;
 	(void)flags;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -726,7 +726,7 @@
 	(void)flags;
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -755,7 +755,8 @@
 	return mlx5dv_dr_action_create_tag(tag);
 #else /* HAVE_MLX5DV_DR */
 	struct mlx5dv_flow_action_attr *action;
-	action = malloc(sizeof(*action));
+
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_TAG;
diff --git a/drivers/common/mlx5/linux/mlx5_nl.c b/drivers/common/mlx5/linux/mlx5_nl.c
index dc504d8..8ab7f6b 100644
--- a/drivers/common/mlx5/linux/mlx5_nl.c
+++ b/drivers/common/mlx5/linux/mlx5_nl.c
@@ -22,6 +22,7 @@
 
 #include "mlx5_nl.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 #ifdef HAVE_DEVLINK
 #include <linux/devlink.h>
 #endif
@@ -330,7 +331,7 @@ struct mlx5_nl_ifindex_data {
 	     void *arg)
 {
 	struct sockaddr_nl sa;
-	void *buf = malloc(MLX5_RECV_BUF_SIZE);
+	void *buf = mlx5_malloc(0, MLX5_RECV_BUF_SIZE, 0, SOCKET_ID_ANY);
 	struct iovec iov = {
 		.iov_base = buf,
 		.iov_len = MLX5_RECV_BUF_SIZE,
@@ -393,7 +394,7 @@ struct mlx5_nl_ifindex_data {
 		}
 	} while (multipart);
 exit:
-	free(buf);
+	mlx5_free(buf);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_common_mp.c b/drivers/common/mlx5/mlx5_common_mp.c
index da55143..40e3956 100644
--- a/drivers/common/mlx5/mlx5_common_mp.c
+++ b/drivers/common/mlx5/mlx5_common_mp.c
@@ -11,6 +11,7 @@
 
 #include "mlx5_common_mp.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 /**
  * Request Memory Region creation to the primary process.
@@ -49,7 +50,7 @@
 	ret = res->result;
 	if (ret)
 		rte_errno = -ret;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -89,7 +90,7 @@
 	mp_res = &mp_rep.msgs[0];
 	res = (struct mlx5_mp_param *)mp_res->param;
 	ret = res->result;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -136,7 +137,7 @@
 	DRV_LOG(DEBUG, "port %u command FD from primary is %d",
 		mp_id->port_id, ret);
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 2179a83..af2863e 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -9,6 +9,7 @@
 #include "mlx5_prm.h"
 #include "mlx5_devx_cmds.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 
 /**
@@ -28,7 +29,8 @@
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
 {
-	struct mlx5_devx_obj *dcs = rte_zmalloc("dcs", sizeof(*dcs), 0);
+	struct mlx5_devx_obj *dcs = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dcs),
+						0, SOCKET_ID_ANY);
 	uint32_t in[MLX5_ST_SZ_DW(alloc_flow_counter_in)]   = {0};
 	uint32_t out[MLX5_ST_SZ_DW(alloc_flow_counter_out)] = {0};
 
@@ -44,7 +46,7 @@ struct mlx5_devx_obj *
 	if (!dcs->obj) {
 		DRV_LOG(ERR, "Can't allocate counters - error %d", errno);
 		rte_errno = errno;
-		rte_free(dcs);
+		mlx5_free(dcs);
 		return NULL;
 	}
 	dcs->id = MLX5_GET(alloc_flow_counter_out, out, flow_counter_id);
@@ -149,7 +151,8 @@ struct mlx5_devx_obj *
 	uint32_t in[in_size_dw];
 	uint32_t out[MLX5_ST_SZ_DW(create_mkey_out)] = {0};
 	void *mkc;
-	struct mlx5_devx_obj *mkey = rte_zmalloc("mkey", sizeof(*mkey), 0);
+	struct mlx5_devx_obj *mkey = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mkey),
+						 0, SOCKET_ID_ANY);
 	size_t pgsize;
 	uint32_t translation_size;
 
@@ -208,7 +211,7 @@ struct mlx5_devx_obj *
 		DRV_LOG(ERR, "Can't create %sdirect mkey - error %d\n",
 			klm_num ? "an in" : "a ", errno);
 		rte_errno = errno;
-		rte_free(mkey);
+		mlx5_free(mkey);
 		return NULL;
 	}
 	mkey->id = MLX5_GET(create_mkey_out, out, mkey_index);
@@ -260,7 +263,7 @@ struct mlx5_devx_obj *
 	if (!obj)
 		return 0;
 	ret =  mlx5_glue->devx_obj_destroy(obj->obj);
-	rte_free(obj);
+	mlx5_free(obj);
 	return ret;
 }
 
@@ -671,7 +674,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *rq = NULL;
 
-	rq = rte_calloc_socket(__func__, 1, sizeof(*rq), 0, socket);
+	rq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rq), 0, socket);
 	if (!rq) {
 		DRV_LOG(ERR, "Failed to allocate RQ data");
 		rte_errno = ENOMEM;
@@ -699,7 +702,7 @@ struct mlx5_devx_obj *
 	if (!rq->obj) {
 		DRV_LOG(ERR, "Failed to create RQ using DevX");
 		rte_errno = errno;
-		rte_free(rq);
+		mlx5_free(rq);
 		return NULL;
 	}
 	rq->id = MLX5_GET(create_rq_out, out, rqn);
@@ -776,7 +779,7 @@ struct mlx5_devx_obj *
 	void *tir_ctx, *outer, *inner, *rss_key;
 	struct mlx5_devx_obj *tir = NULL;
 
-	tir = rte_calloc(__func__, 1, sizeof(*tir), 0);
+	tir = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tir), 0, SOCKET_ID_ANY);
 	if (!tir) {
 		DRV_LOG(ERR, "Failed to allocate TIR data");
 		rte_errno = ENOMEM;
@@ -818,7 +821,7 @@ struct mlx5_devx_obj *
 	if (!tir->obj) {
 		DRV_LOG(ERR, "Failed to create TIR using DevX");
 		rte_errno = errno;
-		rte_free(tir);
+		mlx5_free(tir);
 		return NULL;
 	}
 	tir->id = MLX5_GET(create_tir_out, out, tirn);
@@ -848,17 +851,17 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *rqt = NULL;
 	int i;
 
-	in = rte_calloc(__func__, 1, inlen, 0);
+	in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	if (!in) {
 		DRV_LOG(ERR, "Failed to allocate RQT IN data");
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	rqt = rte_calloc(__func__, 1, sizeof(*rqt), 0);
+	rqt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt), 0, SOCKET_ID_ANY);
 	if (!rqt) {
 		DRV_LOG(ERR, "Failed to allocate RQT data");
 		rte_errno = ENOMEM;
-		rte_free(in);
+		mlx5_free(in);
 		return NULL;
 	}
 	MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT);
@@ -869,11 +872,11 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	rqt->obj = mlx5_glue->devx_obj_create(ctx, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (!rqt->obj) {
 		DRV_LOG(ERR, "Failed to create RQT using DevX");
 		rte_errno = errno;
-		rte_free(rqt);
+		mlx5_free(rqt);
 		return NULL;
 	}
 	rqt->id = MLX5_GET(create_rqt_out, out, rqtn);
@@ -898,7 +901,7 @@ struct mlx5_devx_obj *
 	uint32_t inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) +
 			 rqt_attr->rqt_actual_size * sizeof(uint32_t);
 	uint32_t out[MLX5_ST_SZ_DW(modify_rqt_out)] = {0};
-	uint32_t *in = rte_calloc(__func__, 1, inlen, 0);
+	uint32_t *in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	void *rqt_ctx;
 	int i;
 	int ret;
@@ -918,7 +921,7 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	ret = mlx5_glue->devx_obj_modify(rqt->obj, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify RQT using DevX.");
 		rte_errno = errno;
@@ -951,7 +954,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *sq = NULL;
 
-	sq = rte_calloc(__func__, 1, sizeof(*sq), 0);
+	sq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*sq), 0, SOCKET_ID_ANY);
 	if (!sq) {
 		DRV_LOG(ERR, "Failed to allocate SQ data");
 		rte_errno = ENOMEM;
@@ -985,7 +988,7 @@ struct mlx5_devx_obj *
 	if (!sq->obj) {
 		DRV_LOG(ERR, "Failed to create SQ using DevX");
 		rte_errno = errno;
-		rte_free(sq);
+		mlx5_free(sq);
 		return NULL;
 	}
 	sq->id = MLX5_GET(create_sq_out, out, sqn);
@@ -1049,7 +1052,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *tis = NULL;
 	void *tis_ctx;
 
-	tis = rte_calloc(__func__, 1, sizeof(*tis), 0);
+	tis = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tis), 0, SOCKET_ID_ANY);
 	if (!tis) {
 		DRV_LOG(ERR, "Failed to allocate TIS object");
 		rte_errno = ENOMEM;
@@ -1069,7 +1072,7 @@ struct mlx5_devx_obj *
 	if (!tis->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(tis);
+		mlx5_free(tis);
 		return NULL;
 	}
 	tis->id = MLX5_GET(create_tis_out, out, tisn);
@@ -1091,7 +1094,7 @@ struct mlx5_devx_obj *
 	uint32_t out[MLX5_ST_SZ_DW(alloc_transport_domain_out)] = {0};
 	struct mlx5_devx_obj *td = NULL;
 
-	td = rte_calloc(__func__, 1, sizeof(*td), 0);
+	td = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*td), 0, SOCKET_ID_ANY);
 	if (!td) {
 		DRV_LOG(ERR, "Failed to allocate TD object");
 		rte_errno = ENOMEM;
@@ -1104,7 +1107,7 @@ struct mlx5_devx_obj *
 	if (!td->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(td);
+		mlx5_free(td);
 		return NULL;
 	}
 	td->id = MLX5_GET(alloc_transport_domain_out, out,
@@ -1168,8 +1171,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_cq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_cq_out)] = {0};
-	struct mlx5_devx_obj *cq_obj = rte_zmalloc(__func__, sizeof(*cq_obj),
-						   0);
+	struct mlx5_devx_obj *cq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*cq_obj),
+						   0, SOCKET_ID_ANY);
 	void *cqctx = MLX5_ADDR_OF(create_cq_in, in, cq_context);
 
 	if (!cq_obj) {
@@ -1203,7 +1207,7 @@ struct mlx5_devx_obj *
 	if (!cq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create CQ using DevX errno=%d.", errno);
-		rte_free(cq_obj);
+		mlx5_free(cq_obj);
 		return NULL;
 	}
 	cq_obj->id = MLX5_GET(create_cq_out, out, cqn);
@@ -1227,8 +1231,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *virtq_obj = rte_zmalloc(__func__,
-						     sizeof(*virtq_obj), 0);
+	struct mlx5_devx_obj *virtq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						     sizeof(*virtq_obj),
+						     0, SOCKET_ID_ANY);
 	void *virtq = MLX5_ADDR_OF(create_virtq_in, in, virtq);
 	void *hdr = MLX5_ADDR_OF(create_virtq_in, in, hdr);
 	void *virtctx = MLX5_ADDR_OF(virtio_net_q, virtq, virtio_q_context);
@@ -1276,7 +1281,7 @@ struct mlx5_devx_obj *
 	if (!virtq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create VIRTQ Obj using DevX.");
-		rte_free(virtq_obj);
+		mlx5_free(virtq_obj);
 		return NULL;
 	}
 	virtq_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
@@ -1398,8 +1403,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_qp_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_qp_out)] = {0};
-	struct mlx5_devx_obj *qp_obj = rte_zmalloc(__func__, sizeof(*qp_obj),
-						   0);
+	struct mlx5_devx_obj *qp_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*qp_obj),
+						   0, SOCKET_ID_ANY);
 	void *qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
 
 	if (!qp_obj) {
@@ -1454,7 +1460,7 @@ struct mlx5_devx_obj *
 	if (!qp_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create QP Obj using DevX.");
-		rte_free(qp_obj);
+		mlx5_free(qp_obj);
 		return NULL;
 	}
 	qp_obj->id = MLX5_GET(create_qp_out, out, qpn);
@@ -1550,8 +1556,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
-						       sizeof(*couners_obj), 0);
+	struct mlx5_devx_obj *couners_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						       sizeof(*couners_obj), 0,
+						       SOCKET_ID_ANY);
 	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
 
 	if (!couners_obj) {
@@ -1569,7 +1576,7 @@ struct mlx5_devx_obj *
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
 			" DevX.");
-		rte_free(couners_obj);
+		mlx5_free(couners_obj);
 		return NULL;
 	}
 	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH 5/7] common/mlx5: convert data path objects to unified malloc
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (3 preceding siblings ...)
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 4/7] common/mlx5: " Suanming Mou
@ 2020-07-15  4:00 ` Suanming Mou
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 6/7] net/mlx5: convert configuration " Suanming Mou
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  4:00 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the data path object page and B-tree table memory
through the unified malloc function with the explicit MLX5_MEM_RTE flag.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
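
The conversion pattern in this patch replaces
rte_calloc_socket(name, n, size, align, socket) with
mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, n * size, align, socket).
As a rough model of what the new call provides, here is a toy stand-in
backed by the C library; it is a sketch only — the real mlx5_malloc()
dispatches between rte and system memory and honors the NUMA socket
hint, which this version ignores, and the MLX5_MEM_ZERO bit value is an
assumption.

```c
#define _POSIX_C_SOURCE 200112L	/* for posix_memalign() */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MLX5_MEM_ZERO	(1u << 2)	/* assumed bit value, sketch only */
#define SOCKET_ID_ANY	(-1)

/* Toy stand-in: same signature as the patch's mlx5_malloc(), but always
 * backed by the system allocator and ignoring NUMA placement. */
static void *
mlx5_malloc(unsigned int flags, size_t size, unsigned int align, int socket)
{
	void *buf;

	(void)socket;			/* NUMA hint ignored in this sketch */
	if (align < sizeof(void *))
		align = sizeof(void *);	/* posix_memalign() minimum */
	if (posix_memalign(&buf, align, size))
		return NULL;
	if (flags & MLX5_MEM_ZERO)
		memset(buf, 0, size);	/* calloc-style zeroing on demand */
	return buf;
}

static void
mlx5_free(void *addr)
{
	free(addr);
}
```

Folding the zeroing into a flag is what lets one entry point replace
rte_zmalloc, rte_calloc and rte_malloc alike.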
---
 drivers/common/mlx5/mlx5_common.c    | 10 ++++++----
 drivers/common/mlx5/mlx5_common_mr.c | 31 +++++++++++++++----------------
 2 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 693e2c6..17168e6 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -13,6 +13,7 @@
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 int mlx5_common_logtype;
 
@@ -169,8 +170,9 @@ static inline void mlx5_cpu_id(unsigned int level,
 	struct mlx5_devx_dbr_page *page;
 
 	/* Allocate space for door-bell page and management data. */
-	page = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_devx_dbr_page),
-				 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_devx_dbr_page),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!page) {
 		DRV_LOG(ERR, "cannot allocate dbr page");
 		return NULL;
@@ -180,7 +182,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 					      MLX5_DBR_PAGE_SIZE, 0);
 	if (!page->umem) {
 		DRV_LOG(ERR, "cannot umem reg dbr page");
-		rte_free(page);
+		mlx5_free(page);
 		return NULL;
 	}
 	return page;
@@ -261,7 +263,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 		LIST_REMOVE(page, next);
 		if (page->umem)
 			ret = -mlx5_glue->devx_umem_dereg(page->umem);
-		rte_free(page);
+		mlx5_free(page);
 	} else {
 		/* Mark in bitmap that this door-bell is not in use. */
 		offset /= MLX5_DBR_SIZE;
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 564d618..23324c0 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -12,6 +12,7 @@
 #include "mlx5_common_mp.h"
 #include "mlx5_common_mr.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 struct mr_find_contig_memsegs_data {
 	uintptr_t addr;
@@ -47,7 +48,8 @@ struct mr_find_contig_memsegs_data {
 	 * Initially cache_bh[] will be given practically enough space and once
 	 * it is expanded, expansion wouldn't be needed again ever.
 	 */
-	mem = rte_realloc(bt->table, n * sizeof(struct mr_cache_entry), 0);
+	mem = mlx5_realloc(bt->table, MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   n * sizeof(struct mr_cache_entry), 0, SOCKET_ID_ANY);
 	if (mem == NULL) {
 		/* Not an error, B-tree search will be skipped. */
 		DRV_LOG(WARNING, "failed to expand MR B-tree (%p) table",
@@ -180,9 +182,9 @@ struct mr_find_contig_memsegs_data {
 	}
 	MLX5_ASSERT(!bt->table && !bt->size);
 	memset(bt, 0, sizeof(*bt));
-	bt->table = rte_calloc_socket("B-tree table",
-				      n, sizeof(struct mr_cache_entry),
-				      0, socket);
+	bt->table = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				sizeof(struct mr_cache_entry) * n,
+				0, socket);
 	if (bt->table == NULL) {
 		rte_errno = ENOMEM;
 		DEBUG("failed to allocate memory for btree cache on socket %d",
@@ -212,7 +214,7 @@ struct mr_find_contig_memsegs_data {
 		return;
 	DEBUG("freeing B-tree %p with table %p",
 	      (void *)bt, (void *)bt->table);
-	rte_free(bt->table);
+	mlx5_free(bt->table);
 	memset(bt, 0, sizeof(*bt));
 }
 
@@ -443,7 +445,7 @@ struct mlx5_mr *
 	dereg_mr_cb(&mr->pmd_mr);
 	if (mr->ms_bmp != NULL)
 		rte_bitmap_free(mr->ms_bmp);
-	rte_free(mr);
+	mlx5_free(mr);
 }
 
 void
@@ -650,11 +652,9 @@ struct mlx5_mr *
 	      (void *)addr, data.start, data.end, msl->page_sz, ms_n);
 	/* Size of memory for bitmap. */
 	bmp_size = rte_bitmap_get_memory_footprint(ms_n);
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE) +
-				bmp_size,
-				RTE_CACHE_LINE_SIZE, msl->socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE) +
+			 bmp_size, RTE_CACHE_LINE_SIZE, msl->socket_id);
 	if (mr == NULL) {
 		DEBUG("Unable to allocate memory for a new MR of"
 		      " address (%p).", (void *)addr);
@@ -1033,10 +1033,9 @@ struct mlx5_mr *
 {
 	struct mlx5_mr *mr = NULL;
 
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE),
-				RTE_CACHE_LINE_SIZE, socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE),
+			 RTE_CACHE_LINE_SIZE, socket_id);
 	if (mr == NULL)
 		return NULL;
 	reg_mr_cb(pd, (void *)addr, len, &mr->pmd_mr);
@@ -1044,7 +1043,7 @@ struct mlx5_mr *
 		DRV_LOG(WARNING,
 			"Fail to create MR for address (%p)",
 			(void *)addr);
-		rte_free(mr);
+		mlx5_free(mr);
 		return NULL;
 	}
 	mr->msl = NULL; /* Mark it is external memory. */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH 6/7] net/mlx5: convert configuration objects to unified malloc
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (4 preceding siblings ...)
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 5/7] common/mlx5: convert data path objects " Suanming Mou
@ 2020-07-15  4:00 ` Suanming Mou
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  4:00 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the miscellaneous configuration objects from the
unified malloc function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/linux/mlx5_ethdev_os.c |  8 +++++---
 drivers/net/mlx5/linux/mlx5_os.c        | 26 +++++++++++++-------------
 drivers/net/mlx5/mlx5.c                 | 14 +++++++-------
 3 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 701614a..6b8a151 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -38,6 +38,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -1162,8 +1163,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	eeprom = rte_calloc(__func__, 1,
-			    (sizeof(struct ethtool_eeprom) + info->length), 0);
+	eeprom = mlx5_malloc(MLX5_MEM_ZERO,
+			     (sizeof(struct ethtool_eeprom) + info->length), 0,
+			     SOCKET_ID_ANY);
 	if (!eeprom) {
 		DRV_LOG(WARNING, "port %u cannot allocate memory for "
 			"eeprom data", dev->data->port_id);
@@ -1182,6 +1184,6 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 			dev->data->port_id, strerror(rte_errno));
 	else
 		rte_memcpy(info->data, eeprom->data, info->length);
-	rte_free(eeprom);
+	mlx5_free(eeprom);
 	return ret;
 }
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index d5acef0..1698f2c 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -163,7 +163,7 @@
 		socket = ctrl->socket;
 	}
 	MLX5_ASSERT(data != NULL);
-	ret = rte_malloc_socket(__func__, size, alignment, socket);
+	ret = mlx5_malloc(0, size, alignment, socket);
 	if (!ret && size)
 		rte_errno = ENOMEM;
 	return ret;
@@ -181,7 +181,7 @@
 mlx5_free_verbs_buf(void *ptr, void *data __rte_unused)
 {
 	MLX5_ASSERT(data != NULL);
-	rte_free(ptr);
+	mlx5_free(ptr);
 }
 
 /**
@@ -618,9 +618,9 @@
 			mlx5_glue->port_state_str(port_attr.state),
 			port_attr.state);
 	/* Allocate private eth device data. */
-	priv = rte_zmalloc("ethdev private structure",
+	priv = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			   sizeof(*priv),
-			   RTE_CACHE_LINE_SIZE);
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (priv == NULL) {
 		DRV_LOG(ERR, "priv allocation failure");
 		err = ENOMEM;
@@ -1109,7 +1109,7 @@
 			mlx5_flow_id_pool_release(priv->qrss_id_pool);
 		if (own_domain_id)
 			claim_zero(rte_eth_switch_domain_free(priv->domain_id));
-		rte_free(priv);
+		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
 	}
@@ -1428,10 +1428,10 @@
 	 * Now we can determine the maximal
 	 * amount of devices to be spawned.
 	 */
-	list = rte_zmalloc("device spawn data",
-			 sizeof(struct mlx5_dev_spawn_data) *
-			 (np ? np : nd),
-			 RTE_CACHE_LINE_SIZE);
+	list = mlx5_malloc(MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_dev_spawn_data) *
+			   (np ? np : nd),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!list) {
 		DRV_LOG(ERR, "spawn data array allocation failure");
 		rte_errno = ENOMEM;
@@ -1722,7 +1722,7 @@
 	if (nl_route >= 0)
 		close(nl_route);
 	if (list)
-		rte_free(list);
+		mlx5_free(list);
 	MLX5_ASSERT(ibv_list);
 	mlx5_glue->free_device_list(ibv_list);
 	return ret;
@@ -2200,8 +2200,8 @@
 	/* Allocate memory to grab stat names and values. */
 	str_sz = dev_stats_n * ETH_GSTRING_LEN;
 	strings = (struct ethtool_gstrings *)
-		  rte_malloc("xstats_strings",
-			     str_sz + sizeof(struct ethtool_gstrings), 0);
+		  mlx5_malloc(0, str_sz + sizeof(struct ethtool_gstrings), 0,
+			      SOCKET_ID_ANY);
 	if (!strings) {
 		DRV_LOG(WARNING, "port %u unable to allocate memory for xstats",
 		     dev->data->port_id);
@@ -2251,7 +2251,7 @@
 	mlx5_os_read_dev_stat(priv, "out_of_buffer", &stats_ctrl->imissed_base);
 	stats_ctrl->imissed = 0;
 free:
-	rte_free(strings);
+	mlx5_free(strings);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index ba86c68..daf65f3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -644,11 +644,11 @@ struct mlx5_dev_ctx_shared *
 	}
 	/* No device found, we have to create new shared context. */
 	MLX5_ASSERT(spawn->max_port);
-	sh = rte_zmalloc("ethdev shared ib context",
+	sh = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			 sizeof(struct mlx5_dev_ctx_shared) +
 			 spawn->max_port *
 			 sizeof(struct mlx5_dev_shared_port),
-			 RTE_CACHE_LINE_SIZE);
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!sh) {
 		DRV_LOG(ERR, "shared context allocation failure");
 		rte_errno  = ENOMEM;
@@ -764,7 +764,7 @@ struct mlx5_dev_ctx_shared *
 		claim_zero(mlx5_glue->close_device(sh->ctx));
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
-	rte_free(sh);
+	mlx5_free(sh);
 	MLX5_ASSERT(err > 0);
 	rte_errno = err;
 	return NULL;
@@ -829,7 +829,7 @@ struct mlx5_dev_ctx_shared *
 		claim_zero(mlx5_glue->close_device(sh->ctx));
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
-	rte_free(sh);
+	mlx5_free(sh);
 exit:
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 }
@@ -1089,8 +1089,8 @@ struct mlx5_dev_ctx_shared *
 	 */
 	ppriv_size =
 		sizeof(struct mlx5_proc_priv) + priv->txqs_n * sizeof(void *);
-	ppriv = rte_malloc_socket("mlx5_proc_priv", ppriv_size,
-				  RTE_CACHE_LINE_SIZE, dev->device->numa_node);
+	ppriv = mlx5_malloc(MLX5_MEM_RTE, ppriv_size, RTE_CACHE_LINE_SIZE,
+			    dev->device->numa_node);
 	if (!ppriv) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -1111,7 +1111,7 @@ struct mlx5_dev_ctx_shared *
 {
 	if (!dev->process_private)
 		return;
-	rte_free(dev->process_private);
+	mlx5_free(dev->process_private);
 	dev->process_private = NULL;
 }
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH 7/7] net/mlx5: convert Rx/Tx queue objects to unified malloc
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (5 preceding siblings ...)
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 6/7] net/mlx5: convert configuration " Suanming Mou
@ 2020-07-15  4:00 ` Suanming Mou
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  8 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-15  4:00 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the Rx/Tx queue objects from the unified malloc
function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
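
Several of the converted call sites in this patch allocate the queue
control structure and its mbuf pointer array in one block, sized as
sizeof(*tmpl) + desc * sizeof(struct rte_mbuf *). Below is a minimal
sketch of that single-allocation layout; struct mbuf and txq_ctrl here
are simplified stand-ins for illustration, not the driver's
definitions, and plain calloc() stands in for the unified malloc.

```c
#include <assert.h>
#include <stdlib.h>

struct mbuf;			/* opaque element type, stand-in for rte_mbuf */

struct txq_ctrl {
	unsigned int elts_n;	/* number of descriptors */
	struct mbuf *elts[];	/* flexible array member: sits right after
				 * the struct in the same allocation */
};

static struct txq_ctrl *
txq_ctrl_new(unsigned int desc)
{
	struct txq_ctrl *tmpl;

	/* One zeroed block covers both the struct and the trailing
	 * array, mirroring the patch's
	 * mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
	 *             sizeof(*tmpl) + desc * sizeof(struct rte_mbuf *),
	 *             0, socket) call. */
	tmpl = calloc(1, sizeof(*tmpl) + desc * sizeof(struct mbuf *));
	if (tmpl == NULL)
		return NULL;
	tmpl->elts_n = desc;
	return tmpl;
}
```

A single free then releases the control structure and the element array
together, which is why the error paths above need only one mlx5_free().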
---
 drivers/net/mlx5/mlx5_rxq.c | 37 ++++++++++++++++++-------------------
 drivers/net/mlx5/mlx5_txq.c | 44 +++++++++++++++++++++-----------------------
 2 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c8e3a82..9c9cc3a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -641,7 +641,7 @@
 rxq_release_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	if (rxq_ctrl->rxq.wqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
 		rxq_ctrl->rxq.wqes = NULL;
 	}
 	if (rxq_ctrl->wq_umem) {
@@ -707,7 +707,7 @@
 			claim_zero(mlx5_glue->destroy_comp_channel
 				   (rxq_obj->channel));
 		LIST_REMOVE(rxq_obj, next);
-		rte_free(rxq_obj);
+		mlx5_free(rxq_obj);
 		return 0;
 	}
 	return 1;
@@ -1233,15 +1233,15 @@
 	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
 	wq_size = wqe_n * wqe_size;
-	buf = rte_calloc_socket(__func__, 1, wq_size, MLX5_WQE_BUF_ALIGNMENT,
-				rxq_ctrl->socket);
+	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
+			  MLX5_WQE_BUF_ALIGNMENT, rxq_ctrl->socket);
 	if (!buf)
 		return NULL;
 	rxq_data->wqes = buf;
 	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
 						     buf, wq_size, 0);
 	if (!rxq_ctrl->wq_umem) {
-		rte_free(buf);
+		mlx5_free(buf);
 		return NULL;
 	}
 	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
@@ -1275,8 +1275,8 @@
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(!rxq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1294,7 +1294,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -1314,7 +1314,7 @@
 		DRV_LOG(ERR,
 			"port %u Rx hairpin queue %u can't create rq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -1362,8 +1362,8 @@ struct mlx5_rxq_obj *
 		return mlx5_rxq_obj_hairpin_new(dev, idx);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1503,7 +1503,7 @@ struct mlx5_rxq_obj *
 		if (tmpl->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
 							(tmpl->channel));
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = ret; /* Restore rte_errno. */
 	}
 	if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
@@ -1825,10 +1825,8 @@ struct mlx5_rxq_ctrl *
 		rte_errno = ENOSPC;
 		return NULL;
 	}
-	tmpl = rte_calloc_socket("RXQ", 1,
-				 sizeof(*tmpl) +
-				 desc_n * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc_n * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2007,7 +2005,7 @@ struct mlx5_rxq_ctrl *
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -2033,7 +2031,8 @@ struct mlx5_rxq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("RXQ", 1, sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2112,7 +2111,7 @@ struct mlx5_rxq_ctrl *
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
-		rte_free(rxq_ctrl);
+		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
 		return 0;
 	}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 35b3ade..ac9e455 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -31,6 +31,7 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -521,8 +522,8 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(!txq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 txq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   txq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory resources",
@@ -541,7 +542,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -561,7 +562,7 @@
 		DRV_LOG(ERR,
 			"port %u tx hairpin queue %u can't create sq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -715,8 +716,9 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	txq_obj = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_obj), 0,
-				    txq_ctrl->socket);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
 	if (!txq_obj) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory",
 			dev->data->port_id, idx);
@@ -758,11 +760,9 @@ struct mlx5_txq_obj *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->fcqs = rte_calloc_socket(__func__,
-					   txq_data->cqe_s,
-					   sizeof(*txq_data->fcqs),
-					   RTE_CACHE_LINE_SIZE,
-					   txq_ctrl->socket);
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
 	if (!txq_data->fcqs) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
 			dev->data->port_id, idx);
@@ -818,9 +818,9 @@ struct mlx5_txq_obj *
 	if (tmpl.qp)
 		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
 	if (txq_data->fcqs)
-		rte_free(txq_data->fcqs);
+		mlx5_free(txq_data->fcqs);
 	if (txq_obj)
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	rte_errno = ret; /* Restore rte_errno. */
 	return NULL;
@@ -874,10 +874,10 @@ struct mlx5_txq_obj *
 			claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
 			claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
 				if (txq_obj->txq_ctrl->txq.fcqs)
-					rte_free(txq_obj->txq_ctrl->txq.fcqs);
+					mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
 		}
 		LIST_REMOVE(txq_obj, next);
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 		return 0;
 	}
 	return 1;
@@ -1293,10 +1293,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl) +
-				 desc * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1336,7 +1334,7 @@ struct mlx5_txq_ctrl *
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -1362,8 +1360,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1432,7 +1430,7 @@ struct mlx5_txq_ctrl *
 		txq_free_elts(txq);
 		mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq, next);
-		rte_free(txq);
+		mlx5_free(txq);
 		(*priv->txqs)[idx] = NULL;
 		return 0;
 	}
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (6 preceding siblings ...)
  2020-07-15  4:00 ` [dpdk-dev] [PATCH 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
@ 2020-07-16  9:20 ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
                     ` (6 more replies)
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  8 siblings, 7 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, for the MLX5 PMD, once millions of flows are created, their
memory consumption becomes very large. On a system with limited memory,
this means most of the memory must be reserved in advance as huge page
memory to serve the flows, and other normal applications then have no
chance to use the reserved memory. Since the system does not hold lots
of flows most of the time, the reserved huge page memory is largely
wasted.

With the new sys_mem_en devarg set to true, the PMD allocates memory
from the system by default, using the newly added mlx5 memory
management functions. Only when the MLX5_MEM_RTE flag is set explicitly
is the memory allocated from rte; otherwise, it is allocated from the
system.

In this case, a system with limited memory no longer needs to reserve
most of its memory as hugepages. Reserving only the memory needed for
the datapath objects, which are allocated with the explicit flag, is
enough; other memory is allocated from the system. A system with enough
memory does not need to care about the devarg: the memory will always
come from rte hugepages.

One restriction is that for a DPDK application with multiple PCI
devices, if the sys_mem_en devargs differ between the devices,
sys_mem_en only takes the value from the first device's devargs, and a
warning message is printed about that.
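
The allocation policy above can be sketched as a small decision
function. This is an illustrative model only: MLX5_MEM_RTE and
MLX5_MEM_ZERO appear in the patches, while MLX5_MEM_SYS, the helper
name mlx5_mem_is_rte() and the flag bit values are assumptions for
demonstration, not the driver's implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Flag names follow the patch series; the bit values and MLX5_MEM_SYS
 * are assumptions made for this sketch. */
#define MLX5_MEM_SYS	(1u << 0)	/* force system memory */
#define MLX5_MEM_RTE	(1u << 1)	/* force rte hugepage memory */
#define MLX5_MEM_ZERO	(1u << 2)	/* zero-fill the buffer */

static bool sys_mem_en = true;	/* value parsed from the sys_mem_en devarg */

/* Decide whether an allocation request is served from rte hugepages. */
static bool
mlx5_mem_is_rte(unsigned int flags)
{
	if (flags & MLX5_MEM_RTE)
		return true;		/* the explicit flag always wins */
	if (flags & MLX5_MEM_SYS)
		return false;
	return !sys_mem_en;		/* default follows the devarg */
}
```

With sys_mem_en enabled, only the datapath objects that pass
MLX5_MEM_RTE land on hugepages; everything else goes to the system
allocator.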

---

v2:
 - Add memory function call statistics.
 - Change msl to atomic.

---

Suanming Mou (7):
  common/mlx5: add mlx5 memory management functions
  net/mlx5: add allocate memory from system devarg
  net/mlx5: convert control path memory to unified malloc
  common/mlx5: convert control path memory to unified malloc
  common/mlx5: convert data path objects to unified malloc
  net/mlx5: convert configuration objects to unified malloc
  net/mlx5: convert Rx/Tx queue objects to unified malloc

 doc/guides/nics/mlx5.rst                        |   7 +
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/linux/mlx5_glue.c           |  13 +-
 drivers/common/mlx5/linux/mlx5_nl.c             |   5 +-
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_common.c               |  10 +-
 drivers/common/mlx5/mlx5_common_mp.c            |   7 +-
 drivers/common/mlx5/mlx5_common_mr.c            |  31 ++-
 drivers/common/mlx5/mlx5_devx_cmds.c            |  75 +++---
 drivers/common/mlx5/mlx5_malloc.c               | 306 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  99 ++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   6 +
 drivers/net/mlx5/linux/mlx5_ethdev_os.c         |   8 +-
 drivers/net/mlx5/linux/mlx5_os.c                |  28 ++-
 drivers/net/mlx5/mlx5.c                         | 108 +++++----
 drivers/net/mlx5/mlx5.h                         |   1 +
 drivers/net/mlx5/mlx5_ethdev.c                  |  15 +-
 drivers/net/mlx5/mlx5_flow.c                    |  45 ++--
 drivers/net/mlx5/mlx5_flow_dv.c                 |  46 ++--
 drivers/net/mlx5/mlx5_flow_meter.c              |  11 +-
 drivers/net/mlx5/mlx5_flow_verbs.c              |   8 +-
 drivers/net/mlx5/mlx5_mp.c                      |   3 +-
 drivers/net/mlx5/mlx5_rss.c                     |  13 +-
 drivers/net/mlx5/mlx5_rxq.c                     |  74 +++---
 drivers/net/mlx5/mlx5_txq.c                     |  44 ++--
 drivers/net/mlx5/mlx5_utils.c                   |  60 +++--
 drivers/net/mlx5/mlx5_utils.h                   |   2 +-
 drivers/net/mlx5/mlx5_vlan.c                    |   8 +-
 28 files changed, 759 insertions(+), 276 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 1/7] common/mlx5: add mlx5 memory management functions
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Add the internal mlx5 memory management functions:

mlx5_malloc_mem_select();
mlx5_memory_stat_dump();
mlx5_realloc();
mlx5_malloc();
mlx5_free();

These unified functions allow the user to manage memory from either
the system or rte memory.

In this case, a system with limited memory that cannot reserve lots of
rte hugepage memory in advance can allocate memory from the system for
less critical control path objects, based on the sys_mem_en
configuration.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_malloc.c               | 306 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  99 ++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   6 +
 5 files changed, 413 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

diff --git a/drivers/common/mlx5/Makefile b/drivers/common/mlx5/Makefile
index f6c762b..239d681 100644
--- a/drivers/common/mlx5/Makefile
+++ b/drivers/common/mlx5/Makefile
@@ -21,6 +21,7 @@ SRCS-y += linux/mlx5_nl.c
 SRCS-y += linux/mlx5_common_verbs.c
 SRCS-y += mlx5_common_mp.c
 SRCS-y += mlx5_common_mr.c
+SRCS-y += mlx5_malloc.c
 ifeq ($(CONFIG_RTE_IBVERBS_LINK_DLOPEN),y)
 INSTALL-y-lib += $(LIB_GLUE)
 endif
diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index ba43714..70e2c1c 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -13,6 +13,7 @@ sources += files(
 	'mlx5_common.c',
 	'mlx5_common_mp.c',
 	'mlx5_common_mr.c',
+	'mlx5_malloc.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c
new file mode 100644
index 0000000..316305d
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.c
@@ -0,0 +1,306 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include <errno.h>
+#include <rte_malloc.h>
+#include <malloc.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_atomic.h>
+
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+
+struct mlx5_sys_mem {
+	uint32_t init:1; /* Memory allocator initialized. */
+	uint32_t enable:1; /* System memory select. */
+	uint32_t reserve:30; /* Reserve. */
+	union {
+		struct rte_memseg_list *last_msl;
+		rte_atomic64_t a64_last_msl;
+	};
+	/* last allocated rte memory memseg list. */
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	rte_atomic64_t malloc_sys;
+	/* Memory allocated from system count. */
+	rte_atomic64_t malloc_rte;
+	/* Memory allocated from hugepage count. */
+	rte_atomic64_t realloc_sys;
+	/* Memory reallocate from system count. */
+	rte_atomic64_t realloc_rte;
+	/* Memory reallocate from hugepage count. */
+	rte_atomic64_t free_sys;
+	/* Memory free to system count. */
+	rte_atomic64_t free_rte;
+	/* Memory free to hugepage count. */
+	rte_atomic64_t msl_miss;
+	/* MSL miss count. */
+	rte_atomic64_t msl_update;
+	/* MSL update count. */
+#endif
+};
+
+/* Default: not initialized, rte memory selected. */
+static struct mlx5_sys_mem mlx5_sys_mem = {
+	.init = 0,
+	.enable = 0,
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	.malloc_sys = RTE_ATOMIC64_INIT(0),
+	.malloc_rte = RTE_ATOMIC64_INIT(0),
+	.realloc_sys = RTE_ATOMIC64_INIT(0),
+	.realloc_rte = RTE_ATOMIC64_INIT(0),
+	.free_sys = RTE_ATOMIC64_INIT(0),
+	.free_rte = RTE_ATOMIC64_INIT(0),
+	.msl_miss = RTE_ATOMIC64_INIT(0),
+	.msl_update = RTE_ATOMIC64_INIT(0),
+#endif
+};
+
+/**
+ * Check if the address belongs to memory seg list.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ * @param msl
+ *   Memory seg list.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_check_msl(void *addr, struct rte_memseg_list *msl)
+{
+	void *start, *end;
+
+	if (!msl)
+		return false;
+	start = msl->base_va;
+	end = RTE_PTR_ADD(start, msl->len);
+	if (addr >= start && addr < end)
+		return true;
+	return false;
+}
+
+/**
+ * Update the cached msl if the memory belongs to a new msl.
+ *
+ * @param addr
+ *   Memory address.
+ */
+static void
+mlx5_mem_update_msl(void *addr)
+{
+	/*
+	 * Update the cached msl if the new address comes from an msl
+	 * different from the cached one.
+	 */
+	if (addr && !mlx5_mem_check_msl(addr,
+	    (struct rte_memseg_list *)(uintptr_t)rte_atomic64_read
+	    (&mlx5_sys_mem.a64_last_msl))) {
+		rte_atomic64_set(&mlx5_sys_mem.a64_last_msl,
+			(int64_t)(uintptr_t)rte_mem_virt2memseg_list(addr));
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.msl_update);
+#endif
+	}
+}
+
+/**
+ * Check if the address belongs to rte memory.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_is_rte(void *addr)
+{
+	/*
+	 * Check if the last cache msl matches. Drop to slow path
+	 * to check if the memory belongs to rte memory.
+	 */
+	if (!mlx5_mem_check_msl(addr, (struct rte_memseg_list *)(uintptr_t)
+	    rte_atomic64_read(&mlx5_sys_mem.a64_last_msl))) {
+		if (!rte_mem_virt2memseg_list(addr))
+			return false;
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.msl_miss);
+#endif
+	}
+	return true;
+}
+
+/**
+ * Allocate memory with alignment.
+ *
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param zero
+ *   Clear the allocated memory or not.
+ *
+ * @return
+ *   Pointer to the allocated memory, NULL otherwise.
+ */
+static void *
+mlx5_alloc_align(size_t size, unsigned int align, unsigned int zero)
+{
+	void *buf;
+	buf = memalign(align, size);
+	if (!buf) {
+		DRV_LOG(ERR, "Couldn't allocate buf.\n");
+		return NULL;
+	}
+	if (zero)
+		memset(buf, 0, size);
+	return buf;
+}
+
+void *
+mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
+{
+	void *addr;
+	bool rte_mem;
+
+	/*
+	 * If neither system memory nor rte memory is required, allocate
+	 * memory according to mlx5_sys_mem.enable.
+	 */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	if (rte_mem) {
+		if (flags & MLX5_MEM_ZERO)
+			addr = rte_zmalloc_socket(NULL, size, align, socket);
+		else
+			addr = rte_malloc_socket(NULL, size, align, socket);
+		mlx5_mem_update_msl(addr);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		if (addr)
+			rte_atomic64_inc(&mlx5_sys_mem.malloc_rte);
+#endif
+		return addr;
+	}
+	/* The memory will be allocated from system. */
+	if (align)
+		addr = mlx5_alloc_align(size, align, !!(flags & MLX5_MEM_ZERO));
+	else if (flags & MLX5_MEM_ZERO)
+		addr = calloc(1, size);
+	else
+		addr = malloc(size);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	if (addr)
+		rte_atomic64_inc(&mlx5_sys_mem.malloc_sys);
+#endif
+	return addr;
+}
+
+void *
+mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+	     int socket)
+{
+	void *new_addr;
+	bool rte_mem;
+
+	/* Allocate directly if old memory address is NULL. */
+	if (!addr)
+		return mlx5_malloc(flags, size, align, socket);
+	/* Get the memory type. */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	/* Check if old memory and to be allocated memory are the same type. */
+	if (rte_mem != mlx5_mem_is_rte(addr)) {
+		DRV_LOG(ERR, "Couldn't reallocate to different memory type.");
+		return NULL;
+	}
+	/* Allocate memory from rte memory. */
+	if (rte_mem) {
+		new_addr = rte_realloc_socket(addr, size, align, socket);
+		mlx5_mem_update_msl(new_addr);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		if (new_addr)
+			rte_atomic64_inc(&mlx5_sys_mem.realloc_rte);
+#endif
+		return new_addr;
+	}
+	/* Align is not supported for system memory. */
+	if (align) {
+		DRV_LOG(ERR, "Couldn't reallocate with alignment");
+		return NULL;
+	}
+	new_addr = realloc(addr, size);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	if (new_addr)
+		rte_atomic64_inc(&mlx5_sys_mem.realloc_sys);
+#endif
+	return new_addr;
+}
+
+void
+mlx5_free(void *addr)
+{
+	if (addr == NULL)
+		return;
+	if (!mlx5_mem_is_rte(addr)) {
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.free_sys);
+#endif
+		free(addr);
+	} else {
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.free_rte);
+#endif
+		rte_free(addr);
+	}
+}
+
+void
+mlx5_memory_stat_dump(void)
+{
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	DRV_LOG(INFO, "System memory malloc:%"PRIi64", realloc:%"PRIi64","
+		" free:%"PRIi64"\nRTE memory malloc:%"PRIi64","
+		" realloc:%"PRIi64", free:%"PRIi64"\nMSL miss:%"PRIi64","
+		" update:%"PRIi64"",
+		rte_atomic64_read(&mlx5_sys_mem.malloc_sys),
+		rte_atomic64_read(&mlx5_sys_mem.realloc_sys),
+		rte_atomic64_read(&mlx5_sys_mem.free_sys),
+		rte_atomic64_read(&mlx5_sys_mem.malloc_rte),
+		rte_atomic64_read(&mlx5_sys_mem.realloc_rte),
+		rte_atomic64_read(&mlx5_sys_mem.free_rte),
+		rte_atomic64_read(&mlx5_sys_mem.msl_miss),
+		rte_atomic64_read(&mlx5_sys_mem.msl_update));
+#endif
+}
+
+void
+mlx5_malloc_mem_select(uint32_t sys_mem_en)
+{
+	/*
+	 * The initialization should be called only once and all devices
+	 * should use the same memory type. Otherwise, when a new device
+	 * is attached with a different memory allocation configuration,
+	 * memory operations may misbehave or fail.
+	 */
+	if (!mlx5_sys_mem.init) {
+		if (sys_mem_en)
+			mlx5_sys_mem.enable = 1;
+		mlx5_sys_mem.init = 1;
+		DRV_LOG(INFO, "%s is selected.", sys_mem_en ? "SYS_MEM" : "RTE_MEM");
+	} else if (mlx5_sys_mem.enable != sys_mem_en) {
+		DRV_LOG(WARNING, "%s is already selected.",
+			mlx5_sys_mem.enable ? "SYS_MEM" : "RTE_MEM");
+	}
+}
diff --git a/drivers/common/mlx5/mlx5_malloc.h b/drivers/common/mlx5/mlx5_malloc.h
new file mode 100644
index 0000000..d3e5f5b
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef MLX5_MALLOC_H_
+#define MLX5_MALLOC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum mlx5_mem_flags {
+	MLX5_MEM_ANY = 0,
+	/* Memory allocation depends on sys_mem_en. */
+	MLX5_MEM_SYS = 1 << 0,
+	/* Memory should be allocated from system. */
+	MLX5_MEM_RTE = 1 << 1,
+	/* Memory should be allocated from rte hugepage. */
+	MLX5_MEM_ZERO = 1 << 2,
+	/* Memory should be cleared to zero. */
+};
+
+/**
+ * Select the PMD memory allocation preference.
+ *
+ * Once sys_mem_en is set, memory is allocated from the system by
+ * default; an explicit flag is required to request memory from
+ * rte hugepage memory.
+ *
+ * @param sys_mem_en
+ *   Use system memory or not.
+ */
+__rte_internal
+void mlx5_malloc_mem_select(uint32_t sys_mem_en);
+
+/**
+ * Dump the PMD memory usage statistic.
+ */
+__rte_internal
+void mlx5_memory_stat_dump(void);
+
+/**
+ * Memory allocate function.
+ *
+ * @param flags
+ *   Bitwise combination of enum mlx5_mem_flags values.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket the memory should be allocated on.
+ *   Valid only when allocating from rte hugepage memory.
+ *
+ * @return
+ *   Pointer to the allocated memory, NULL otherwise.
+ */
+__rte_internal
+void *mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket);
+
+/**
+ * Memory reallocate function.
+ *
+ * If addr is NULL, the call behaves as mlx5_malloc().
+ * Reallocating between different memory types is not supported.
+ * @param addr
+ *   The memory to be reallocated.
+ * @param flags
+ *   Bitwise combination of enum mlx5_mem_flags values.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket the memory should be allocated on.
+ *   Valid only when allocating from rte hugepage memory.
+ *
+ * @return
+ *   Pointer to the allocated memory, NULL otherwise.
+ */
+
+__rte_internal
+void *mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+		   int socket);
+
+/**
+ * Memory free function.
+ *
+ * @param addr
+ *   The memory address to be freed.
+ */
+__rte_internal
+void mlx5_free(void *addr);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..381a455 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -81,5 +81,11 @@ INTERNAL {
 	mlx5_release_dbr;
 
 	mlx5_translate_port_name;
+
+	mlx5_malloc_mem_select;
+	mlx5_memory_stat_dump;
+	mlx5_malloc;
+	mlx5_realloc;
+	mlx5_free;
 };
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 2/7] net/mlx5: add allocate memory from system devarg
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, for the MLX5 PMD, once millions of flows are created, their
memory consumption is very high. A system with limited memory must
reserve most of its memory as hugepage memory to serve these flows in
advance, and other applications then have no chance to use it. Since
the system does not hold a large number of flows most of the time, the
reserved hugepage memory is mostly wasted.

The new sys_mem_en devarg, once set to true, lets the PMD allocate
memory from the system by default via the newly added mlx5 memory
management functions. Only when the MLX5_MEM_RTE flag is set is memory
allocated from rte; otherwise it is allocated from the system.

In this case, a system with limited memory does not need to reserve
most of it for hugepages. It is enough to allocate the memory needed
for datapath objects with the explicit flag; other memory is allocated
from the system. A system with plenty of memory need not care about
the devarg: memory always comes from rte hugepages.

One restriction: for a DPDK application with multiple PCI devices, if
the sys_mem_en devargs differ between devices, sys_mem_en takes its
value from the first device's devargs only, and a warning message is
printed.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/nics/mlx5.rst         | 7 +++++++
 drivers/net/mlx5/linux/mlx5_os.c | 2 ++
 drivers/net/mlx5/mlx5.c          | 6 ++++++
 drivers/net/mlx5/mlx5.h          | 1 +
 4 files changed, 16 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4b6d8fb..d86b5c7 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -879,6 +879,13 @@ Driver options
 
   By default, the PMD will set this value to 0.
 
+- ``sys_mem_en`` parameter [int]
+
+  A non-zero value enables the PMD memory management functions to allocate
+  memory from the system by default, unless an explicit rte memory flag is set.
+
+  By default, the PMD will set this value to 0.
+
 .. _mlx5_firmware_config:
 
 Firmware configuration
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2dc57b2..d5acef0 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -43,6 +43,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -495,6 +496,7 @@
 			strerror(rte_errno));
 		goto error;
 	}
+	mlx5_malloc_mem_select(config.sys_mem_en);
 	sh = mlx5_alloc_shared_dev_ctx(spawn, &config);
 	if (!sh)
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0c654ed..9b17266 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -167,6 +167,9 @@
 /* Flow memory reclaim mode. */
 #define MLX5_RECLAIM_MEM "reclaim_mem_mode"
 
+/* The default memory allocator used in PMD. */
+#define MLX5_SYS_MEM_EN "sys_mem_en"
+
 static const char *MZ_MLX5_PMD_SHARED_DATA = "mlx5_pmd_shared_data";
 
 /* Shared memory between primary and secondary processes. */
@@ -1374,6 +1377,8 @@ struct mlx5_dev_ctx_shared *
 			return -rte_errno;
 		}
 		config->reclaim_mode = tmp;
+	} else if (strcmp(MLX5_SYS_MEM_EN, key) == 0) {
+		config->sys_mem_en = !!tmp;
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -1430,6 +1435,7 @@ struct mlx5_dev_ctx_shared *
 		MLX5_CLASS_ARG_NAME,
 		MLX5_HP_BUF_SIZE,
 		MLX5_RECLAIM_MEM,
+		MLX5_SYS_MEM_EN,
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 46e66eb..967f5d8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -216,6 +216,7 @@ struct mlx5_dev_config {
 	unsigned int devx:1; /* Whether devx interface is available or not. */
 	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	unsigned int reclaim_mode:2; /* Memory reclaim mode. */
+	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int stride_num_n; /* Number of strides. */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 3/7] net/mlx5: convert control path memory to unified malloc
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: " Suanming Mou
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit converts the control path memory allocations to the
unified malloc functions.

The objects changed:

1. hlist;
2. rss key;
3. vlan vmwa;
4. indexed pool;
5. fdir objects;
6. meter profile;
7. flow counter pool;
8. hrxq and indirect table;
9. flow object cache resources;
10. temporary resources in flow create;

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c            | 88 ++++++++++++++++++++------------------
 drivers/net/mlx5/mlx5_ethdev.c     | 15 ++++---
 drivers/net/mlx5/mlx5_flow.c       | 45 +++++++++++--------
 drivers/net/mlx5/mlx5_flow_dv.c    | 46 +++++++++++---------
 drivers/net/mlx5/mlx5_flow_meter.c | 11 ++---
 drivers/net/mlx5/mlx5_flow_verbs.c |  8 ++--
 drivers/net/mlx5/mlx5_mp.c         |  3 +-
 drivers/net/mlx5/mlx5_rss.c        | 13 ++++--
 drivers/net/mlx5/mlx5_rxq.c        | 37 +++++++++-------
 drivers/net/mlx5/mlx5_utils.c      | 60 +++++++++++++++-----------
 drivers/net/mlx5/mlx5_utils.h      |  2 +-
 drivers/net/mlx5/mlx5_vlan.c       |  8 ++--
 12 files changed, 190 insertions(+), 146 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 9b17266..ba86c68 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -40,6 +40,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_os.h>
 #include <mlx5_common_mp.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -194,8 +195,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_encap_decap_ipool",
 	},
 	{
@@ -205,8 +206,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_push_vlan_ipool",
 	},
 	{
@@ -216,8 +217,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_tag_ipool",
 	},
 	{
@@ -227,8 +228,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_port_id_ipool",
 	},
 	{
@@ -238,8 +239,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_jump_ipool",
 	},
 #endif
@@ -250,8 +251,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_meter_ipool",
 	},
 	{
@@ -261,8 +262,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_mcp_ipool",
 	},
 	{
@@ -272,8 +273,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_hrxq_ipool",
 	},
 	{
@@ -287,8 +288,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
 	},
 	{
@@ -296,8 +297,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.trunk_size = 4096,
 		.need_lock = 1,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "rte_flow_ipool",
 	},
 };
@@ -323,15 +324,16 @@ struct mlx5_flow_id_pool *
 	struct mlx5_flow_id_pool *pool;
 	void *mem;
 
-	pool = rte_zmalloc("id pool allocation", sizeof(*pool),
-			   RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!pool) {
 		DRV_LOG(ERR, "can't allocate id pool");
 		rte_errno  = ENOMEM;
 		return NULL;
 	}
-	mem = rte_zmalloc("", MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
-			  RTE_CACHE_LINE_SIZE);
+	mem = mlx5_malloc(MLX5_MEM_ZERO,
+			  MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
+			  RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!mem) {
 		DRV_LOG(ERR, "can't allocate mem for id pool");
 		rte_errno  = ENOMEM;
@@ -344,7 +346,7 @@ struct mlx5_flow_id_pool *
 	pool->max_id = max_id;
 	return pool;
 error:
-	rte_free(pool);
+	mlx5_free(pool);
 	return NULL;
 }
 
@@ -357,8 +359,8 @@ struct mlx5_flow_id_pool *
 void
 mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool)
 {
-	rte_free(pool->free_arr);
-	rte_free(pool);
+	mlx5_free(pool->free_arr);
+	mlx5_free(pool);
 }
 
 /**
@@ -410,14 +412,15 @@ struct mlx5_flow_id_pool *
 		size = pool->curr - pool->free_arr;
 		size2 = size * MLX5_ID_GENERATION_ARRAY_FACTOR;
 		MLX5_ASSERT(size2 > size);
-		mem = rte_malloc("", size2 * sizeof(uint32_t), 0);
+		mem = mlx5_malloc(0, size2 * sizeof(uint32_t), 0,
+				  SOCKET_ID_ANY);
 		if (!mem) {
 			DRV_LOG(ERR, "can't allocate mem for id pool");
 			rte_errno  = ENOMEM;
 			return -rte_errno;
 		}
 		memcpy(mem, pool->free_arr, size * sizeof(uint32_t));
-		rte_free(pool->free_arr);
+		mlx5_free(pool->free_arr);
 		pool->free_arr = mem;
 		pool->curr = pool->free_arr + size;
 		pool->last = pool->free_arr + size2;
@@ -486,7 +489,7 @@ struct mlx5_flow_id_pool *
 	LIST_REMOVE(mng, next);
 	claim_zero(mlx5_devx_cmd_destroy(mng->dm));
 	claim_zero(mlx5_glue->devx_umem_dereg(mng->umem));
-	rte_free(mem);
+	mlx5_free(mem);
 }
 
 /**
@@ -534,10 +537,10 @@ struct mlx5_flow_id_pool *
 						    (pool, j)->dcs));
 			}
 			TAILQ_REMOVE(&sh->cmng.ccont[i].pool_list, pool, next);
-			rte_free(pool);
+			mlx5_free(pool);
 			pool = TAILQ_FIRST(&sh->cmng.ccont[i].pool_list);
 		}
-		rte_free(sh->cmng.ccont[i].pools);
+		mlx5_free(sh->cmng.ccont[i].pools);
 	}
 	mng = LIST_FIRST(&sh->cmng.mem_mngs);
 	while (mng) {
@@ -860,7 +863,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 1;
 	pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64);
@@ -869,7 +872,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 0;
 	table_key.domain = 1;
@@ -879,7 +882,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	mlx5_hlist_destroy(sh->flow_tbls, NULL, NULL);
 }
@@ -923,8 +926,9 @@ struct mlx5_dev_ctx_shared *
 			.direction = 0,
 		}
 	};
-	struct mlx5_flow_tbl_data_entry *tbl_data = rte_zmalloc(NULL,
-							  sizeof(*tbl_data), 0);
+	struct mlx5_flow_tbl_data_entry *tbl_data = mlx5_malloc(MLX5_MEM_ZERO,
+							  sizeof(*tbl_data), 0,
+							  SOCKET_ID_ANY);
 
 	if (!tbl_data) {
 		err = ENOMEM;
@@ -937,7 +941,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_init(&tbl_data->tbl.refcnt);
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -950,7 +955,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 0;
 	table_key.domain = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -1181,9 +1187,9 @@ struct mlx5_dev_ctx_shared *
 	mlx5_mprq_free_mp(dev);
 	mlx5_os_free_shared_dr(priv);
 	if (priv->rss_conf.rss_key != NULL)
-		rte_free(priv->rss_conf.rss_key);
+		mlx5_free(priv->rss_conf.rss_key);
 	if (priv->reta_idx != NULL)
-		rte_free(priv->reta_idx);
+		mlx5_free(priv->reta_idx);
 	if (priv->config.vf)
 		mlx5_nl_mac_addr_flush(priv->nl_socket_route, mlx5_ifindex(dev),
 				       dev->data->mac_addrs,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 6b4efcd..cefb450 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -21,6 +21,8 @@
 #include <rte_rwlock.h>
 #include <rte_cycles.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
 
@@ -75,8 +77,8 @@
 		return -rte_errno;
 	}
 	priv->rss_conf.rss_key =
-		rte_realloc(priv->rss_conf.rss_key,
-			    MLX5_RSS_HASH_KEY_LEN, 0);
+		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
+			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -142,7 +144,8 @@
 
 	if (priv->skip_default_rss_reta)
 		return ret;
-	rss_queue_arr = rte_malloc("", rxqs_n * sizeof(unsigned int), 0);
+	rss_queue_arr = mlx5_malloc(0, rxqs_n * sizeof(unsigned int), 0,
+				    SOCKET_ID_ANY);
 	if (!rss_queue_arr) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS queue list (%u)",
 			dev->data->port_id, rxqs_n);
@@ -163,7 +166,7 @@
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rss_queue_n);
 		rte_errno = EINVAL;
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return -rte_errno;
 	}
 	DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
@@ -179,7 +182,7 @@
 				rss_queue_n));
 	ret = mlx5_rss_reta_index_resize(dev, reta_idx_n);
 	if (ret) {
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return ret;
 	}
 	/*
@@ -192,7 +195,7 @@
 		if (++j == rss_queue_n)
 			j = 0;
 	}
-	rte_free(rss_queue_arr);
+	mlx5_free(rss_queue_arr);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..cce6ce5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -32,6 +32,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -4010,7 +4011,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct rte_flow_action_set_tag) +
 			   sizeof(struct rte_flow_action_jump);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4046,7 +4048,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		 */
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct mlx5_flow_action_copy_mreg);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4140,7 +4143,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 	 * by flow_drv_destroy.
 	 */
 	flow_qrss_free_id(dev, qrss_id);
-	rte_free(ext_actions);
+	mlx5_free(ext_actions);
 	return ret;
 }
 
@@ -4205,7 +4208,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 #define METER_SUFFIX_ITEM 4
 		item_size = sizeof(struct rte_flow_item) * METER_SUFFIX_ITEM +
 			    sizeof(struct mlx5_rte_flow_item_tag) * 2;
-		sfx_actions = rte_zmalloc(__func__, (act_size + item_size), 0);
+		sfx_actions = mlx5_malloc(MLX5_MEM_ZERO, (act_size + item_size),
+					  0, SOCKET_ID_ANY);
 		if (!sfx_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4244,7 +4248,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 					 external, flow_idx, error);
 exit:
 	if (sfx_actions)
-		rte_free(sfx_actions);
+		mlx5_free(sfx_actions);
 	return ret;
 }
 
@@ -4658,8 +4662,8 @@ struct rte_flow *
 		}
 		if (priv_fdir_flow) {
 			LIST_REMOVE(priv_fdir_flow, next);
-			rte_free(priv_fdir_flow->fdir);
-			rte_free(priv_fdir_flow);
+			mlx5_free(priv_fdir_flow->fdir);
+			mlx5_free(priv_fdir_flow);
 		}
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx);
@@ -4799,11 +4803,12 @@ struct rte_flow *
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (!priv->inter_flows) {
-		priv->inter_flows = rte_calloc(__func__, 1,
+		priv->inter_flows = mlx5_malloc(MLX5_MEM_ZERO,
 				    MLX5_NUM_MAX_DEV_FLOWS *
 				    sizeof(struct mlx5_flow) +
 				    (sizeof(struct mlx5_flow_rss_desc) +
-				    sizeof(uint16_t) * UINT16_MAX) * 2, 0);
+				    sizeof(uint16_t) * UINT16_MAX) * 2, 0,
+				    SOCKET_ID_ANY);
 		if (!priv->inter_flows) {
 			DRV_LOG(ERR, "can't allocate intermediate memory.");
 			return;
@@ -4827,7 +4832,7 @@ struct rte_flow *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	rte_free(priv->inter_flows);
+	mlx5_free(priv->inter_flows);
 	priv->inter_flows = NULL;
 }
 
@@ -5467,7 +5472,8 @@ struct rte_flow *
 	uint32_t flow_idx;
 	int ret;
 
-	fdir_flow = rte_zmalloc(__func__, sizeof(*fdir_flow), 0);
+	fdir_flow = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*fdir_flow), 0,
+				SOCKET_ID_ANY);
 	if (!fdir_flow) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -5480,8 +5486,9 @@ struct rte_flow *
 		rte_errno = EEXIST;
 		goto error;
 	}
-	priv_fdir_flow = rte_zmalloc(__func__, sizeof(struct mlx5_fdir_flow),
-				     0);
+	priv_fdir_flow = mlx5_malloc(MLX5_MEM_ZERO,
+				     sizeof(struct mlx5_fdir_flow),
+				     0, SOCKET_ID_ANY);
 	if (!priv_fdir_flow) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -5500,8 +5507,8 @@ struct rte_flow *
 		dev->data->port_id, (void *)flow);
 	return 0;
 error:
-	rte_free(priv_fdir_flow);
-	rte_free(fdir_flow);
+	mlx5_free(priv_fdir_flow);
+	mlx5_free(fdir_flow);
 	return -rte_errno;
 }
 
@@ -5541,8 +5548,8 @@ struct rte_flow *
 	LIST_REMOVE(priv_fdir_flow, next);
 	flow_idx = priv_fdir_flow->rix_flow;
 	flow_list_destroy(dev, &priv->flows, flow_idx);
-	rte_free(priv_fdir_flow->fdir);
-	rte_free(priv_fdir_flow);
+	mlx5_free(priv_fdir_flow->fdir);
+	mlx5_free(priv_fdir_flow);
 	DRV_LOG(DEBUG, "port %u deleted FDIR flow %u",
 		dev->data->port_id, flow_idx);
 	return 0;
@@ -5587,8 +5594,8 @@ struct rte_flow *
 		priv_fdir_flow = LIST_FIRST(&priv->fdir_flows);
 		LIST_REMOVE(priv_fdir_flow, next);
 		flow_list_destroy(dev, &priv->flows, priv_fdir_flow->rix_flow);
-		rte_free(priv_fdir_flow->fdir);
-		rte_free(priv_fdir_flow);
+		mlx5_free(priv_fdir_flow->fdir);
+		mlx5_free(priv_fdir_flow);
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..7c121d6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -32,6 +32,7 @@
 
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -2615,7 +2616,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, domain, cache_resource,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2772,7 +2773,7 @@ struct field_modify_info modify_tcp[] = {
 				(priv->sh->fdb_domain, resource->port_id,
 				 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2851,7 +2852,7 @@ struct field_modify_info modify_tcp[] = {
 					(domain, resource->vlan_tag,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4024,8 +4025,9 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new modify-header resource. */
-	cache_resource = rte_calloc(__func__, 1,
-				    sizeof(*cache_resource) + actions_len, 0);
+	cache_resource = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(*cache_resource) + actions_len, 0,
+				    SOCKET_ID_ANY);
 	if (!cache_resource)
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -4036,7 +4038,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, ns, cache_resource,
 					 actions_len, &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4175,7 +4177,8 @@ struct field_modify_info modify_tcp[] = {
 			MLX5_COUNTERS_PER_POOL +
 			sizeof(struct mlx5_counter_stats_raw)) * raws_n +
 			sizeof(struct mlx5_counter_stats_mem_mng);
-	uint8_t *mem = rte_calloc(__func__, 1, size, sysconf(_SC_PAGESIZE));
+	uint8_t *mem = mlx5_malloc(MLX5_MEM_ZERO, size, sysconf(_SC_PAGESIZE),
+				  SOCKET_ID_ANY);
 	int i;
 
 	if (!mem) {
@@ -4188,7 +4191,7 @@ struct field_modify_info modify_tcp[] = {
 						 IBV_ACCESS_LOCAL_WRITE);
 	if (!mem_mng->umem) {
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mkey_attr.addr = (uintptr_t)mem;
@@ -4207,7 +4210,7 @@ struct field_modify_info modify_tcp[] = {
 	if (!mem_mng->dm) {
 		mlx5_glue->devx_umem_dereg(mem_mng->umem);
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mem_mng->raws = (struct mlx5_counter_stats_raw *)(mem + size);
@@ -4244,7 +4247,7 @@ struct field_modify_info modify_tcp[] = {
 	void *old_pools = cont->pools;
 	uint32_t resize = cont->n + MLX5_CNT_CONTAINER_RESIZE;
 	uint32_t mem_size = sizeof(struct mlx5_flow_counter_pool *) * resize;
-	void *pools = rte_calloc(__func__, 1, mem_size, 0);
+	void *pools = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY);
 
 	if (!pools) {
 		rte_errno = ENOMEM;
@@ -4263,7 +4266,7 @@ struct field_modify_info modify_tcp[] = {
 		mem_mng = flow_dv_create_counter_stat_mem_mng(dev,
 			  MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES);
 		if (!mem_mng) {
-			rte_free(pools);
+			mlx5_free(pools);
 			return -ENOMEM;
 		}
 		for (i = 0; i < MLX5_MAX_PENDING_QUERIES; ++i)
@@ -4278,7 +4281,7 @@ struct field_modify_info modify_tcp[] = {
 	cont->pools = pools;
 	rte_spinlock_unlock(&cont->resize_sl);
 	if (old_pools)
-		rte_free(old_pools);
+		mlx5_free(old_pools);
 	return 0;
 }
 
@@ -4367,7 +4370,7 @@ struct field_modify_info modify_tcp[] = {
 	size += MLX5_COUNTERS_PER_POOL * CNT_SIZE;
 	size += (batch ? 0 : MLX5_COUNTERS_PER_POOL * CNTEXT_SIZE);
 	size += (!age ? 0 : MLX5_COUNTERS_PER_POOL * AGE_SIZE);
-	pool = rte_calloc(__func__, 1, size, 0);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -7467,7 +7470,8 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new matcher. */
-	cache_matcher = rte_calloc(__func__, 1, sizeof(*cache_matcher), 0);
+	cache_matcher = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*cache_matcher), 0,
+				    SOCKET_ID_ANY);
 	if (!cache_matcher) {
 		flow_dv_tbl_resource_release(dev, tbl);
 		return rte_flow_error_set(error, ENOMEM,
@@ -7483,7 +7487,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &cache_matcher->matcher_object);
 	if (ret) {
-		rte_free(cache_matcher);
+		mlx5_free(cache_matcher);
 #ifdef HAVE_MLX5DV_DR
 		flow_dv_tbl_resource_release(dev, tbl);
 #endif
@@ -7558,7 +7562,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_action_tag(tag_be24,
 						  &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -7567,7 +7571,7 @@ struct field_modify_info modify_tcp[] = {
 	rte_atomic32_inc(&cache_resource->refcnt);
 	if (mlx5_hlist_insert(sh->tag_table, &cache_resource->entry)) {
 		mlx5_flow_os_destroy_flow_action(cache_resource->action);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, EEXIST,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot insert tag");
@@ -8769,7 +8773,7 @@ struct field_modify_info modify_tcp[] = {
 		LIST_REMOVE(matcher, next);
 		/* table ref-- in release interface. */
 		flow_dv_tbl_resource_release(dev, matcher->tbl);
-		rte_free(matcher);
+		mlx5_free(matcher);
 		DRV_LOG(DEBUG, "port %u matcher %p: removed",
 			dev->data->port_id, (void *)matcher);
 		return 0;
@@ -8911,7 +8915,7 @@ struct field_modify_info modify_tcp[] = {
 		claim_zero(mlx5_flow_os_destroy_flow_action
 						(cache_resource->action));
 		LIST_REMOVE(cache_resource, next);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		DRV_LOG(DEBUG, "modify-header resource %p: removed",
 			(void *)cache_resource);
 		return 0;
@@ -9284,7 +9288,7 @@ struct field_modify_info modify_tcp[] = {
 		flow_dv_tbl_resource_release(dev, mtd->transfer.sfx_tbl);
 	if (mtd->drop_actn)
 		claim_zero(mlx5_flow_os_destroy_flow_action(mtd->drop_actn));
-	rte_free(mtd);
+	mlx5_free(mtd);
 	return 0;
 }
 
@@ -9417,7 +9421,7 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
-	mtb = rte_calloc(__func__, 1, sizeof(*mtb), 0);
+	mtb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mtb), 0, SOCKET_ID_ANY);
 	if (!mtb) {
 		DRV_LOG(ERR, "Failed to allocate memory for meter.");
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 86c334b..bf34687 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -10,6 +10,7 @@
 #include <rte_mtr_driver.h>
 
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
@@ -356,8 +357,8 @@
 	if (ret)
 		return ret;
 	/* Meter profile memory allocation. */
-	fmp = rte_calloc(__func__, 1, sizeof(struct mlx5_flow_meter_profile),
-			 RTE_CACHE_LINE_SIZE);
+	fmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flow_meter_profile),
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (fmp == NULL)
 		return -rte_mtr_error_set(error, ENOMEM,
 					  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
@@ -374,7 +375,7 @@
 	TAILQ_INSERT_TAIL(fmps, fmp, next);
 	return 0;
 error:
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return ret;
 }
 
@@ -417,7 +418,7 @@
 					  NULL, "Meter profile is in use.");
 	/* Remove from list. */
 	TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return 0;
 }
 
@@ -1286,7 +1287,7 @@ struct mlx5_flow_meter *
 		MLX5_ASSERT(!fmp->ref_cnt);
 		/* Remove from list. */
 		TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-		rte_free(fmp);
+		mlx5_free(fmp);
 	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 781c97f..72106b4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -28,6 +28,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -188,14 +189,15 @@
 			/* Resize the container pool array. */
 			size = sizeof(struct mlx5_flow_counter_pool *) *
 				     (n_valid + MLX5_CNT_CONTAINER_RESIZE);
-			pools = rte_zmalloc(__func__, size, 0);
+			pools = mlx5_malloc(MLX5_MEM_ZERO, size, 0,
+					    SOCKET_ID_ANY);
 			if (!pools)
 				return 0;
 			if (n_valid) {
 				memcpy(pools, cont->pools,
 				       sizeof(struct mlx5_flow_counter_pool *) *
 				       n_valid);
-				rte_free(cont->pools);
+				mlx5_free(cont->pools);
 			}
 			cont->pools = pools;
 			cont->n += MLX5_CNT_CONTAINER_RESIZE;
@@ -203,7 +205,7 @@
 		/* Allocate memory for new pool*/
 		size = sizeof(*pool) + (sizeof(*cnt_ext) + sizeof(*cnt)) *
 		       MLX5_COUNTERS_PER_POOL;
-		pool = rte_calloc(__func__, 1, size, 0);
+		pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 		if (!pool)
 			return 0;
 		pool->type |= CNT_POOL_TYPE_EXT;
diff --git a/drivers/net/mlx5/mlx5_mp.c b/drivers/net/mlx5/mlx5_mp.c
index a2b5c40..cf6e33b 100644
--- a/drivers/net/mlx5/mlx5_mp.c
+++ b/drivers/net/mlx5/mlx5_mp.c
@@ -12,6 +12,7 @@
 
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -181,7 +182,7 @@
 		}
 	}
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index 653b069..a49edbc 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -21,6 +21,8 @@
 #include <rte_malloc.h>
 #include <rte_ethdev_driver.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_defs.h"
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -57,8 +59,10 @@
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
-		priv->rss_conf.rss_key = rte_realloc(priv->rss_conf.rss_key,
-						     rss_conf->rss_key_len, 0);
+		priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+						      MLX5_MEM_RTE,
+						      rss_conf->rss_key_len,
+						      0, SOCKET_ID_ANY);
 		if (!priv->rss_conf.rss_key) {
 			rte_errno = ENOMEM;
 			return -rte_errno;
@@ -131,8 +135,9 @@
 	if (priv->reta_idx_n == reta_size)
 		return 0;
 
-	mem = rte_realloc(priv->reta_idx,
-			  reta_size * sizeof((*priv->reta_idx)[0]), 0);
+	mem = mlx5_realloc(priv->reta_idx, MLX5_MEM_RTE,
+			   reta_size * sizeof((*priv->reta_idx)[0]), 0,
+			   SOCKET_ID_ANY);
 	if (!mem) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b436f06..c8e3a82 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -31,6 +31,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -734,7 +735,9 @@
 	if (!dev->data->dev_conf.intr_conf.rxq)
 		return 0;
 	mlx5_rx_intr_vec_disable(dev);
-	intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
+	intr_handle->intr_vec = mlx5_malloc(0,
+				n * sizeof(intr_handle->intr_vec[0]),
+				0, SOCKET_ID_ANY);
 	if (intr_handle->intr_vec == NULL) {
 		DRV_LOG(ERR,
 			"port %u failed to allocate memory for interrupt"
@@ -831,7 +834,7 @@
 free:
 	rte_intr_free_epoll_fd(intr_handle);
 	if (intr_handle->intr_vec)
-		free(intr_handle->intr_vec);
+		mlx5_free(intr_handle->intr_vec);
 	intr_handle->nb_efd = 0;
 	intr_handle->intr_vec = NULL;
 }
@@ -2187,8 +2190,8 @@ enum mlx5_rxq_type
 	struct mlx5_ind_table_obj *ind_tbl;
 	unsigned int i = 0, j = 0, k = 0;
 
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) +
-			     queues_n * sizeof(uint16_t), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+			      queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2231,8 +2234,9 @@ enum mlx5_rxq_type
 			      log2above(queues_n) :
 			      log2above(priv->config.ind_table_max_size));
 
-		rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
-				      rqt_n * sizeof(uint32_t), 0);
+		rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
+				      rqt_n * sizeof(uint32_t), 0,
+				      SOCKET_ID_ANY);
 		if (!rqt_attr) {
 			DRV_LOG(ERR, "port %u cannot allocate RQT resources",
 				dev->data->port_id);
@@ -2254,7 +2258,7 @@ enum mlx5_rxq_type
 			rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
 		ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
 							rqt_attr);
-		rte_free(rqt_attr);
+		mlx5_free(rqt_attr);
 		if (!ind_tbl->rqt) {
 			DRV_LOG(ERR, "port %u cannot create DevX RQT",
 				dev->data->port_id);
@@ -2269,7 +2273,7 @@ enum mlx5_rxq_type
 error:
 	for (j = 0; j < i; j++)
 		mlx5_rxq_release(dev, ind_tbl->queues[j]);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	DEBUG("port %u cannot create indirection table", dev->data->port_id);
 	return NULL;
 }
@@ -2339,7 +2343,7 @@ enum mlx5_rxq_type
 		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
 	if (!rte_atomic32_read(&ind_tbl->refcnt)) {
 		LIST_REMOVE(ind_tbl, next);
-		rte_free(ind_tbl);
+		mlx5_free(ind_tbl);
 		return 0;
 	}
 	return 1;
@@ -2761,7 +2765,7 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	rxq = rte_calloc(__func__, 1, sizeof(*rxq), 0);
+	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
 	if (!rxq) {
 		DEBUG("port %u cannot allocate drop Rx queue memory",
 		      dev->data->port_id);
@@ -2799,7 +2803,7 @@ enum mlx5_rxq_type
 		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
 	if (rxq->cq)
 		claim_zero(mlx5_glue->destroy_cq(rxq->cq));
-	rte_free(rxq);
+	mlx5_free(rxq);
 	priv->drop_queue.rxq = NULL;
 }
 
@@ -2837,7 +2841,8 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
+			      SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -2863,7 +2868,7 @@ enum mlx5_rxq_type
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
 	mlx5_rxq_obj_drop_release(dev);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	priv->drop_queue.hrxq->ind_table = NULL;
 }
 
@@ -2888,7 +2893,7 @@ struct mlx5_hrxq *
 		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
 		return priv->drop_queue.hrxq;
 	}
-	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq), 0);
+	hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
 	if (!hrxq) {
 		DRV_LOG(WARNING,
 			"port %u cannot allocate memory for drop queue",
@@ -2945,7 +2950,7 @@ struct mlx5_hrxq *
 		mlx5_ind_table_obj_drop_release(dev);
 	if (hrxq) {
 		priv->drop_queue.hrxq = NULL;
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 	}
 	return NULL;
 }
@@ -2968,7 +2973,7 @@ struct mlx5_hrxq *
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
 		mlx5_ind_table_obj_drop_release(dev);
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 		priv->drop_queue.hrxq = NULL;
 	}
 }
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index bf67192..25e8b27 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -5,6 +5,8 @@
 #include <rte_malloc.h>
 #include <rte_hash_crc.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_utils.h"
 
 struct mlx5_hlist *
@@ -27,7 +29,8 @@ struct mlx5_hlist *
 	alloc_size = sizeof(struct mlx5_hlist) +
 		     sizeof(struct mlx5_hlist_head) * act_size;
 	/* Using zmalloc, then no need to initialize the heads. */
-	h = rte_zmalloc(name, alloc_size, RTE_CACHE_LINE_SIZE);
+	h = mlx5_malloc(MLX5_MEM_ZERO, alloc_size, RTE_CACHE_LINE_SIZE,
+			SOCKET_ID_ANY);
 	if (!h) {
 		DRV_LOG(ERR, "No memory for hash list %s creation",
 			name ? name : "None");
@@ -112,10 +115,10 @@ struct mlx5_hlist_entry *
 			if (cb)
 				cb(entry, ctx);
 			else
-				rte_free(entry);
+				mlx5_free(entry);
 		}
 	}
-	rte_free(h);
+	mlx5_free(h);
 }
 
 static inline void
@@ -193,16 +196,17 @@ struct mlx5_indexed_pool *
 	    (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) ||
 	    ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32))))
 		return NULL;
-	pool = rte_zmalloc("mlx5_ipool", sizeof(*pool) + cfg->grow_trunk *
-				sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool) + cfg->grow_trunk *
+			   sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE,
+			   SOCKET_ID_ANY);
 	if (!pool)
 		return NULL;
 	pool->cfg = *cfg;
 	if (!pool->cfg.trunk_size)
 		pool->cfg.trunk_size = MLX5_IPOOL_DEFAULT_TRUNK_SIZE;
 	if (!cfg->malloc && !cfg->free) {
-		pool->cfg.malloc = rte_malloc_socket;
-		pool->cfg.free = rte_free;
+		pool->cfg.malloc = mlx5_malloc;
+		pool->cfg.free = mlx5_free;
 	}
 	pool->free_list = TRUNK_INVALID;
 	if (pool->cfg.need_lock)
@@ -237,10 +241,9 @@ struct mlx5_indexed_pool *
 		int n_grow = pool->n_trunk_valid ? pool->n_trunk :
 			     RTE_CACHE_LINE_SIZE / sizeof(void *);
 
-		p = pool->cfg.malloc(pool->cfg.type,
-				 (pool->n_trunk_valid + n_grow) *
-				 sizeof(struct mlx5_indexed_trunk *),
-				 RTE_CACHE_LINE_SIZE, rte_socket_id());
+		p = pool->cfg.malloc(0, (pool->n_trunk_valid + n_grow) *
+				     sizeof(struct mlx5_indexed_trunk *),
+				     RTE_CACHE_LINE_SIZE, rte_socket_id());
 		if (!p)
 			return -ENOMEM;
 		if (pool->trunks)
@@ -268,7 +271,7 @@ struct mlx5_indexed_pool *
 	/* rte_bitmap requires memory cacheline aligned. */
 	trunk_size += RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size);
 	trunk_size += bmp_size;
-	trunk = pool->cfg.malloc(pool->cfg.type, trunk_size,
+	trunk = pool->cfg.malloc(0, trunk_size,
 				 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!trunk)
 		return -ENOMEM;
@@ -464,7 +467,7 @@ struct mlx5_indexed_pool *
 	if (!pool->trunks)
 		pool->cfg.free(pool->trunks);
 	mlx5_ipool_unlock(pool);
-	rte_free(pool);
+	mlx5_free(pool);
 	return 0;
 }
 
@@ -493,15 +496,16 @@ struct mlx5_l3t_tbl *
 		.grow_shift = 1,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 	};
 
 	if (type >= MLX5_L3T_TYPE_MAX) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
-	tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_tbl), 1);
+	tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_l3t_tbl), 1,
+			  SOCKET_ID_ANY);
 	if (!tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -532,7 +536,7 @@ struct mlx5_l3t_tbl *
 	tbl->eip = mlx5_ipool_create(&l3t_ip_cfg);
 	if (!tbl->eip) {
 		rte_errno = ENOMEM;
-		rte_free(tbl);
+		mlx5_free(tbl);
 		tbl = NULL;
 	}
 	return tbl;
@@ -565,17 +569,17 @@ struct mlx5_l3t_tbl *
 					break;
 			}
 			MLX5_ASSERT(!m_tbl->ref_cnt);
-			rte_free(g_tbl->tbl[i]);
+			mlx5_free(g_tbl->tbl[i]);
 			g_tbl->tbl[i] = 0;
 			if (!(--g_tbl->ref_cnt))
 				break;
 		}
 		MLX5_ASSERT(!g_tbl->ref_cnt);
-		rte_free(tbl->tbl);
+		mlx5_free(tbl->tbl);
 		tbl->tbl = 0;
 	}
 	mlx5_ipool_destroy(tbl->eip);
-	rte_free(tbl);
+	mlx5_free(tbl);
 }
 
 uint32_t
@@ -667,11 +671,11 @@ struct mlx5_l3t_tbl *
 		m_tbl->tbl[(idx >> MLX5_L3T_MT_OFFSET) & MLX5_L3T_MT_MASK] =
 									NULL;
 		if (!(--m_tbl->ref_cnt)) {
-			rte_free(m_tbl);
+			mlx5_free(m_tbl);
 			g_tbl->tbl
 			[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK] = NULL;
 			if (!(--g_tbl->ref_cnt)) {
-				rte_free(g_tbl);
+				mlx5_free(g_tbl);
 				tbl->tbl = 0;
 			}
 		}
@@ -693,8 +697,10 @@ struct mlx5_l3t_tbl *
 	/* Check the global table, create it if empty. */
 	g_tbl = tbl->tbl;
 	if (!g_tbl) {
-		g_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1);
+		g_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!g_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
@@ -707,8 +713,10 @@ struct mlx5_l3t_tbl *
 	 */
 	m_tbl = g_tbl->tbl[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK];
 	if (!m_tbl) {
-		m_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1);
+		m_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!m_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index c4b9063..562b9b1 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -193,7 +193,7 @@ struct mlx5_indexed_pool_config {
 	/* Lock is needed for multiple thread usage. */
 	uint32_t release_mem_en:1; /* Rlease trunk when it is free. */
 	const char *type; /* Memory allocate type name. */
-	void *(*malloc)(const char *type, size_t size, unsigned int align,
+	void *(*malloc)(uint32_t flags, size_t size, unsigned int align,
 			int socket);
 	/* User defined memory allocator. */
 	void (*free)(void *addr); /* User defined memory release. */
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index f65e416..4308b71 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -33,6 +33,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_nl.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
@@ -288,7 +289,8 @@ struct mlx5_nl_vlan_vmwa_context *
 		 */
 		return NULL;
 	}
-	vmwa = rte_zmalloc(__func__, sizeof(*vmwa), sizeof(uint32_t));
+	vmwa = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*vmwa), sizeof(uint32_t),
+			   SOCKET_ID_ANY);
 	if (!vmwa) {
 		DRV_LOG(WARNING,
 			"Can not allocate memory"
@@ -300,7 +302,7 @@ struct mlx5_nl_vlan_vmwa_context *
 		DRV_LOG(WARNING,
 			"Can not create Netlink socket"
 			" for VLAN workaround context");
-		rte_free(vmwa);
+		mlx5_free(vmwa);
 		return NULL;
 	}
 	vmwa->vf_ifindex = ifindex;
@@ -323,5 +325,5 @@ void mlx5_vlan_vmwa_exit(struct mlx5_nl_vlan_vmwa_context *vmwa)
 	}
 	if (vmwa->nl_socket >= 0)
 		close(vmwa->nl_socket);
-	rte_free(vmwa);
+	mlx5_free(vmwa);
 }
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 4/7] common/mlx5: convert control path memory to unified malloc
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (2 preceding siblings ...)
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 5/7] common/mlx5: convert data path objects " Suanming Mou
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the control path objects' memory with the unified
malloc function.

These objects are all used during instance initialization, so the change
does not affect the data path.
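For reference, the dispatch the series relies on can be sketched as below. Only the names (mlx5_malloc, mlx5_free, MLX5_MEM_ZERO, MLX5_MEM_RTE) come from the patch; the bodies, the flag values, and the sketch_ prefix are illustrative stand-ins, with plain malloc() modeling both system memory and rte hugepage memory, and sys_mem_en standing in for the devarg taken from the first device:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative flag values; the real ones live in mlx5_malloc.h. */
#define MLX5_MEM_ZERO (1u << 1)
#define MLX5_MEM_RTE  (1u << 2)

/* Models the sys_mem_en devarg (read once, from the first device). */
static unsigned int sys_mem_en = 1;

static void *
sketch_mlx5_malloc(uint32_t flags, size_t size, unsigned int align,
		   int socket)
{
	void *buf;

	(void)socket; /* rte_malloc_socket() would honor this; malloc() cannot. */
	if (!(flags & MLX5_MEM_RTE) && sys_mem_en) {
		/* Default path with sys_mem_en=1: system memory. */
		buf = align ? aligned_alloc(align, size) : malloc(size);
	} else {
		/* MLX5_MEM_RTE keeps datapath objects on rte hugepage
		 * memory; modeled here with malloc() as well. */
		buf = malloc(size);
	}
	if (buf && (flags & MLX5_MEM_ZERO))
		memset(buf, 0, size);
	return buf;
}

static void
sketch_mlx5_free(void *addr)
{
	free(addr); /* the real helper tracks which allocator owns addr */
}
```

Callers that must stay on hugepage memory (e.g. the RSS key realloc above) pass MLX5_MEM_RTE explicitly; everything else falls through to system memory when the devarg is set.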

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/linux/mlx5_glue.c | 13 +++---
 drivers/common/mlx5/linux/mlx5_nl.c   |  5 ++-
 drivers/common/mlx5/mlx5_common_mp.c  |  7 ++--
 drivers/common/mlx5/mlx5_devx_cmds.c  | 75 +++++++++++++++++++----------------
 4 files changed, 55 insertions(+), 45 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 395519d..48d2808 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -184,7 +184,7 @@
 		res = ibv_destroy_flow_action(attr->action);
 		break;
 	}
-	free(action);
+	mlx5_free(action);
 	return res;
 #endif
 #else
@@ -617,7 +617,7 @@
 	struct mlx5dv_flow_action_attr *action;
 
 	(void)offset;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX;
@@ -641,7 +641,7 @@
 #else
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_DEST_IBV_QP;
@@ -686,7 +686,7 @@
 
 	(void)domain;
 	(void)flags;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -726,7 +726,7 @@
 	(void)flags;
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -755,7 +755,8 @@
 	return mlx5dv_dr_action_create_tag(tag);
 #else /* HAVE_MLX5DV_DR */
 	struct mlx5dv_flow_action_attr *action;
-	action = malloc(sizeof(*action));
+
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_TAG;
diff --git a/drivers/common/mlx5/linux/mlx5_nl.c b/drivers/common/mlx5/linux/mlx5_nl.c
index dc504d8..8ab7f6b 100644
--- a/drivers/common/mlx5/linux/mlx5_nl.c
+++ b/drivers/common/mlx5/linux/mlx5_nl.c
@@ -22,6 +22,7 @@
 
 #include "mlx5_nl.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 #ifdef HAVE_DEVLINK
 #include <linux/devlink.h>
 #endif
@@ -330,7 +331,7 @@ struct mlx5_nl_ifindex_data {
 	     void *arg)
 {
 	struct sockaddr_nl sa;
-	void *buf = malloc(MLX5_RECV_BUF_SIZE);
+	void *buf = mlx5_malloc(0, MLX5_RECV_BUF_SIZE, 0, SOCKET_ID_ANY);
 	struct iovec iov = {
 		.iov_base = buf,
 		.iov_len = MLX5_RECV_BUF_SIZE,
@@ -393,7 +394,7 @@ struct mlx5_nl_ifindex_data {
 		}
 	} while (multipart);
 exit:
-	free(buf);
+	mlx5_free(buf);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_common_mp.c b/drivers/common/mlx5/mlx5_common_mp.c
index da55143..40e3956 100644
--- a/drivers/common/mlx5/mlx5_common_mp.c
+++ b/drivers/common/mlx5/mlx5_common_mp.c
@@ -11,6 +11,7 @@
 
 #include "mlx5_common_mp.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 /**
  * Request Memory Region creation to the primary process.
@@ -49,7 +50,7 @@
 	ret = res->result;
 	if (ret)
 		rte_errno = -ret;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -89,7 +90,7 @@
 	mp_res = &mp_rep.msgs[0];
 	res = (struct mlx5_mp_param *)mp_res->param;
 	ret = res->result;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -136,7 +137,7 @@
 	DRV_LOG(DEBUG, "port %u command FD from primary is %d",
 		mp_id->port_id, ret);
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 2179a83..af2863e 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -9,6 +9,7 @@
 #include "mlx5_prm.h"
 #include "mlx5_devx_cmds.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 
 /**
@@ -28,7 +29,8 @@
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
 {
-	struct mlx5_devx_obj *dcs = rte_zmalloc("dcs", sizeof(*dcs), 0);
+	struct mlx5_devx_obj *dcs = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dcs),
+						0, SOCKET_ID_ANY);
 	uint32_t in[MLX5_ST_SZ_DW(alloc_flow_counter_in)]   = {0};
 	uint32_t out[MLX5_ST_SZ_DW(alloc_flow_counter_out)] = {0};
 
@@ -44,7 +46,7 @@ struct mlx5_devx_obj *
 	if (!dcs->obj) {
 		DRV_LOG(ERR, "Can't allocate counters - error %d", errno);
 		rte_errno = errno;
-		rte_free(dcs);
+		mlx5_free(dcs);
 		return NULL;
 	}
 	dcs->id = MLX5_GET(alloc_flow_counter_out, out, flow_counter_id);
@@ -149,7 +151,8 @@ struct mlx5_devx_obj *
 	uint32_t in[in_size_dw];
 	uint32_t out[MLX5_ST_SZ_DW(create_mkey_out)] = {0};
 	void *mkc;
-	struct mlx5_devx_obj *mkey = rte_zmalloc("mkey", sizeof(*mkey), 0);
+	struct mlx5_devx_obj *mkey = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mkey),
+						 0, SOCKET_ID_ANY);
 	size_t pgsize;
 	uint32_t translation_size;
 
@@ -208,7 +211,7 @@ struct mlx5_devx_obj *
 		DRV_LOG(ERR, "Can't create %sdirect mkey - error %d\n",
 			klm_num ? "an in" : "a ", errno);
 		rte_errno = errno;
-		rte_free(mkey);
+		mlx5_free(mkey);
 		return NULL;
 	}
 	mkey->id = MLX5_GET(create_mkey_out, out, mkey_index);
@@ -260,7 +263,7 @@ struct mlx5_devx_obj *
 	if (!obj)
 		return 0;
 	ret =  mlx5_glue->devx_obj_destroy(obj->obj);
-	rte_free(obj);
+	mlx5_free(obj);
 	return ret;
 }
 
@@ -671,7 +674,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *rq = NULL;
 
-	rq = rte_calloc_socket(__func__, 1, sizeof(*rq), 0, socket);
+	rq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rq), 0, socket);
 	if (!rq) {
 		DRV_LOG(ERR, "Failed to allocate RQ data");
 		rte_errno = ENOMEM;
@@ -699,7 +702,7 @@ struct mlx5_devx_obj *
 	if (!rq->obj) {
 		DRV_LOG(ERR, "Failed to create RQ using DevX");
 		rte_errno = errno;
-		rte_free(rq);
+		mlx5_free(rq);
 		return NULL;
 	}
 	rq->id = MLX5_GET(create_rq_out, out, rqn);
@@ -776,7 +779,7 @@ struct mlx5_devx_obj *
 	void *tir_ctx, *outer, *inner, *rss_key;
 	struct mlx5_devx_obj *tir = NULL;
 
-	tir = rte_calloc(__func__, 1, sizeof(*tir), 0);
+	tir = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tir), 0, SOCKET_ID_ANY);
 	if (!tir) {
 		DRV_LOG(ERR, "Failed to allocate TIR data");
 		rte_errno = ENOMEM;
@@ -818,7 +821,7 @@ struct mlx5_devx_obj *
 	if (!tir->obj) {
 		DRV_LOG(ERR, "Failed to create TIR using DevX");
 		rte_errno = errno;
-		rte_free(tir);
+		mlx5_free(tir);
 		return NULL;
 	}
 	tir->id = MLX5_GET(create_tir_out, out, tirn);
@@ -848,17 +851,17 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *rqt = NULL;
 	int i;
 
-	in = rte_calloc(__func__, 1, inlen, 0);
+	in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	if (!in) {
 		DRV_LOG(ERR, "Failed to allocate RQT IN data");
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	rqt = rte_calloc(__func__, 1, sizeof(*rqt), 0);
+	rqt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt), 0, SOCKET_ID_ANY);
 	if (!rqt) {
 		DRV_LOG(ERR, "Failed to allocate RQT data");
 		rte_errno = ENOMEM;
-		rte_free(in);
+		mlx5_free(in);
 		return NULL;
 	}
 	MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT);
@@ -869,11 +872,11 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	rqt->obj = mlx5_glue->devx_obj_create(ctx, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (!rqt->obj) {
 		DRV_LOG(ERR, "Failed to create RQT using DevX");
 		rte_errno = errno;
-		rte_free(rqt);
+		mlx5_free(rqt);
 		return NULL;
 	}
 	rqt->id = MLX5_GET(create_rqt_out, out, rqtn);
@@ -898,7 +901,7 @@ struct mlx5_devx_obj *
 	uint32_t inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) +
 			 rqt_attr->rqt_actual_size * sizeof(uint32_t);
 	uint32_t out[MLX5_ST_SZ_DW(modify_rqt_out)] = {0};
-	uint32_t *in = rte_calloc(__func__, 1, inlen, 0);
+	uint32_t *in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	void *rqt_ctx;
 	int i;
 	int ret;
@@ -918,7 +921,7 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	ret = mlx5_glue->devx_obj_modify(rqt->obj, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify RQT using DevX.");
 		rte_errno = errno;
@@ -951,7 +954,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *sq = NULL;
 
-	sq = rte_calloc(__func__, 1, sizeof(*sq), 0);
+	sq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*sq), 0, SOCKET_ID_ANY);
 	if (!sq) {
 		DRV_LOG(ERR, "Failed to allocate SQ data");
 		rte_errno = ENOMEM;
@@ -985,7 +988,7 @@ struct mlx5_devx_obj *
 	if (!sq->obj) {
 		DRV_LOG(ERR, "Failed to create SQ using DevX");
 		rte_errno = errno;
-		rte_free(sq);
+		mlx5_free(sq);
 		return NULL;
 	}
 	sq->id = MLX5_GET(create_sq_out, out, sqn);
@@ -1049,7 +1052,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *tis = NULL;
 	void *tis_ctx;
 
-	tis = rte_calloc(__func__, 1, sizeof(*tis), 0);
+	tis = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tis), 0, SOCKET_ID_ANY);
 	if (!tis) {
 		DRV_LOG(ERR, "Failed to allocate TIS object");
 		rte_errno = ENOMEM;
@@ -1069,7 +1072,7 @@ struct mlx5_devx_obj *
 	if (!tis->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(tis);
+		mlx5_free(tis);
 		return NULL;
 	}
 	tis->id = MLX5_GET(create_tis_out, out, tisn);
@@ -1091,7 +1094,7 @@ struct mlx5_devx_obj *
 	uint32_t out[MLX5_ST_SZ_DW(alloc_transport_domain_out)] = {0};
 	struct mlx5_devx_obj *td = NULL;
 
-	td = rte_calloc(__func__, 1, sizeof(*td), 0);
+	td = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*td), 0, SOCKET_ID_ANY);
 	if (!td) {
 		DRV_LOG(ERR, "Failed to allocate TD object");
 		rte_errno = ENOMEM;
@@ -1104,7 +1107,7 @@ struct mlx5_devx_obj *
 	if (!td->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(td);
+		mlx5_free(td);
 		return NULL;
 	}
 	td->id = MLX5_GET(alloc_transport_domain_out, out,
@@ -1168,8 +1171,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_cq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_cq_out)] = {0};
-	struct mlx5_devx_obj *cq_obj = rte_zmalloc(__func__, sizeof(*cq_obj),
-						   0);
+	struct mlx5_devx_obj *cq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*cq_obj),
+						   0, SOCKET_ID_ANY);
 	void *cqctx = MLX5_ADDR_OF(create_cq_in, in, cq_context);
 
 	if (!cq_obj) {
@@ -1203,7 +1207,7 @@ struct mlx5_devx_obj *
 	if (!cq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create CQ using DevX errno=%d.", errno);
-		rte_free(cq_obj);
+		mlx5_free(cq_obj);
 		return NULL;
 	}
 	cq_obj->id = MLX5_GET(create_cq_out, out, cqn);
@@ -1227,8 +1231,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *virtq_obj = rte_zmalloc(__func__,
-						     sizeof(*virtq_obj), 0);
+	struct mlx5_devx_obj *virtq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						     sizeof(*virtq_obj),
+						     0, SOCKET_ID_ANY);
 	void *virtq = MLX5_ADDR_OF(create_virtq_in, in, virtq);
 	void *hdr = MLX5_ADDR_OF(create_virtq_in, in, hdr);
 	void *virtctx = MLX5_ADDR_OF(virtio_net_q, virtq, virtio_q_context);
@@ -1276,7 +1281,7 @@ struct mlx5_devx_obj *
 	if (!virtq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create VIRTQ Obj using DevX.");
-		rte_free(virtq_obj);
+		mlx5_free(virtq_obj);
 		return NULL;
 	}
 	virtq_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
@@ -1398,8 +1403,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_qp_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_qp_out)] = {0};
-	struct mlx5_devx_obj *qp_obj = rte_zmalloc(__func__, sizeof(*qp_obj),
-						   0);
+	struct mlx5_devx_obj *qp_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*qp_obj),
+						   0, SOCKET_ID_ANY);
 	void *qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
 
 	if (!qp_obj) {
@@ -1454,7 +1460,7 @@ struct mlx5_devx_obj *
 	if (!qp_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create QP Obj using DevX.");
-		rte_free(qp_obj);
+		mlx5_free(qp_obj);
 		return NULL;
 	}
 	qp_obj->id = MLX5_GET(create_qp_out, out, qpn);
@@ -1550,8 +1556,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
-						       sizeof(*couners_obj), 0);
+	struct mlx5_devx_obj *couners_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						       sizeof(*couners_obj), 0,
+						       SOCKET_ID_ANY);
 	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
 
 	if (!couners_obj) {
@@ -1569,7 +1576,7 @@ struct mlx5_devx_obj *
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
 			" DevX.");
-		rte_free(couners_obj);
+		mlx5_free(couners_obj);
 		return NULL;
 	}
 	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 5/7] common/mlx5: convert data path objects to unified malloc
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (3 preceding siblings ...)
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: " Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: convert configuration " Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the data path object page and B-tree table memory
from the unified malloc function with the explicit MLX5_MEM_RTE flag.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/mlx5_common.c    | 10 ++++++----
 drivers/common/mlx5/mlx5_common_mr.c | 31 +++++++++++++++----------------
 2 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 693e2c6..17168e6 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -13,6 +13,7 @@
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 int mlx5_common_logtype;
 
@@ -169,8 +170,9 @@ static inline void mlx5_cpu_id(unsigned int level,
 	struct mlx5_devx_dbr_page *page;
 
 	/* Allocate space for door-bell page and management data. */
-	page = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_devx_dbr_page),
-				 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_devx_dbr_page),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!page) {
 		DRV_LOG(ERR, "cannot allocate dbr page");
 		return NULL;
@@ -180,7 +182,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 					      MLX5_DBR_PAGE_SIZE, 0);
 	if (!page->umem) {
 		DRV_LOG(ERR, "cannot umem reg dbr page");
-		rte_free(page);
+		mlx5_free(page);
 		return NULL;
 	}
 	return page;
@@ -261,7 +263,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 		LIST_REMOVE(page, next);
 		if (page->umem)
 			ret = -mlx5_glue->devx_umem_dereg(page->umem);
-		rte_free(page);
+		mlx5_free(page);
 	} else {
 		/* Mark in bitmap that this door-bell is not in use. */
 		offset /= MLX5_DBR_SIZE;
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 564d618..23324c0 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -12,6 +12,7 @@
 #include "mlx5_common_mp.h"
 #include "mlx5_common_mr.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 struct mr_find_contig_memsegs_data {
 	uintptr_t addr;
@@ -47,7 +48,8 @@ struct mr_find_contig_memsegs_data {
 	 * Initially cache_bh[] will be given practically enough space and once
 	 * it is expanded, expansion wouldn't be needed again ever.
 	 */
-	mem = rte_realloc(bt->table, n * sizeof(struct mr_cache_entry), 0);
+	mem = mlx5_realloc(bt->table, MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   n * sizeof(struct mr_cache_entry), 0, SOCKET_ID_ANY);
 	if (mem == NULL) {
 		/* Not an error, B-tree search will be skipped. */
 		DRV_LOG(WARNING, "failed to expand MR B-tree (%p) table",
@@ -180,9 +182,9 @@ struct mr_find_contig_memsegs_data {
 	}
 	MLX5_ASSERT(!bt->table && !bt->size);
 	memset(bt, 0, sizeof(*bt));
-	bt->table = rte_calloc_socket("B-tree table",
-				      n, sizeof(struct mr_cache_entry),
-				      0, socket);
+	bt->table = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				sizeof(struct mr_cache_entry) * n,
+				0, socket);
 	if (bt->table == NULL) {
 		rte_errno = ENOMEM;
 		DEBUG("failed to allocate memory for btree cache on socket %d",
@@ -212,7 +214,7 @@ struct mr_find_contig_memsegs_data {
 		return;
 	DEBUG("freeing B-tree %p with table %p",
 	      (void *)bt, (void *)bt->table);
-	rte_free(bt->table);
+	mlx5_free(bt->table);
 	memset(bt, 0, sizeof(*bt));
 }
 
@@ -443,7 +445,7 @@ struct mlx5_mr *
 	dereg_mr_cb(&mr->pmd_mr);
 	if (mr->ms_bmp != NULL)
 		rte_bitmap_free(mr->ms_bmp);
-	rte_free(mr);
+	mlx5_free(mr);
 }
 
 void
@@ -650,11 +652,9 @@ struct mlx5_mr *
 	      (void *)addr, data.start, data.end, msl->page_sz, ms_n);
 	/* Size of memory for bitmap. */
 	bmp_size = rte_bitmap_get_memory_footprint(ms_n);
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE) +
-				bmp_size,
-				RTE_CACHE_LINE_SIZE, msl->socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE |  MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE) +
+			 bmp_size, RTE_CACHE_LINE_SIZE, msl->socket_id);
 	if (mr == NULL) {
 		DEBUG("Unable to allocate memory for a new MR of"
 		      " address (%p).", (void *)addr);
@@ -1033,10 +1033,9 @@ struct mlx5_mr *
 {
 	struct mlx5_mr *mr = NULL;
 
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE),
-				RTE_CACHE_LINE_SIZE, socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE),
+			 RTE_CACHE_LINE_SIZE, socket_id);
 	if (mr == NULL)
 		return NULL;
 	reg_mr_cb(pd, (void *)addr, len, &mr->pmd_mr);
@@ -1044,7 +1043,7 @@ struct mlx5_mr *
 		DRV_LOG(WARNING,
 			"Fail to create MR for address (%p)",
 			(void *)addr);
-		rte_free(mr);
+		mlx5_free(mr);
 		return NULL;
 	}
 	mr->msl = NULL; /* Mark it is external memory. */
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 6/7] net/mlx5: convert configuration objects to unified malloc
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (4 preceding siblings ...)
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 5/7] common/mlx5: convert data path objects " Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the miscellaneous configuration objects from the
unified malloc function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/linux/mlx5_ethdev_os.c |  8 +++++---
 drivers/net/mlx5/linux/mlx5_os.c        | 26 +++++++++++++-------------
 drivers/net/mlx5/mlx5.c                 | 14 +++++++-------
 3 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 701614a..6b8a151 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -38,6 +38,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -1162,8 +1163,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	eeprom = rte_calloc(__func__, 1,
-			    (sizeof(struct ethtool_eeprom) + info->length), 0);
+	eeprom = mlx5_malloc(MLX5_MEM_ZERO,
+			     (sizeof(struct ethtool_eeprom) + info->length), 0,
+			     SOCKET_ID_ANY);
 	if (!eeprom) {
 		DRV_LOG(WARNING, "port %u cannot allocate memory for "
 			"eeprom data", dev->data->port_id);
@@ -1182,6 +1184,6 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 			dev->data->port_id, strerror(rte_errno));
 	else
 		rte_memcpy(info->data, eeprom->data, info->length);
-	rte_free(eeprom);
+	mlx5_free(eeprom);
 	return ret;
 }
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index d5acef0..1698f2c 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -163,7 +163,7 @@
 		socket = ctrl->socket;
 	}
 	MLX5_ASSERT(data != NULL);
-	ret = rte_malloc_socket(__func__, size, alignment, socket);
+	ret = mlx5_malloc(0, size, alignment, socket);
 	if (!ret && size)
 		rte_errno = ENOMEM;
 	return ret;
@@ -181,7 +181,7 @@
 mlx5_free_verbs_buf(void *ptr, void *data __rte_unused)
 {
 	MLX5_ASSERT(data != NULL);
-	rte_free(ptr);
+	mlx5_free(ptr);
 }
 
 /**
@@ -618,9 +618,9 @@
 			mlx5_glue->port_state_str(port_attr.state),
 			port_attr.state);
 	/* Allocate private eth device data. */
-	priv = rte_zmalloc("ethdev private structure",
+	priv = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			   sizeof(*priv),
-			   RTE_CACHE_LINE_SIZE);
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (priv == NULL) {
 		DRV_LOG(ERR, "priv allocation failure");
 		err = ENOMEM;
@@ -1109,7 +1109,7 @@
 			mlx5_flow_id_pool_release(priv->qrss_id_pool);
 		if (own_domain_id)
 			claim_zero(rte_eth_switch_domain_free(priv->domain_id));
-		rte_free(priv);
+		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
 	}
@@ -1428,10 +1428,10 @@
 	 * Now we can determine the maximal
 	 * amount of devices to be spawned.
 	 */
-	list = rte_zmalloc("device spawn data",
-			 sizeof(struct mlx5_dev_spawn_data) *
-			 (np ? np : nd),
-			 RTE_CACHE_LINE_SIZE);
+	list = mlx5_malloc(MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_dev_spawn_data) *
+			   (np ? np : nd),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!list) {
 		DRV_LOG(ERR, "spawn data array allocation failure");
 		rte_errno = ENOMEM;
@@ -1722,7 +1722,7 @@
 	if (nl_route >= 0)
 		close(nl_route);
 	if (list)
-		rte_free(list);
+		mlx5_free(list);
 	MLX5_ASSERT(ibv_list);
 	mlx5_glue->free_device_list(ibv_list);
 	return ret;
@@ -2200,8 +2200,8 @@
 	/* Allocate memory to grab stat names and values. */
 	str_sz = dev_stats_n * ETH_GSTRING_LEN;
 	strings = (struct ethtool_gstrings *)
-		  rte_malloc("xstats_strings",
-			     str_sz + sizeof(struct ethtool_gstrings), 0);
+		  mlx5_malloc(0, str_sz + sizeof(struct ethtool_gstrings), 0,
+			      SOCKET_ID_ANY);
 	if (!strings) {
 		DRV_LOG(WARNING, "port %u unable to allocate memory for xstats",
 		     dev->data->port_id);
@@ -2251,7 +2251,7 @@
 	mlx5_os_read_dev_stat(priv, "out_of_buffer", &stats_ctrl->imissed_base);
 	stats_ctrl->imissed = 0;
 free:
-	rte_free(strings);
+	mlx5_free(strings);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index ba86c68..daf65f3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -644,11 +644,11 @@ struct mlx5_dev_ctx_shared *
 	}
 	/* No device found, we have to create new shared context. */
 	MLX5_ASSERT(spawn->max_port);
-	sh = rte_zmalloc("ethdev shared ib context",
+	sh = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			 sizeof(struct mlx5_dev_ctx_shared) +
 			 spawn->max_port *
 			 sizeof(struct mlx5_dev_shared_port),
-			 RTE_CACHE_LINE_SIZE);
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!sh) {
 		DRV_LOG(ERR, "shared context allocation failure");
 		rte_errno  = ENOMEM;
@@ -764,7 +764,7 @@ struct mlx5_dev_ctx_shared *
 		claim_zero(mlx5_glue->close_device(sh->ctx));
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
-	rte_free(sh);
+	mlx5_free(sh);
 	MLX5_ASSERT(err > 0);
 	rte_errno = err;
 	return NULL;
@@ -829,7 +829,7 @@ struct mlx5_dev_ctx_shared *
 		claim_zero(mlx5_glue->close_device(sh->ctx));
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
-	rte_free(sh);
+	mlx5_free(sh);
 exit:
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 }
@@ -1089,8 +1089,8 @@ struct mlx5_dev_ctx_shared *
 	 */
 	ppriv_size =
 		sizeof(struct mlx5_proc_priv) + priv->txqs_n * sizeof(void *);
-	ppriv = rte_malloc_socket("mlx5_proc_priv", ppriv_size,
-				  RTE_CACHE_LINE_SIZE, dev->device->numa_node);
+	ppriv = mlx5_malloc(MLX5_MEM_RTE, ppriv_size, RTE_CACHE_LINE_SIZE,
+			    dev->device->numa_node);
 	if (!ppriv) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -1111,7 +1111,7 @@ struct mlx5_dev_ctx_shared *
 {
 	if (!dev->process_private)
 		return;
-	rte_free(dev->process_private);
+	mlx5_free(dev->process_private);
 	dev->process_private = NULL;
 }
 
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v2 7/7] net/mlx5: convert Rx/Tx queue objects to unified malloc
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (5 preceding siblings ...)
  2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: convert configuration " Suanming Mou
@ 2020-07-16  9:20   ` Suanming Mou
  6 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-16  9:20 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the Rx/Tx queue objects from the unified malloc
function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 37 ++++++++++++++++++-------------------
 drivers/net/mlx5/mlx5_txq.c | 44 +++++++++++++++++++++-----------------------
 2 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c8e3a82..9c9cc3a 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -641,7 +641,7 @@
 rxq_release_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	if (rxq_ctrl->rxq.wqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
 		rxq_ctrl->rxq.wqes = NULL;
 	}
 	if (rxq_ctrl->wq_umem) {
@@ -707,7 +707,7 @@
 			claim_zero(mlx5_glue->destroy_comp_channel
 				   (rxq_obj->channel));
 		LIST_REMOVE(rxq_obj, next);
-		rte_free(rxq_obj);
+		mlx5_free(rxq_obj);
 		return 0;
 	}
 	return 1;
@@ -1233,15 +1233,15 @@
 	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
 	wq_size = wqe_n * wqe_size;
-	buf = rte_calloc_socket(__func__, 1, wq_size, MLX5_WQE_BUF_ALIGNMENT,
-				rxq_ctrl->socket);
+	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
+			  MLX5_WQE_BUF_ALIGNMENT, rxq_ctrl->socket);
 	if (!buf)
 		return NULL;
 	rxq_data->wqes = buf;
 	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
 						     buf, wq_size, 0);
 	if (!rxq_ctrl->wq_umem) {
-		rte_free(buf);
+		mlx5_free(buf);
 		return NULL;
 	}
 	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
@@ -1275,8 +1275,8 @@
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(!rxq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1294,7 +1294,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -1314,7 +1314,7 @@
 		DRV_LOG(ERR,
 			"port %u Rx hairpin queue %u can't create rq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -1362,8 +1362,8 @@ struct mlx5_rxq_obj *
 		return mlx5_rxq_obj_hairpin_new(dev, idx);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1503,7 +1503,7 @@ struct mlx5_rxq_obj *
 		if (tmpl->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
 							(tmpl->channel));
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = ret; /* Restore rte_errno. */
 	}
 	if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
@@ -1825,10 +1825,8 @@ struct mlx5_rxq_ctrl *
 		rte_errno = ENOSPC;
 		return NULL;
 	}
-	tmpl = rte_calloc_socket("RXQ", 1,
-				 sizeof(*tmpl) +
-				 desc_n * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc_n * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2007,7 +2005,7 @@ struct mlx5_rxq_ctrl *
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -2033,7 +2031,8 @@ struct mlx5_rxq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("RXQ", 1, sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2112,7 +2111,7 @@ struct mlx5_rxq_ctrl *
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
-		rte_free(rxq_ctrl);
+		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
 		return 0;
 	}
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 35b3ade..ac9e455 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -31,6 +31,7 @@
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -521,8 +522,8 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(!txq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 txq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   txq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory resources",
@@ -541,7 +542,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -561,7 +562,7 @@
 		DRV_LOG(ERR,
 			"port %u tx hairpin queue %u can't create sq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -715,8 +716,9 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	txq_obj = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_obj), 0,
-				    txq_ctrl->socket);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
 	if (!txq_obj) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory",
 			dev->data->port_id, idx);
@@ -758,11 +760,9 @@ struct mlx5_txq_obj *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->fcqs = rte_calloc_socket(__func__,
-					   txq_data->cqe_s,
-					   sizeof(*txq_data->fcqs),
-					   RTE_CACHE_LINE_SIZE,
-					   txq_ctrl->socket);
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
 	if (!txq_data->fcqs) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
 			dev->data->port_id, idx);
@@ -818,9 +818,9 @@ struct mlx5_txq_obj *
 	if (tmpl.qp)
 		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
 	if (txq_data->fcqs)
-		rte_free(txq_data->fcqs);
+		mlx5_free(txq_data->fcqs);
 	if (txq_obj)
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	rte_errno = ret; /* Restore rte_errno. */
 	return NULL;
@@ -874,10 +874,10 @@ struct mlx5_txq_obj *
 			claim_zero(mlx5_glue->destroy_qp(txq_obj->qp));
 			claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
 				if (txq_obj->txq_ctrl->txq.fcqs)
-					rte_free(txq_obj->txq_ctrl->txq.fcqs);
+					mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
 		}
 		LIST_REMOVE(txq_obj, next);
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 		return 0;
 	}
 	return 1;
@@ -1293,10 +1293,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl) +
-				 desc * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1336,7 +1334,7 @@ struct mlx5_txq_ctrl *
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -1362,8 +1360,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1432,7 +1430,7 @@ struct mlx5_txq_ctrl *
 		txq_free_elts(txq);
 		mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq, next);
-		rte_free(txq);
+		mlx5_free(txq);
 		(*priv->txqs)[idx] = NULL;
 		return 0;
 	}
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg
  2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                   ` (7 preceding siblings ...)
  2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
@ 2020-07-17 13:50 ` Suanming Mou
  2020-07-17 13:50   ` [dpdk-dev] [PATCH v3 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
                     ` (7 more replies)
  8 siblings, 8 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:50 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, for the MLX5 PMD, once millions of flows are created, their
memory consumption becomes very large. A system with limited memory then
has to reserve most of its memory as huge page memory in advance to
serve those flows, and other normal applications lose any chance to use
the reserved memory. Since most of the time the system does not hold
that many flows, the reserved huge page memory is largely wasted.

The new sys_mem_en devarg, when set to true, lets the PMD allocate
memory from the system by default through the newly added mlx5 memory
management functions. Only when the MLX5_MEM_RTE flag is set is the
memory allocated from rte; otherwise it is allocated from the system.

In this case, a system with limited memory no longer needs to reserve
most of its memory as hugepages. Reserving just the memory needed for
the datapath objects, which are allocated with the explicit flag, is
enough; the rest is allocated from the system. A system with plenty of
memory need not care about the devarg: memory will always come from
rte hugepages.

One restriction: for a DPDK application with multiple PCI devices, if
the sys_mem_en devargs differ between the devices, sys_mem_en only
takes the value from the first device's devargs, and a warning message
is printed.

---

v3:
 - Rebase on top of latest code.

v2:
 - Add memory function call statistic.
 - Change msl to atomic.

---

Suanming Mou (7):
  common/mlx5: add mlx5 memory management functions
  net/mlx5: add allocate memory from system devarg
  net/mlx5: convert control path memory to unified malloc
  common/mlx5: convert control path memory to unified malloc
  common/mlx5: convert data path objects to unified malloc
  net/mlx5: convert configuration objects to unified malloc
  net/mlx5: convert Rx/Tx queue objects to unified malloc

 doc/guides/nics/mlx5.rst                        |   7 +
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/linux/mlx5_glue.c           |  13 +-
 drivers/common/mlx5/linux/mlx5_nl.c             |   5 +-
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_common.c               |  10 +-
 drivers/common/mlx5/mlx5_common_mp.c            |   7 +-
 drivers/common/mlx5/mlx5_common_mr.c            |  31 ++-
 drivers/common/mlx5/mlx5_devx_cmds.c            |  82 ++++---
 drivers/common/mlx5/mlx5_malloc.c               | 306 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  99 ++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   6 +
 drivers/net/mlx5/linux/mlx5_ethdev_os.c         |   8 +-
 drivers/net/mlx5/linux/mlx5_os.c                |  28 ++-
 drivers/net/mlx5/mlx5.c                         | 108 +++++----
 drivers/net/mlx5/mlx5.h                         |   1 +
 drivers/net/mlx5/mlx5_ethdev.c                  |  15 +-
 drivers/net/mlx5/mlx5_flow.c                    |  45 ++--
 drivers/net/mlx5/mlx5_flow_dv.c                 |  46 ++--
 drivers/net/mlx5/mlx5_flow_meter.c              |  11 +-
 drivers/net/mlx5/mlx5_flow_verbs.c              |   8 +-
 drivers/net/mlx5/mlx5_mp.c                      |   3 +-
 drivers/net/mlx5/mlx5_rss.c                     |  13 +-
 drivers/net/mlx5/mlx5_rxq.c                     |  74 +++---
 drivers/net/mlx5/mlx5_txpp.c                    |  30 +--
 drivers/net/mlx5/mlx5_txq.c                     |  82 +++----
 drivers/net/mlx5/mlx5_utils.c                   |  60 +++--
 drivers/net/mlx5/mlx5_utils.h                   |   2 +-
 drivers/net/mlx5/mlx5_vlan.c                    |   8 +-
 29 files changed, 797 insertions(+), 313 deletions(-)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v3 1/7] common/mlx5: add mlx5 memory management functions
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
@ 2020-07-17 13:50   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:50 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Add the internal mlx5 memory management functions:

mlx5_malloc_mem_select();
mlx5_memory_stat_dump();
mlx5_realloc();
mlx5_malloc();
mlx5_free();

With these unified functions, users can manage memory either from the
system or from rte memory.

In this case, a system with limited memory, which cannot reserve lots
of rte hugepage memory in advance, can allocate memory from the system
for the less critical control path objects, based on the sys_mem_en
configuration.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/Makefile                    |   1 +
 drivers/common/mlx5/meson.build                 |   1 +
 drivers/common/mlx5/mlx5_malloc.c               | 306 ++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_malloc.h               |  99 ++++++++
 drivers/common/mlx5/rte_common_mlx5_version.map |   6 +
 5 files changed, 413 insertions(+)
 create mode 100644 drivers/common/mlx5/mlx5_malloc.c
 create mode 100644 drivers/common/mlx5/mlx5_malloc.h

diff --git a/drivers/common/mlx5/Makefile b/drivers/common/mlx5/Makefile
index f9dc376..96a2dae 100644
--- a/drivers/common/mlx5/Makefile
+++ b/drivers/common/mlx5/Makefile
@@ -21,6 +21,7 @@ SRCS-y += linux/mlx5_nl.c
 SRCS-y += linux/mlx5_common_verbs.c
 SRCS-y += mlx5_common_mp.c
 SRCS-y += mlx5_common_mr.c
+SRCS-y += mlx5_malloc.c
 ifeq ($(CONFIG_RTE_IBVERBS_LINK_DLOPEN),y)
 INSTALL-y-lib += $(LIB_GLUE)
 endif
diff --git a/drivers/common/mlx5/meson.build b/drivers/common/mlx5/meson.build
index ba43714..70e2c1c 100644
--- a/drivers/common/mlx5/meson.build
+++ b/drivers/common/mlx5/meson.build
@@ -13,6 +13,7 @@ sources += files(
 	'mlx5_common.c',
 	'mlx5_common_mp.c',
 	'mlx5_common_mr.c',
+	'mlx5_malloc.c',
 )
 
 cflags_options = [
diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c
new file mode 100644
index 0000000..316305d
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.c
@@ -0,0 +1,306 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#include <errno.h>
+#include <rte_malloc.h>
+#include <malloc.h>
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_atomic.h>
+
+#include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
+
+struct mlx5_sys_mem {
+	uint32_t init:1; /* Memory allocator initialized. */
+	uint32_t enable:1; /* System memory select. */
+	uint32_t reserve:30; /* Reserve. */
+	union {
+		struct rte_memseg_list *last_msl;
+		rte_atomic64_t a64_last_msl;
+	};
+	/* last allocated rte memory memseg list. */
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	rte_atomic64_t malloc_sys;
+	/* Memory allocated from system count. */
+	rte_atomic64_t malloc_rte;
+	/* Memory allocated from hugepage count. */
+	rte_atomic64_t realloc_sys;
+	/* Memory reallocate from system count. */
+	rte_atomic64_t realloc_rte;
+	/* Memory reallocate from hugepage count. */
+	rte_atomic64_t free_sys;
+	/* Memory free to system count. */
+	rte_atomic64_t free_rte;
+	/* Memory free to hugepage count. */
+	rte_atomic64_t msl_miss;
+	/* MSL miss count. */
+	rte_atomic64_t msl_update;
+	/* MSL update count. */
+#endif
+};
+
+/* Initialize the default as disabled. */
+static struct mlx5_sys_mem mlx5_sys_mem = {
+	.init = 0,
+	.enable = 0,
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	.malloc_sys = RTE_ATOMIC64_INIT(0),
+	.malloc_rte = RTE_ATOMIC64_INIT(0),
+	.realloc_sys = RTE_ATOMIC64_INIT(0),
+	.realloc_rte = RTE_ATOMIC64_INIT(0),
+	.free_sys = RTE_ATOMIC64_INIT(0),
+	.free_rte = RTE_ATOMIC64_INIT(0),
+	.msl_miss = RTE_ATOMIC64_INIT(0),
+	.msl_update = RTE_ATOMIC64_INIT(0),
+#endif
+};
+
+/**
+ * Check if the address belongs to memory seg list.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ * @param msl
+ *   Memory seg list.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_check_msl(void *addr, struct rte_memseg_list *msl)
+{
+	void *start, *end;
+
+	if (!msl)
+		return false;
+	start = msl->base_va;
+	end = RTE_PTR_ADD(start, msl->len);
+	if (addr >= start && addr < end)
+		return true;
+	return false;
+}
+
+/**
+ * Update the msl if memory belongs to new msl.
+ *
+ * @param addr
+ *   Memory address.
+ */
+static void
+mlx5_mem_update_msl(void *addr)
+{
+	/*
+	 * Update the cache msl if the new addr comes from the new msl
+	 * different with the cached msl.
+	 */
+	if (addr && !mlx5_mem_check_msl(addr,
+	    (struct rte_memseg_list *)(uintptr_t)rte_atomic64_read
+	    (&mlx5_sys_mem.a64_last_msl))) {
+		rte_atomic64_set(&mlx5_sys_mem.a64_last_msl,
+			(int64_t)(uintptr_t)rte_mem_virt2memseg_list(addr));
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.msl_update);
+#endif
+	}
+}
+
+/**
+ * Check if the address belongs to rte memory.
+ *
+ * @param addr
+ *   Memory address to be checked.
+ *
+ * @return
+ *   True if it belongs, false otherwise.
+ */
+static bool
+mlx5_mem_is_rte(void *addr)
+{
+	/*
+	 * Check if the last cache msl matches. Drop to slow path
+	 * to check if the memory belongs to rte memory.
+	 */
+	if (!mlx5_mem_check_msl(addr, (struct rte_memseg_list *)(uintptr_t)
+	    rte_atomic64_read(&mlx5_sys_mem.a64_last_msl))) {
+		if (!rte_mem_virt2memseg_list(addr))
+			return false;
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.msl_miss);
+#endif
+	}
+	return true;
+}
+
+/**
+ * Allocate memory with alignment.
+ *
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param zero
+ *   Clear the allocated memory or not.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+static void *
+mlx5_alloc_align(size_t size, unsigned int align, unsigned int zero)
+{
+	void *buf;
+	buf = memalign(align, size);
+	if (!buf) {
+		DRV_LOG(ERR, "Couldn't allocate buf.\n");
+		return NULL;
+	}
+	if (zero)
+		memset(buf, 0, size);
+	return buf;
+}
+
+void *
+mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket)
+{
+	void *addr;
+	bool rte_mem;
+
+	/*
+	 * If neither system memory nor rte memory is required, allocate
+	 * memory according to mlx5_sys_mem.enable.
+	 */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	if (rte_mem) {
+		if (flags & MLX5_MEM_ZERO)
+			addr = rte_zmalloc_socket(NULL, size, align, socket);
+		else
+			addr = rte_malloc_socket(NULL, size, align, socket);
+		mlx5_mem_update_msl(addr);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		if (addr)
+			rte_atomic64_inc(&mlx5_sys_mem.malloc_rte);
+#endif
+		return addr;
+	}
+	/* The memory will be allocated from system. */
+	if (align)
+		addr = mlx5_alloc_align(size, align, !!(flags & MLX5_MEM_ZERO));
+	else if (flags & MLX5_MEM_ZERO)
+		addr = calloc(1, size);
+	else
+		addr = malloc(size);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	if (addr)
+		rte_atomic64_inc(&mlx5_sys_mem.malloc_sys);
+#endif
+	return addr;
+}
+
+void *
+mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+	     int socket)
+{
+	void *new_addr;
+	bool rte_mem;
+
+	/* Allocate directly if old memory address is NULL. */
+	if (!addr)
+		return mlx5_malloc(flags, size, align, socket);
+	/* Get the memory type. */
+	if (flags & MLX5_MEM_RTE)
+		rte_mem = true;
+	else if (flags & MLX5_MEM_SYS)
+		rte_mem = false;
+	else
+		rte_mem = mlx5_sys_mem.enable ? false : true;
+	/* Check if old memory and to be allocated memory are the same type. */
+	if (rte_mem != mlx5_mem_is_rte(addr)) {
+		DRV_LOG(ERR, "Couldn't reallocate to different memory type.");
+		return NULL;
+	}
+	/* Allocate memory from rte memory. */
+	if (rte_mem) {
+		new_addr = rte_realloc_socket(addr, size, align, socket);
+		mlx5_mem_update_msl(new_addr);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		if (new_addr)
+			rte_atomic64_inc(&mlx5_sys_mem.realloc_rte);
+#endif
+		return new_addr;
+	}
+	/* Align is not supported for system memory. */
+	if (align) {
+		DRV_LOG(ERR, "Couldn't reallocate with alignment");
+		return NULL;
+	}
+	new_addr = realloc(addr, size);
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	if (new_addr)
+		rte_atomic64_inc(&mlx5_sys_mem.realloc_sys);
+#endif
+	return new_addr;
+}
+
+void
+mlx5_free(void *addr)
+{
+	if (addr == NULL)
+		return;
+	if (!mlx5_mem_is_rte(addr)) {
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.free_sys);
+#endif
+		free(addr);
+	} else {
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		rte_atomic64_inc(&mlx5_sys_mem.free_rte);
+#endif
+		rte_free(addr);
+	}
+}
+
+void
+mlx5_memory_stat_dump(void)
+{
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	DRV_LOG(INFO, "System memory malloc:%"PRIi64", realloc:%"PRIi64","
+		" free:%"PRIi64"\nRTE memory malloc:%"PRIi64","
+		" realloc:%"PRIi64", free:%"PRIi64"\nMSL miss:%"PRIi64","
+		" update:%"PRIi64"",
+		rte_atomic64_read(&mlx5_sys_mem.malloc_sys),
+		rte_atomic64_read(&mlx5_sys_mem.realloc_sys),
+		rte_atomic64_read(&mlx5_sys_mem.free_sys),
+		rte_atomic64_read(&mlx5_sys_mem.malloc_rte),
+		rte_atomic64_read(&mlx5_sys_mem.realloc_rte),
+		rte_atomic64_read(&mlx5_sys_mem.free_rte),
+		rte_atomic64_read(&mlx5_sys_mem.msl_miss),
+		rte_atomic64_read(&mlx5_sys_mem.msl_update));
+#endif
+}
+
+void
+mlx5_malloc_mem_select(uint32_t sys_mem_en)
+{
+	/*
+	 * The initialization should be called only once and all devices
+	 * should use the same memory type. Otherwise, when new device is
+	 * being attached with some different memory allocation configuration,
+	 * the memory will get wrong behavior or a failure will be raised.
+	 */
+	if (!mlx5_sys_mem.init) {
+		if (sys_mem_en)
+			mlx5_sys_mem.enable = 1;
+		mlx5_sys_mem.init = 1;
+		DRV_LOG(INFO, "%s is selected.", sys_mem_en ? "SYS_MEM" : "RTE_MEM");
+	} else if (mlx5_sys_mem.enable != sys_mem_en) {
+		DRV_LOG(WARNING, "%s is already selected.",
+			mlx5_sys_mem.enable ? "SYS_MEM" : "RTE_MEM");
+	}
+}
diff --git a/drivers/common/mlx5/mlx5_malloc.h b/drivers/common/mlx5/mlx5_malloc.h
new file mode 100644
index 0000000..d3e5f5b
--- /dev/null
+++ b/drivers/common/mlx5/mlx5_malloc.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2020 Mellanox Technologies, Ltd
+ */
+
+#ifndef MLX5_MALLOC_H_
+#define MLX5_MALLOC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+enum mlx5_mem_flags {
+	MLX5_MEM_ANY = 0,
+	/* Memory allocation depends on sys_mem_en. */
+	MLX5_MEM_SYS = 1 << 0,
+	/* Memory should be allocated from system. */
+	MLX5_MEM_RTE = 1 << 1,
+	/* Memory should be allocated from rte hugepage. */
+	MLX5_MEM_ZERO = 1 << 2,
+	/* Memory should be cleared to zero. */
+};
+
+/**
+ * Select the PMD memory allocate preference.
+ *
+ * Once sys_mem_en is set, memory is allocated from the system by
+ * default, unless an explicit flag requests the memory from rte
+ * hugepage memory.
+ *
+ * @param sys_mem_en
+ *   Use system memory or not.
+ */
+__rte_internal
+void mlx5_malloc_mem_select(uint32_t sys_mem_en);
+
+/**
+ * Dump the PMD memory usage statistic.
+ */
+__rte_internal
+void mlx5_memory_stat_dump(void);
+
+/**
+ * Memory allocate function.
+ *
+ * @param flags
+ *   The bits as enum mlx5_mem_flags defined.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket from which the memory should be allocated.
+ *   Valid only when allocating the memory from rte hugepage.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+__rte_internal
+void *mlx5_malloc(uint32_t flags, size_t size, unsigned int align, int socket);
+
+/**
+ * Memory reallocate function.
+ *
+ *
+ * @param addr
+ *   The memory to be reallocated.
+ * @param flags
+ *   The bits as enum mlx5_mem_flags defined.
+ * @param size
+ *   Memory size to be allocated.
+ * @param align
+ *   Memory alignment.
+ * @param socket
+ *   The socket from which the memory should be allocated.
+ *   Valid only when allocating the memory from rte hugepage.
+ *
+ * @return
+ *   Pointer of the allocated memory, NULL otherwise.
+ */
+
+__rte_internal
+void *mlx5_realloc(void *addr, uint32_t flags, size_t size, unsigned int align,
+		   int socket);
+
+/**
+ * Memory free function.
+ *
+ * @param addr
+ *   The memory address to be freed.
+ */
+__rte_internal
+void mlx5_free(void *addr);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index 5aad219..132a069 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -84,5 +84,11 @@ INTERNAL {
 	mlx5_release_dbr;
 
 	mlx5_translate_port_name;
+
+	mlx5_malloc_mem_select;
+	mlx5_memory_stat_dump;
+	mlx5_malloc;
+	mlx5_realloc;
+	mlx5_free;
 };
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 2/7] net/mlx5: add allocate memory from system devarg
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-17 13:50   ` [dpdk-dev] [PATCH v3 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

Currently, once the MLX5 PMD creates millions of flows, the memory
consumption of those flows is very large. On a system with limited
memory, this means most of the memory must be reserved in advance as
hugepage memory to serve the flows, and other applications then have
no chance to use that reserved memory. Since the system will not hold
lots of flows most of the time, the reserved hugepage memory is
largely wasted.

Once the new sys_mem_en devarg is set to true, the PMD allocates
memory from the system by default, using the newly added mlx5 memory
management functions. Only when the MLX5_MEM_RTE flag is set is the
memory allocated from rte; otherwise it is allocated from the system.

In this case, a system with limited memory no longer needs to reserve
most of its memory for hugepages. Reserving only the memory needed for
the datapath objects, which are allocated with the explicit flag, is
enough; other memory is allocated from the system. A system with
enough memory does not need to care about the devarg, as memory then
always comes from rte hugepages.

One restriction: for a DPDK application with multiple PCI devices, if
the sys_mem_en devargs differ between the devices, sys_mem_en only
takes the value from the first device's devargs, and a warning message
is printed.
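
As a usage illustration (the binary name, PCI address, and queue
counts are placeholders; `-w` is the device whitelist option of DPDK
releases of this era), the devarg would be passed per device like:

```shell
# Hypothetical invocation: sys_mem_en=1 makes the PMD default to
# system memory for control path objects on this device.
testpmd -w 0000:03:00.0,sys_mem_en=1 -- --rxq=2 --txq=2
```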

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 doc/guides/nics/mlx5.rst         | 7 +++++++
 drivers/net/mlx5/linux/mlx5_os.c | 2 ++
 drivers/net/mlx5/mlx5.c          | 6 ++++++
 drivers/net/mlx5/mlx5.h          | 1 +
 4 files changed, 16 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index c185129..a697d30 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -916,6 +916,13 @@ Driver options
 
   By default, the PMD will set this value to 0.
 
+- ``sys_mem_en`` parameter [int]
+
+  A nonzero value enables the PMD memory management functions to
+  allocate memory from the system by default, unless the rte memory
+  flag is set explicitly.
+
+  By default, the PMD will set this value to 0.
+
 .. _mlx5_firmware_config:
 
 Firmware configuration
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index f228bab..df0fae9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -43,6 +43,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -495,6 +496,7 @@
 			strerror(rte_errno));
 		goto error;
 	}
+	mlx5_malloc_mem_select(config.sys_mem_en);
 	sh = mlx5_alloc_shared_dev_ctx(spawn, &config);
 	if (!sh)
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 723c1dd..f39acd7 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -180,6 +180,9 @@
 /* Flow memory reclaim mode. */
 #define MLX5_RECLAIM_MEM "reclaim_mem_mode"
 
+/* The default memory allocator used in the PMD. */
+#define MLX5_SYS_MEM_EN "sys_mem_en"
+
 static const char *MZ_MLX5_PMD_SHARED_DATA = "mlx5_pmd_shared_data";
 
 /* Shared memory between primary and secondary processes. */
@@ -1533,6 +1536,8 @@ struct mlx5_dev_ctx_shared *
 			return -rte_errno;
 		}
 		config->reclaim_mode = tmp;
+	} else if (strcmp(MLX5_SYS_MEM_EN, key) == 0) {
+		config->sys_mem_en = !!tmp;
 	} else {
 		DRV_LOG(WARNING, "%s: unknown parameter", key);
 		rte_errno = EINVAL;
@@ -1591,6 +1596,7 @@ struct mlx5_dev_ctx_shared *
 		MLX5_CLASS_ARG_NAME,
 		MLX5_HP_BUF_SIZE,
 		MLX5_RECLAIM_MEM,
+		MLX5_SYS_MEM_EN,
 		NULL,
 	};
 	struct rte_kvargs *kvlist;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2e61d0c..4d90a19 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -217,6 +217,7 @@ struct mlx5_dev_config {
 	unsigned int dest_tir:1; /* Whether advanced DR API is available. */
 	unsigned int reclaim_mode:2; /* Memory reclaim mode. */
 	unsigned int rt_timestamp:1; /* realtime timestamp format. */
+	unsigned int sys_mem_en:1; /* The default memory allocator. */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
 		unsigned int stride_num_n; /* Number of strides. */
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 3/7] net/mlx5: convert control path memory to unified malloc
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
  2020-07-17 13:50   ` [dpdk-dev] [PATCH v3 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: " Suanming Mou
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit converts the control path memory allocations to the
unified malloc functions.

The converted objects are:

1. hlist;
2. rss key;
3. vlan vmwa;
4. indexed pool;
5. fdir objects;
6. meter profile;
7. flow counter pool;
8. hrxq and indirect table;
9. flow object cache resources;
10. temporary resources in flow create;

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c            | 88 ++++++++++++++++++++------------------
 drivers/net/mlx5/mlx5_ethdev.c     | 15 ++++---
 drivers/net/mlx5/mlx5_flow.c       | 45 +++++++++++--------
 drivers/net/mlx5/mlx5_flow_dv.c    | 46 +++++++++++---------
 drivers/net/mlx5/mlx5_flow_meter.c | 11 ++---
 drivers/net/mlx5/mlx5_flow_verbs.c |  8 ++--
 drivers/net/mlx5/mlx5_mp.c         |  3 +-
 drivers/net/mlx5/mlx5_rss.c        | 13 ++++--
 drivers/net/mlx5/mlx5_rxq.c        | 37 +++++++++-------
 drivers/net/mlx5/mlx5_utils.c      | 60 +++++++++++++++-----------
 drivers/net/mlx5/mlx5_utils.h      |  2 +-
 drivers/net/mlx5/mlx5_vlan.c       |  8 ++--
 12 files changed, 190 insertions(+), 146 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f39acd7..3390869 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -40,6 +40,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_os.h>
 #include <mlx5_common_mp.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -207,8 +208,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_encap_decap_ipool",
 	},
 	{
@@ -218,8 +219,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_push_vlan_ipool",
 	},
 	{
@@ -229,8 +230,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_tag_ipool",
 	},
 	{
@@ -240,8 +241,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_port_id_ipool",
 	},
 	{
@@ -251,8 +252,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_jump_ipool",
 	},
 #endif
@@ -263,8 +264,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_meter_ipool",
 	},
 	{
@@ -274,8 +275,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_mcp_ipool",
 	},
 	{
@@ -285,8 +286,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_hrxq_ipool",
 	},
 	{
@@ -300,8 +301,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.grow_shift = 2,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
 	},
 	{
@@ -309,8 +310,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =
 		.trunk_size = 4096,
 		.need_lock = 1,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 		.type = "rte_flow_ipool",
 	},
 };
@@ -336,15 +337,16 @@ struct mlx5_flow_id_pool *
 	struct mlx5_flow_id_pool *pool;
 	void *mem;
 
-	pool = rte_zmalloc("id pool allocation", sizeof(*pool),
-			   RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!pool) {
 		DRV_LOG(ERR, "can't allocate id pool");
 		rte_errno  = ENOMEM;
 		return NULL;
 	}
-	mem = rte_zmalloc("", MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
-			  RTE_CACHE_LINE_SIZE);
+	mem = mlx5_malloc(MLX5_MEM_ZERO,
+			  MLX5_FLOW_MIN_ID_POOL_SIZE * sizeof(uint32_t),
+			  RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!mem) {
 		DRV_LOG(ERR, "can't allocate mem for id pool");
 		rte_errno  = ENOMEM;
@@ -357,7 +359,7 @@ struct mlx5_flow_id_pool *
 	pool->max_id = max_id;
 	return pool;
 error:
-	rte_free(pool);
+	mlx5_free(pool);
 	return NULL;
 }
 
@@ -370,8 +372,8 @@ struct mlx5_flow_id_pool *
 void
 mlx5_flow_id_pool_release(struct mlx5_flow_id_pool *pool)
 {
-	rte_free(pool->free_arr);
-	rte_free(pool);
+	mlx5_free(pool->free_arr);
+	mlx5_free(pool);
 }
 
 /**
@@ -423,14 +425,15 @@ struct mlx5_flow_id_pool *
 		size = pool->curr - pool->free_arr;
 		size2 = size * MLX5_ID_GENERATION_ARRAY_FACTOR;
 		MLX5_ASSERT(size2 > size);
-		mem = rte_malloc("", size2 * sizeof(uint32_t), 0);
+		mem = mlx5_malloc(0, size2 * sizeof(uint32_t), 0,
+				  SOCKET_ID_ANY);
 		if (!mem) {
 			DRV_LOG(ERR, "can't allocate mem for id pool");
 			rte_errno  = ENOMEM;
 			return -rte_errno;
 		}
 		memcpy(mem, pool->free_arr, size * sizeof(uint32_t));
-		rte_free(pool->free_arr);
+		mlx5_free(pool->free_arr);
 		pool->free_arr = mem;
 		pool->curr = pool->free_arr + size;
 		pool->last = pool->free_arr + size2;
@@ -499,7 +502,7 @@ struct mlx5_flow_id_pool *
 	LIST_REMOVE(mng, next);
 	claim_zero(mlx5_devx_cmd_destroy(mng->dm));
 	claim_zero(mlx5_glue->devx_umem_dereg(mng->umem));
-	rte_free(mem);
+	mlx5_free(mem);
 }
 
 /**
@@ -547,10 +550,10 @@ struct mlx5_flow_id_pool *
 						    (pool, j)->dcs));
 			}
 			TAILQ_REMOVE(&sh->cmng.ccont[i].pool_list, pool, next);
-			rte_free(pool);
+			mlx5_free(pool);
 			pool = TAILQ_FIRST(&sh->cmng.ccont[i].pool_list);
 		}
-		rte_free(sh->cmng.ccont[i].pools);
+		mlx5_free(sh->cmng.ccont[i].pools);
 	}
 	mng = LIST_FIRST(&sh->cmng.mem_mngs);
 	while (mng) {
@@ -1000,7 +1003,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 1;
 	pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64);
@@ -1009,7 +1012,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	table_key.direction = 0;
 	table_key.domain = 1;
@@ -1019,7 +1022,7 @@ struct mlx5_dev_ctx_shared *
 					entry);
 		MLX5_ASSERT(tbl_data);
 		mlx5_hlist_remove(sh->flow_tbls, pos);
-		rte_free(tbl_data);
+		mlx5_free(tbl_data);
 	}
 	mlx5_hlist_destroy(sh->flow_tbls, NULL, NULL);
 }
@@ -1063,8 +1066,9 @@ struct mlx5_dev_ctx_shared *
 			.direction = 0,
 		}
 	};
-	struct mlx5_flow_tbl_data_entry *tbl_data = rte_zmalloc(NULL,
-							  sizeof(*tbl_data), 0);
+	struct mlx5_flow_tbl_data_entry *tbl_data = mlx5_malloc(MLX5_MEM_ZERO,
+							  sizeof(*tbl_data), 0,
+							  SOCKET_ID_ANY);
 
 	if (!tbl_data) {
 		err = ENOMEM;
@@ -1077,7 +1081,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_init(&tbl_data->tbl.refcnt);
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -1090,7 +1095,8 @@ struct mlx5_dev_ctx_shared *
 	rte_atomic32_inc(&tbl_data->tbl.refcnt);
 	table_key.direction = 0;
 	table_key.domain = 1;
-	tbl_data = rte_zmalloc(NULL, sizeof(*tbl_data), 0);
+	tbl_data = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tbl_data), 0,
+			       SOCKET_ID_ANY);
 	if (!tbl_data) {
 		err = ENOMEM;
 		goto error;
@@ -1323,9 +1329,9 @@ struct mlx5_dev_ctx_shared *
 	mlx5_mprq_free_mp(dev);
 	mlx5_os_free_shared_dr(priv);
 	if (priv->rss_conf.rss_key != NULL)
-		rte_free(priv->rss_conf.rss_key);
+		mlx5_free(priv->rss_conf.rss_key);
 	if (priv->reta_idx != NULL)
-		rte_free(priv->reta_idx);
+		mlx5_free(priv->reta_idx);
 	if (priv->config.vf)
 		mlx5_nl_mac_addr_flush(priv->nl_socket_route, mlx5_ifindex(dev),
 				       dev->data->mac_addrs,
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 6b4efcd..cefb450 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -21,6 +21,8 @@
 #include <rte_rwlock.h>
 #include <rte_cycles.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_rxtx.h"
 #include "mlx5_autoconf.h"
 
@@ -75,8 +77,8 @@
 		return -rte_errno;
 	}
 	priv->rss_conf.rss_key =
-		rte_realloc(priv->rss_conf.rss_key,
-			    MLX5_RSS_HASH_KEY_LEN, 0);
+		mlx5_realloc(priv->rss_conf.rss_key, MLX5_MEM_RTE,
+			    MLX5_RSS_HASH_KEY_LEN, 0, SOCKET_ID_ANY);
 	if (!priv->rss_conf.rss_key) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS hash key memory (%u)",
 			dev->data->port_id, rxqs_n);
@@ -142,7 +144,8 @@
 
 	if (priv->skip_default_rss_reta)
 		return ret;
-	rss_queue_arr = rte_malloc("", rxqs_n * sizeof(unsigned int), 0);
+	rss_queue_arr = mlx5_malloc(0, rxqs_n * sizeof(unsigned int), 0,
+				    SOCKET_ID_ANY);
 	if (!rss_queue_arr) {
 		DRV_LOG(ERR, "port %u cannot allocate RSS queue list (%u)",
 			dev->data->port_id, rxqs_n);
@@ -163,7 +166,7 @@
 		DRV_LOG(ERR, "port %u cannot handle this many Rx queues (%u)",
 			dev->data->port_id, rss_queue_n);
 		rte_errno = EINVAL;
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return -rte_errno;
 	}
 	DRV_LOG(INFO, "port %u Rx queues number update: %u -> %u",
@@ -179,7 +182,7 @@
 				rss_queue_n));
 	ret = mlx5_rss_reta_index_resize(dev, reta_idx_n);
 	if (ret) {
-		rte_free(rss_queue_arr);
+		mlx5_free(rss_queue_arr);
 		return ret;
 	}
 	/*
@@ -192,7 +195,7 @@
 		if (++j == rss_queue_n)
 			j = 0;
 	}
-	rte_free(rss_queue_arr);
+	mlx5_free(rss_queue_arr);
 	return ret;
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 12d80b5..d171ab0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -32,6 +32,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -4115,7 +4116,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct rte_flow_action_set_tag) +
 			   sizeof(struct rte_flow_action_jump);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4151,7 +4153,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 		 */
 		act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
 			   sizeof(struct mlx5_flow_action_copy_mreg);
-		ext_actions = rte_zmalloc(__func__, act_size, 0);
+		ext_actions = mlx5_malloc(MLX5_MEM_ZERO, act_size, 0,
+					  SOCKET_ID_ANY);
 		if (!ext_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4245,7 +4248,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 	 * by flow_drv_destroy.
 	 */
 	flow_qrss_free_id(dev, qrss_id);
-	rte_free(ext_actions);
+	mlx5_free(ext_actions);
 	return ret;
 }
 
@@ -4310,7 +4313,8 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 #define METER_SUFFIX_ITEM 4
 		item_size = sizeof(struct rte_flow_item) * METER_SUFFIX_ITEM +
 			    sizeof(struct mlx5_rte_flow_item_tag) * 2;
-		sfx_actions = rte_zmalloc(__func__, (act_size + item_size), 0);
+		sfx_actions = mlx5_malloc(MLX5_MEM_ZERO, (act_size + item_size),
+					  0, SOCKET_ID_ANY);
 		if (!sfx_actions)
 			return rte_flow_error_set(error, ENOMEM,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -4349,7 +4353,7 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 					 external, flow_idx, error);
 exit:
 	if (sfx_actions)
-		rte_free(sfx_actions);
+		mlx5_free(sfx_actions);
 	return ret;
 }
 
@@ -4763,8 +4767,8 @@ struct rte_flow *
 		}
 		if (priv_fdir_flow) {
 			LIST_REMOVE(priv_fdir_flow, next);
-			rte_free(priv_fdir_flow->fdir);
-			rte_free(priv_fdir_flow);
+			mlx5_free(priv_fdir_flow->fdir);
+			mlx5_free(priv_fdir_flow);
 		}
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx);
@@ -4904,11 +4908,12 @@ struct rte_flow *
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	if (!priv->inter_flows) {
-		priv->inter_flows = rte_calloc(__func__, 1,
+		priv->inter_flows = mlx5_malloc(MLX5_MEM_ZERO,
 				    MLX5_NUM_MAX_DEV_FLOWS *
 				    sizeof(struct mlx5_flow) +
 				    (sizeof(struct mlx5_flow_rss_desc) +
-				    sizeof(uint16_t) * UINT16_MAX) * 2, 0);
+				    sizeof(uint16_t) * UINT16_MAX) * 2, 0,
+				    SOCKET_ID_ANY);
 		if (!priv->inter_flows) {
 			DRV_LOG(ERR, "can't allocate intermediate memory.");
 			return;
@@ -4932,7 +4937,7 @@ struct rte_flow *
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
-	rte_free(priv->inter_flows);
+	mlx5_free(priv->inter_flows);
 	priv->inter_flows = NULL;
 }
 
@@ -5572,7 +5577,8 @@ struct rte_flow *
 	uint32_t flow_idx;
 	int ret;
 
-	fdir_flow = rte_zmalloc(__func__, sizeof(*fdir_flow), 0);
+	fdir_flow = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*fdir_flow), 0,
+				SOCKET_ID_ANY);
 	if (!fdir_flow) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -5585,8 +5591,9 @@ struct rte_flow *
 		rte_errno = EEXIST;
 		goto error;
 	}
-	priv_fdir_flow = rte_zmalloc(__func__, sizeof(struct mlx5_fdir_flow),
-				     0);
+	priv_fdir_flow = mlx5_malloc(MLX5_MEM_ZERO,
+				     sizeof(struct mlx5_fdir_flow),
+				     0, SOCKET_ID_ANY);
 	if (!priv_fdir_flow) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -5605,8 +5612,8 @@ struct rte_flow *
 		dev->data->port_id, (void *)flow);
 	return 0;
 error:
-	rte_free(priv_fdir_flow);
-	rte_free(fdir_flow);
+	mlx5_free(priv_fdir_flow);
+	mlx5_free(fdir_flow);
 	return -rte_errno;
 }
 
@@ -5646,8 +5653,8 @@ struct rte_flow *
 	LIST_REMOVE(priv_fdir_flow, next);
 	flow_idx = priv_fdir_flow->rix_flow;
 	flow_list_destroy(dev, &priv->flows, flow_idx);
-	rte_free(priv_fdir_flow->fdir);
-	rte_free(priv_fdir_flow);
+	mlx5_free(priv_fdir_flow->fdir);
+	mlx5_free(priv_fdir_flow);
 	DRV_LOG(DEBUG, "port %u deleted FDIR flow %u",
 		dev->data->port_id, flow_idx);
 	return 0;
@@ -5692,8 +5699,8 @@ struct rte_flow *
 		priv_fdir_flow = LIST_FIRST(&priv->fdir_flows);
 		LIST_REMOVE(priv_fdir_flow, next);
 		flow_list_destroy(dev, &priv->flows, priv_fdir_flow->rix_flow);
-		rte_free(priv_fdir_flow->fdir);
-		rte_free(priv_fdir_flow);
+		mlx5_free(priv_fdir_flow->fdir);
+		mlx5_free(priv_fdir_flow);
 	}
 }
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ceb585d..0b36ec3 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -32,6 +32,7 @@
 
 #include <mlx5_devx_cmds.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -2615,7 +2616,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, domain, cache_resource,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2772,7 +2773,7 @@ struct field_modify_info modify_tcp[] = {
 				(priv->sh->fdb_domain, resource->port_id,
 				 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -2851,7 +2852,7 @@ struct field_modify_info modify_tcp[] = {
 					(domain, resource->vlan_tag,
 					 &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4024,8 +4025,9 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new modify-header resource. */
-	cache_resource = rte_calloc(__func__, 1,
-				    sizeof(*cache_resource) + actions_len, 0);
+	cache_resource = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(*cache_resource) + actions_len, 0,
+				    SOCKET_ID_ANY);
 	if (!cache_resource)
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -4036,7 +4038,7 @@ struct field_modify_info modify_tcp[] = {
 					(sh->ctx, ns, cache_resource,
 					 actions_len, &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -4175,7 +4177,8 @@ struct field_modify_info modify_tcp[] = {
 			MLX5_COUNTERS_PER_POOL +
 			sizeof(struct mlx5_counter_stats_raw)) * raws_n +
 			sizeof(struct mlx5_counter_stats_mem_mng);
-	uint8_t *mem = rte_calloc(__func__, 1, size, sysconf(_SC_PAGESIZE));
+	uint8_t *mem = mlx5_malloc(MLX5_MEM_ZERO, size, sysconf(_SC_PAGESIZE),
+				  SOCKET_ID_ANY);
 	int i;
 
 	if (!mem) {
@@ -4188,7 +4191,7 @@ struct field_modify_info modify_tcp[] = {
 						 IBV_ACCESS_LOCAL_WRITE);
 	if (!mem_mng->umem) {
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mkey_attr.addr = (uintptr_t)mem;
@@ -4207,7 +4210,7 @@ struct field_modify_info modify_tcp[] = {
 	if (!mem_mng->dm) {
 		mlx5_glue->devx_umem_dereg(mem_mng->umem);
 		rte_errno = errno;
-		rte_free(mem);
+		mlx5_free(mem);
 		return NULL;
 	}
 	mem_mng->raws = (struct mlx5_counter_stats_raw *)(mem + size);
@@ -4244,7 +4247,7 @@ struct field_modify_info modify_tcp[] = {
 	void *old_pools = cont->pools;
 	uint32_t resize = cont->n + MLX5_CNT_CONTAINER_RESIZE;
 	uint32_t mem_size = sizeof(struct mlx5_flow_counter_pool *) * resize;
-	void *pools = rte_calloc(__func__, 1, mem_size, 0);
+	void *pools = mlx5_malloc(MLX5_MEM_ZERO, mem_size, 0, SOCKET_ID_ANY);
 
 	if (!pools) {
 		rte_errno = ENOMEM;
@@ -4263,7 +4266,7 @@ struct field_modify_info modify_tcp[] = {
 		mem_mng = flow_dv_create_counter_stat_mem_mng(dev,
 			  MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES);
 		if (!mem_mng) {
-			rte_free(pools);
+			mlx5_free(pools);
 			return -ENOMEM;
 		}
 		for (i = 0; i < MLX5_MAX_PENDING_QUERIES; ++i)
@@ -4278,7 +4281,7 @@ struct field_modify_info modify_tcp[] = {
 	cont->pools = pools;
 	rte_spinlock_unlock(&cont->resize_sl);
 	if (old_pools)
-		rte_free(old_pools);
+		mlx5_free(old_pools);
 	return 0;
 }
 
@@ -4367,7 +4370,7 @@ struct field_modify_info modify_tcp[] = {
 	size += MLX5_COUNTERS_PER_POOL * CNT_SIZE;
 	size += (batch ? 0 : MLX5_COUNTERS_PER_POOL * CNTEXT_SIZE);
 	size += (!age ? 0 : MLX5_COUNTERS_PER_POOL * AGE_SIZE);
-	pool = rte_calloc(__func__, 1, size, 0);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -7577,7 +7580,8 @@ struct field_modify_info modify_tcp[] = {
 		}
 	}
 	/* Register new matcher. */
-	cache_matcher = rte_calloc(__func__, 1, sizeof(*cache_matcher), 0);
+	cache_matcher = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*cache_matcher), 0,
+				    SOCKET_ID_ANY);
 	if (!cache_matcher) {
 		flow_dv_tbl_resource_release(dev, tbl);
 		return rte_flow_error_set(error, ENOMEM,
@@ -7593,7 +7597,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &cache_matcher->matcher_object);
 	if (ret) {
-		rte_free(cache_matcher);
+		mlx5_free(cache_matcher);
 #ifdef HAVE_MLX5DV_DR
 		flow_dv_tbl_resource_release(dev, tbl);
 #endif
@@ -7668,7 +7672,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_os_create_flow_action_tag(tag_be24,
 						  &cache_resource->action);
 	if (ret) {
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot create action");
@@ -7677,7 +7681,7 @@ struct field_modify_info modify_tcp[] = {
 	rte_atomic32_inc(&cache_resource->refcnt);
 	if (mlx5_hlist_insert(sh->tag_table, &cache_resource->entry)) {
 		mlx5_flow_os_destroy_flow_action(cache_resource->action);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		return rte_flow_error_set(error, EEXIST,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 					  NULL, "cannot insert tag");
@@ -8895,7 +8899,7 @@ struct field_modify_info modify_tcp[] = {
 		LIST_REMOVE(matcher, next);
 		/* table ref-- in release interface. */
 		flow_dv_tbl_resource_release(dev, matcher->tbl);
-		rte_free(matcher);
+		mlx5_free(matcher);
 		DRV_LOG(DEBUG, "port %u matcher %p: removed",
 			dev->data->port_id, (void *)matcher);
 		return 0;
@@ -9037,7 +9041,7 @@ struct field_modify_info modify_tcp[] = {
 		claim_zero(mlx5_flow_os_destroy_flow_action
 						(cache_resource->action));
 		LIST_REMOVE(cache_resource, next);
-		rte_free(cache_resource);
+		mlx5_free(cache_resource);
 		DRV_LOG(DEBUG, "modify-header resource %p: removed",
 			(void *)cache_resource);
 		return 0;
@@ -9410,7 +9414,7 @@ struct field_modify_info modify_tcp[] = {
 		flow_dv_tbl_resource_release(dev, mtd->transfer.sfx_tbl);
 	if (mtd->drop_actn)
 		claim_zero(mlx5_flow_os_destroy_flow_action(mtd->drop_actn));
-	rte_free(mtd);
+	mlx5_free(mtd);
 	return 0;
 }
 
@@ -9543,7 +9547,7 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
-	mtb = rte_calloc(__func__, 1, sizeof(*mtb), 0);
+	mtb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mtb), 0, SOCKET_ID_ANY);
 	if (!mtb) {
 		DRV_LOG(ERR, "Failed to allocate memory for meter.");
 		return NULL;
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 86c334b..bf34687 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -10,6 +10,7 @@
 #include <rte_mtr_driver.h>
 
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_flow.h"
@@ -356,8 +357,8 @@
 	if (ret)
 		return ret;
 	/* Meter profile memory allocation. */
-	fmp = rte_calloc(__func__, 1, sizeof(struct mlx5_flow_meter_profile),
-			 RTE_CACHE_LINE_SIZE);
+	fmp = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_flow_meter_profile),
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (fmp == NULL)
 		return -rte_mtr_error_set(error, ENOMEM,
 					  RTE_MTR_ERROR_TYPE_UNSPECIFIED,
@@ -374,7 +375,7 @@
 	TAILQ_INSERT_TAIL(fmps, fmp, next);
 	return 0;
 error:
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return ret;
 }
 
@@ -417,7 +418,7 @@
 					  NULL, "Meter profile is in use.");
 	/* Remove from list. */
 	TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-	rte_free(fmp);
+	mlx5_free(fmp);
 	return 0;
 }
 
@@ -1286,7 +1287,7 @@ struct mlx5_flow_meter *
 		MLX5_ASSERT(!fmp->ref_cnt);
 		/* Remove from list. */
 		TAILQ_REMOVE(&priv->flow_meter_profiles, fmp, next);
-		rte_free(fmp);
+		mlx5_free(fmp);
 	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 781c97f..72106b4 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -28,6 +28,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_prm.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -188,14 +189,15 @@
 			/* Resize the container pool array. */
 			size = sizeof(struct mlx5_flow_counter_pool *) *
 				     (n_valid + MLX5_CNT_CONTAINER_RESIZE);
-			pools = rte_zmalloc(__func__, size, 0);
+			pools = mlx5_malloc(MLX5_MEM_ZERO, size, 0,
+					    SOCKET_ID_ANY);
 			if (!pools)
 				return 0;
 			if (n_valid) {
 				memcpy(pools, cont->pools,
 				       sizeof(struct mlx5_flow_counter_pool *) *
 				       n_valid);
-				rte_free(cont->pools);
+				mlx5_free(cont->pools);
 			}
 			cont->pools = pools;
 			cont->n += MLX5_CNT_CONTAINER_RESIZE;
@@ -203,7 +205,7 @@
 		/* Allocate memory for new pool*/
 		size = sizeof(*pool) + (sizeof(*cnt_ext) + sizeof(*cnt)) *
 		       MLX5_COUNTERS_PER_POOL;
-		pool = rte_calloc(__func__, 1, size, 0);
+		pool = mlx5_malloc(MLX5_MEM_ZERO, size, 0, SOCKET_ID_ANY);
 		if (!pool)
 			return 0;
 		pool->type |= CNT_POOL_TYPE_EXT;
diff --git a/drivers/net/mlx5/mlx5_mp.c b/drivers/net/mlx5/mlx5_mp.c
index a2b5c40..cf6e33b 100644
--- a/drivers/net/mlx5/mlx5_mp.c
+++ b/drivers/net/mlx5/mlx5_mp.c
@@ -12,6 +12,7 @@
 
 #include <mlx5_common_mp.h>
 #include <mlx5_common_mr.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -181,7 +182,7 @@
 		}
 	}
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index 653b069..a49edbc 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -21,6 +21,8 @@
 #include <rte_malloc.h>
 #include <rte_ethdev_driver.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_defs.h"
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -57,8 +59,10 @@
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
-		priv->rss_conf.rss_key = rte_realloc(priv->rss_conf.rss_key,
-						     rss_conf->rss_key_len, 0);
+		priv->rss_conf.rss_key = mlx5_realloc(priv->rss_conf.rss_key,
+						      MLX5_MEM_RTE,
+						      rss_conf->rss_key_len,
+						      0, SOCKET_ID_ANY);
 		if (!priv->rss_conf.rss_key) {
 			rte_errno = ENOMEM;
 			return -rte_errno;
@@ -131,8 +135,9 @@
 	if (priv->reta_idx_n == reta_size)
 		return 0;
 
-	mem = rte_realloc(priv->reta_idx,
-			  reta_size * sizeof((*priv->reta_idx)[0]), 0);
+	mem = mlx5_realloc(priv->reta_idx, MLX5_MEM_RTE,
+			   reta_size * sizeof((*priv->reta_idx)[0]), 0,
+			   SOCKET_ID_ANY);
 	if (!mem) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7dd06e8..e8214d4 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -31,6 +31,7 @@
 
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5.h"
@@ -734,7 +735,9 @@
 	if (!dev->data->dev_conf.intr_conf.rxq)
 		return 0;
 	mlx5_rx_intr_vec_disable(dev);
-	intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
+	intr_handle->intr_vec = mlx5_malloc(0,
+				n * sizeof(intr_handle->intr_vec[0]),
+				0, SOCKET_ID_ANY);
 	if (intr_handle->intr_vec == NULL) {
 		DRV_LOG(ERR,
 			"port %u failed to allocate memory for interrupt"
@@ -831,7 +834,7 @@
 free:
 	rte_intr_free_epoll_fd(intr_handle);
 	if (intr_handle->intr_vec)
-		free(intr_handle->intr_vec);
+		mlx5_free(intr_handle->intr_vec);
 	intr_handle->nb_efd = 0;
 	intr_handle->intr_vec = NULL;
 }
@@ -2187,8 +2190,8 @@ enum mlx5_rxq_type
 	struct mlx5_ind_table_obj *ind_tbl;
 	unsigned int i = 0, j = 0, k = 0;
 
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl) +
-			     queues_n * sizeof(uint16_t), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl) +
+			      queues_n * sizeof(uint16_t), 0, SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2231,8 +2234,9 @@ enum mlx5_rxq_type
 			      log2above(queues_n) :
 			      log2above(priv->config.ind_table_max_size));
 
-		rqt_attr = rte_calloc(__func__, 1, sizeof(*rqt_attr) +
-				      rqt_n * sizeof(uint32_t), 0);
+		rqt_attr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt_attr) +
+				      rqt_n * sizeof(uint32_t), 0,
+				      SOCKET_ID_ANY);
 		if (!rqt_attr) {
 			DRV_LOG(ERR, "port %u cannot allocate RQT resources",
 				dev->data->port_id);
@@ -2254,7 +2258,7 @@ enum mlx5_rxq_type
 			rqt_attr->rq_list[k] = rqt_attr->rq_list[j];
 		ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->ctx,
 							rqt_attr);
-		rte_free(rqt_attr);
+		mlx5_free(rqt_attr);
 		if (!ind_tbl->rqt) {
 			DRV_LOG(ERR, "port %u cannot create DevX RQT",
 				dev->data->port_id);
@@ -2269,7 +2273,7 @@ enum mlx5_rxq_type
 error:
 	for (j = 0; j < i; j++)
 		mlx5_rxq_release(dev, ind_tbl->queues[j]);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	DEBUG("port %u cannot create indirection table", dev->data->port_id);
 	return NULL;
 }
@@ -2339,7 +2343,7 @@ enum mlx5_rxq_type
 		claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i]));
 	if (!rte_atomic32_read(&ind_tbl->refcnt)) {
 		LIST_REMOVE(ind_tbl, next);
-		rte_free(ind_tbl);
+		mlx5_free(ind_tbl);
 		return 0;
 	}
 	return 1;
@@ -2761,7 +2765,7 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	rxq = rte_calloc(__func__, 1, sizeof(*rxq), 0);
+	rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, SOCKET_ID_ANY);
 	if (!rxq) {
 		DEBUG("port %u cannot allocate drop Rx queue memory",
 		      dev->data->port_id);
@@ -2799,7 +2803,7 @@ enum mlx5_rxq_type
 		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
 	if (rxq->cq)
 		claim_zero(mlx5_glue->destroy_cq(rxq->cq));
-	rte_free(rxq);
+	mlx5_free(rxq);
 	priv->drop_queue.rxq = NULL;
 }
 
@@ -2837,7 +2841,8 @@ enum mlx5_rxq_type
 		rte_errno = errno;
 		goto error;
 	}
-	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl), 0);
+	ind_tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*ind_tbl), 0,
+			      SOCKET_ID_ANY);
 	if (!ind_tbl) {
 		rte_errno = ENOMEM;
 		goto error;
@@ -2863,7 +2868,7 @@ enum mlx5_rxq_type
 
 	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
 	mlx5_rxq_obj_drop_release(dev);
-	rte_free(ind_tbl);
+	mlx5_free(ind_tbl);
 	priv->drop_queue.hrxq->ind_table = NULL;
 }
 
@@ -2888,7 +2893,7 @@ struct mlx5_hrxq *
 		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
 		return priv->drop_queue.hrxq;
 	}
-	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq), 0);
+	hrxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hrxq), 0, SOCKET_ID_ANY);
 	if (!hrxq) {
 		DRV_LOG(WARNING,
 			"port %u cannot allocate memory for drop queue",
@@ -2945,7 +2950,7 @@ struct mlx5_hrxq *
 		mlx5_ind_table_obj_drop_release(dev);
 	if (hrxq) {
 		priv->drop_queue.hrxq = NULL;
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 	}
 	return NULL;
 }
@@ -2968,7 +2973,7 @@ struct mlx5_hrxq *
 #endif
 		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
 		mlx5_ind_table_obj_drop_release(dev);
-		rte_free(hrxq);
+		mlx5_free(hrxq);
 		priv->drop_queue.hrxq = NULL;
 	}
 }
diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index bf67192..25e8b27 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -5,6 +5,8 @@
 #include <rte_malloc.h>
 #include <rte_hash_crc.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5_utils.h"
 
 struct mlx5_hlist *
@@ -27,7 +29,8 @@ struct mlx5_hlist *
 	alloc_size = sizeof(struct mlx5_hlist) +
 		     sizeof(struct mlx5_hlist_head) * act_size;
 	/* Using zmalloc, then no need to initialize the heads. */
-	h = rte_zmalloc(name, alloc_size, RTE_CACHE_LINE_SIZE);
+	h = mlx5_malloc(MLX5_MEM_ZERO, alloc_size, RTE_CACHE_LINE_SIZE,
+			SOCKET_ID_ANY);
 	if (!h) {
 		DRV_LOG(ERR, "No memory for hash list %s creation",
 			name ? name : "None");
@@ -112,10 +115,10 @@ struct mlx5_hlist_entry *
 			if (cb)
 				cb(entry, ctx);
 			else
-				rte_free(entry);
+				mlx5_free(entry);
 		}
 	}
-	rte_free(h);
+	mlx5_free(h);
 }
 
 static inline void
@@ -193,16 +196,17 @@ struct mlx5_indexed_pool *
 	    (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) ||
 	    ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32))))
 		return NULL;
-	pool = rte_zmalloc("mlx5_ipool", sizeof(*pool) + cfg->grow_trunk *
-				sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE);
+	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool) + cfg->grow_trunk *
+			   sizeof(pool->grow_tbl[0]), RTE_CACHE_LINE_SIZE,
+			   SOCKET_ID_ANY);
 	if (!pool)
 		return NULL;
 	pool->cfg = *cfg;
 	if (!pool->cfg.trunk_size)
 		pool->cfg.trunk_size = MLX5_IPOOL_DEFAULT_TRUNK_SIZE;
 	if (!cfg->malloc && !cfg->free) {
-		pool->cfg.malloc = rte_malloc_socket;
-		pool->cfg.free = rte_free;
+		pool->cfg.malloc = mlx5_malloc;
+		pool->cfg.free = mlx5_free;
 	}
 	pool->free_list = TRUNK_INVALID;
 	if (pool->cfg.need_lock)
@@ -237,10 +241,9 @@ struct mlx5_indexed_pool *
 		int n_grow = pool->n_trunk_valid ? pool->n_trunk :
 			     RTE_CACHE_LINE_SIZE / sizeof(void *);
 
-		p = pool->cfg.malloc(pool->cfg.type,
-				 (pool->n_trunk_valid + n_grow) *
-				 sizeof(struct mlx5_indexed_trunk *),
-				 RTE_CACHE_LINE_SIZE, rte_socket_id());
+		p = pool->cfg.malloc(0, (pool->n_trunk_valid + n_grow) *
+				     sizeof(struct mlx5_indexed_trunk *),
+				     RTE_CACHE_LINE_SIZE, rte_socket_id());
 		if (!p)
 			return -ENOMEM;
 		if (pool->trunks)
@@ -268,7 +271,7 @@ struct mlx5_indexed_pool *
 	/* rte_bitmap requires memory cacheline aligned. */
 	trunk_size += RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size);
 	trunk_size += bmp_size;
-	trunk = pool->cfg.malloc(pool->cfg.type, trunk_size,
+	trunk = pool->cfg.malloc(0, trunk_size,
 				 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!trunk)
 		return -ENOMEM;
@@ -464,7 +467,7 @@ struct mlx5_indexed_pool *
 	if (!pool->trunks)
 		pool->cfg.free(pool->trunks);
 	mlx5_ipool_unlock(pool);
-	rte_free(pool);
+	mlx5_free(pool);
 	return 0;
 }
 
@@ -493,15 +496,16 @@ struct mlx5_l3t_tbl *
 		.grow_shift = 1,
 		.need_lock = 0,
 		.release_mem_en = 1,
-		.malloc = rte_malloc_socket,
-		.free = rte_free,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
 	};
 
 	if (type >= MLX5_L3T_TYPE_MAX) {
 		rte_errno = EINVAL;
 		return NULL;
 	}
-	tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_tbl), 1);
+	tbl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5_l3t_tbl), 1,
+			  SOCKET_ID_ANY);
 	if (!tbl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -532,7 +536,7 @@ struct mlx5_l3t_tbl *
 	tbl->eip = mlx5_ipool_create(&l3t_ip_cfg);
 	if (!tbl->eip) {
 		rte_errno = ENOMEM;
-		rte_free(tbl);
+		mlx5_free(tbl);
 		tbl = NULL;
 	}
 	return tbl;
@@ -565,17 +569,17 @@ struct mlx5_l3t_tbl *
 					break;
 			}
 			MLX5_ASSERT(!m_tbl->ref_cnt);
-			rte_free(g_tbl->tbl[i]);
+			mlx5_free(g_tbl->tbl[i]);
 			g_tbl->tbl[i] = 0;
 			if (!(--g_tbl->ref_cnt))
 				break;
 		}
 		MLX5_ASSERT(!g_tbl->ref_cnt);
-		rte_free(tbl->tbl);
+		mlx5_free(tbl->tbl);
 		tbl->tbl = 0;
 	}
 	mlx5_ipool_destroy(tbl->eip);
-	rte_free(tbl);
+	mlx5_free(tbl);
 }
 
 uint32_t
@@ -667,11 +671,11 @@ struct mlx5_l3t_tbl *
 		m_tbl->tbl[(idx >> MLX5_L3T_MT_OFFSET) & MLX5_L3T_MT_MASK] =
 									NULL;
 		if (!(--m_tbl->ref_cnt)) {
-			rte_free(m_tbl);
+			mlx5_free(m_tbl);
 			g_tbl->tbl
 			[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK] = NULL;
 			if (!(--g_tbl->ref_cnt)) {
-				rte_free(g_tbl);
+				mlx5_free(g_tbl);
 				tbl->tbl = 0;
 			}
 		}
@@ -693,8 +697,10 @@ struct mlx5_l3t_tbl *
 	/* Check the global table, create it if empty. */
 	g_tbl = tbl->tbl;
 	if (!g_tbl) {
-		g_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1);
+		g_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_GT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!g_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
@@ -707,8 +713,10 @@ struct mlx5_l3t_tbl *
 	 */
 	m_tbl = g_tbl->tbl[(idx >> MLX5_L3T_GT_OFFSET) & MLX5_L3T_GT_MASK];
 	if (!m_tbl) {
-		m_tbl = rte_zmalloc(NULL, sizeof(struct mlx5_l3t_level_tbl) +
-				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1);
+		m_tbl = mlx5_malloc(MLX5_MEM_ZERO,
+				    sizeof(struct mlx5_l3t_level_tbl) +
+				    sizeof(void *) * MLX5_L3T_MT_SIZE, 1,
+				    SOCKET_ID_ANY);
 		if (!m_tbl) {
 			rte_errno = ENOMEM;
 			return -1;
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index c4b9063..562b9b1 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -193,7 +193,7 @@ struct mlx5_indexed_pool_config {
 	/* Lock is needed for multiple thread usage. */
 	uint32_t release_mem_en:1; /* Rlease trunk when it is free. */
 	const char *type; /* Memory allocate type name. */
-	void *(*malloc)(const char *type, size_t size, unsigned int align,
+	void *(*malloc)(uint32_t flags, size_t size, unsigned int align,
 			int socket);
 	/* User defined memory allocator. */
 	void (*free)(void *addr); /* User defined memory release. */
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index f65e416..4308b71 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -33,6 +33,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_nl.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
@@ -288,7 +289,8 @@ struct mlx5_nl_vlan_vmwa_context *
 		 */
 		return NULL;
 	}
-	vmwa = rte_zmalloc(__func__, sizeof(*vmwa), sizeof(uint32_t));
+	vmwa = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*vmwa), sizeof(uint32_t),
+			   SOCKET_ID_ANY);
 	if (!vmwa) {
 		DRV_LOG(WARNING,
 			"Can not allocate memory"
@@ -300,7 +302,7 @@ struct mlx5_nl_vlan_vmwa_context *
 		DRV_LOG(WARNING,
 			"Can not create Netlink socket"
 			" for VLAN workaround context");
-		rte_free(vmwa);
+		mlx5_free(vmwa);
 		return NULL;
 	}
 	vmwa->vf_ifindex = ifindex;
@@ -323,5 +325,5 @@ void mlx5_vlan_vmwa_exit(struct mlx5_nl_vlan_vmwa_context *vmwa)
 	}
 	if (vmwa->nl_socket >= 0)
 		close(vmwa->nl_socket);
-	rte_free(vmwa);
+	mlx5_free(vmwa);
 }
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v3 4/7] common/mlx5: convert control path memory to unified malloc
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (2 preceding siblings ...)
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 5/7] common/mlx5: convert data path objects " Suanming Mou
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the control path objects' memory via the unified
malloc function.

These objects are all used during instance initialization, so the change
does not affect the data path.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/linux/mlx5_glue.c | 13 +++---
 drivers/common/mlx5/linux/mlx5_nl.c   |  5 ++-
 drivers/common/mlx5/mlx5_common_mp.c  |  7 +--
 drivers/common/mlx5/mlx5_devx_cmds.c  | 82 +++++++++++++++++++----------------
 4 files changed, 59 insertions(+), 48 deletions(-)

diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c
index 4d3875f..ea9c86b 100644
--- a/drivers/common/mlx5/linux/mlx5_glue.c
+++ b/drivers/common/mlx5/linux/mlx5_glue.c
@@ -184,7 +184,7 @@
 		res = ibv_destroy_flow_action(attr->action);
 		break;
 	}
-	free(action);
+	mlx5_free(action);
 	return res;
 #endif
 #else
@@ -617,7 +617,7 @@
 	struct mlx5dv_flow_action_attr *action;
 
 	(void)offset;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX;
@@ -641,7 +641,7 @@
 #else
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_DEST_IBV_QP;
@@ -686,7 +686,7 @@
 
 	(void)domain;
 	(void)flags;
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -726,7 +726,7 @@
 	(void)flags;
 	struct mlx5dv_flow_action_attr *action;
 
-	action = malloc(sizeof(*action));
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
@@ -755,7 +755,8 @@
 	return mlx5dv_dr_action_create_tag(tag);
 #else /* HAVE_MLX5DV_DR */
 	struct mlx5dv_flow_action_attr *action;
-	action = malloc(sizeof(*action));
+
+	action = mlx5_malloc(0, sizeof(*action), 0, SOCKET_ID_ANY);
 	if (!action)
 		return NULL;
 	action->type = MLX5DV_FLOW_ACTION_TAG;
diff --git a/drivers/common/mlx5/linux/mlx5_nl.c b/drivers/common/mlx5/linux/mlx5_nl.c
index dc504d8..8ab7f6b 100644
--- a/drivers/common/mlx5/linux/mlx5_nl.c
+++ b/drivers/common/mlx5/linux/mlx5_nl.c
@@ -22,6 +22,7 @@
 
 #include "mlx5_nl.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 #ifdef HAVE_DEVLINK
 #include <linux/devlink.h>
 #endif
@@ -330,7 +331,7 @@ struct mlx5_nl_ifindex_data {
 	     void *arg)
 {
 	struct sockaddr_nl sa;
-	void *buf = malloc(MLX5_RECV_BUF_SIZE);
+	void *buf = mlx5_malloc(0, MLX5_RECV_BUF_SIZE, 0, SOCKET_ID_ANY);
 	struct iovec iov = {
 		.iov_base = buf,
 		.iov_len = MLX5_RECV_BUF_SIZE,
@@ -393,7 +394,7 @@ struct mlx5_nl_ifindex_data {
 		}
 	} while (multipart);
 exit:
-	free(buf);
+	mlx5_free(buf);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_common_mp.c b/drivers/common/mlx5/mlx5_common_mp.c
index da55143..40e3956 100644
--- a/drivers/common/mlx5/mlx5_common_mp.c
+++ b/drivers/common/mlx5/mlx5_common_mp.c
@@ -11,6 +11,7 @@
 
 #include "mlx5_common_mp.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 /**
  * Request Memory Region creation to the primary process.
@@ -49,7 +50,7 @@
 	ret = res->result;
 	if (ret)
 		rte_errno = -ret;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -89,7 +90,7 @@
 	mp_res = &mp_rep.msgs[0];
 	res = (struct mlx5_mp_param *)mp_res->param;
 	ret = res->result;
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
@@ -136,7 +137,7 @@
 	DRV_LOG(DEBUG, "port %u command FD from primary is %d",
 		mp_id->port_id, ret);
 exit:
-	free(mp_rep.msgs);
+	mlx5_free(mp_rep.msgs);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 0cfa4dc..a5f742d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -9,6 +9,7 @@
 #include "mlx5_prm.h"
 #include "mlx5_devx_cmds.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 
 /**
@@ -88,7 +89,8 @@
 struct mlx5_devx_obj *
 mlx5_devx_cmd_flow_counter_alloc(void *ctx, uint32_t bulk_n_128)
 {
-	struct mlx5_devx_obj *dcs = rte_zmalloc("dcs", sizeof(*dcs), 0);
+	struct mlx5_devx_obj *dcs = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*dcs),
+						0, SOCKET_ID_ANY);
 	uint32_t in[MLX5_ST_SZ_DW(alloc_flow_counter_in)]   = {0};
 	uint32_t out[MLX5_ST_SZ_DW(alloc_flow_counter_out)] = {0};
 
@@ -104,7 +106,7 @@ struct mlx5_devx_obj *
 	if (!dcs->obj) {
 		DRV_LOG(ERR, "Can't allocate counters - error %d", errno);
 		rte_errno = errno;
-		rte_free(dcs);
+		mlx5_free(dcs);
 		return NULL;
 	}
 	dcs->id = MLX5_GET(alloc_flow_counter_out, out, flow_counter_id);
@@ -209,7 +211,8 @@ struct mlx5_devx_obj *
 	uint32_t in[in_size_dw];
 	uint32_t out[MLX5_ST_SZ_DW(create_mkey_out)] = {0};
 	void *mkc;
-	struct mlx5_devx_obj *mkey = rte_zmalloc("mkey", sizeof(*mkey), 0);
+	struct mlx5_devx_obj *mkey = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mkey),
+						 0, SOCKET_ID_ANY);
 	size_t pgsize;
 	uint32_t translation_size;
 
@@ -268,7 +271,7 @@ struct mlx5_devx_obj *
 		DRV_LOG(ERR, "Can't create %sdirect mkey - error %d\n",
 			klm_num ? "an in" : "a ", errno);
 		rte_errno = errno;
-		rte_free(mkey);
+		mlx5_free(mkey);
 		return NULL;
 	}
 	mkey->id = MLX5_GET(create_mkey_out, out, mkey_index);
@@ -320,7 +323,7 @@ struct mlx5_devx_obj *
 	if (!obj)
 		return 0;
 	ret =  mlx5_glue->devx_obj_destroy(obj->obj);
-	rte_free(obj);
+	mlx5_free(obj);
 	return ret;
 }
 
@@ -522,11 +525,12 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *parse_flex_obj = NULL;
 	uint32_t i;
 
-	parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+	parse_flex_obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*parse_flex_obj), 0,
+				     SOCKET_ID_ANY);
 	if (!parse_flex_obj) {
 		DRV_LOG(ERR, "Failed to allocate flex parser data");
 		rte_errno = ENOMEM;
-		rte_free(in);
+		mlx5_free(in);
 		return NULL;
 	}
 	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
@@ -610,7 +614,7 @@ struct mlx5_devx_obj *
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
 			"by using DevX.");
-		rte_free(parse_flex_obj);
+		mlx5_free(parse_flex_obj);
 		return NULL;
 	}
 	parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
@@ -907,7 +911,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *rq = NULL;
 
-	rq = rte_calloc_socket(__func__, 1, sizeof(*rq), 0, socket);
+	rq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rq), 0, socket);
 	if (!rq) {
 		DRV_LOG(ERR, "Failed to allocate RQ data");
 		rte_errno = ENOMEM;
@@ -935,7 +939,7 @@ struct mlx5_devx_obj *
 	if (!rq->obj) {
 		DRV_LOG(ERR, "Failed to create RQ using DevX");
 		rte_errno = errno;
-		rte_free(rq);
+		mlx5_free(rq);
 		return NULL;
 	}
 	rq->id = MLX5_GET(create_rq_out, out, rqn);
@@ -1012,7 +1016,7 @@ struct mlx5_devx_obj *
 	void *tir_ctx, *outer, *inner, *rss_key;
 	struct mlx5_devx_obj *tir = NULL;
 
-	tir = rte_calloc(__func__, 1, sizeof(*tir), 0);
+	tir = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tir), 0, SOCKET_ID_ANY);
 	if (!tir) {
 		DRV_LOG(ERR, "Failed to allocate TIR data");
 		rte_errno = ENOMEM;
@@ -1054,7 +1058,7 @@ struct mlx5_devx_obj *
 	if (!tir->obj) {
 		DRV_LOG(ERR, "Failed to create TIR using DevX");
 		rte_errno = errno;
-		rte_free(tir);
+		mlx5_free(tir);
 		return NULL;
 	}
 	tir->id = MLX5_GET(create_tir_out, out, tirn);
@@ -1084,17 +1088,17 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *rqt = NULL;
 	int i;
 
-	in = rte_calloc(__func__, 1, inlen, 0);
+	in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	if (!in) {
 		DRV_LOG(ERR, "Failed to allocate RQT IN data");
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	rqt = rte_calloc(__func__, 1, sizeof(*rqt), 0);
+	rqt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rqt), 0, SOCKET_ID_ANY);
 	if (!rqt) {
 		DRV_LOG(ERR, "Failed to allocate RQT data");
 		rte_errno = ENOMEM;
-		rte_free(in);
+		mlx5_free(in);
 		return NULL;
 	}
 	MLX5_SET(create_rqt_in, in, opcode, MLX5_CMD_OP_CREATE_RQT);
@@ -1105,11 +1109,11 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	rqt->obj = mlx5_glue->devx_obj_create(ctx, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (!rqt->obj) {
 		DRV_LOG(ERR, "Failed to create RQT using DevX");
 		rte_errno = errno;
-		rte_free(rqt);
+		mlx5_free(rqt);
 		return NULL;
 	}
 	rqt->id = MLX5_GET(create_rqt_out, out, rqtn);
@@ -1134,7 +1138,7 @@ struct mlx5_devx_obj *
 	uint32_t inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) +
 			 rqt_attr->rqt_actual_size * sizeof(uint32_t);
 	uint32_t out[MLX5_ST_SZ_DW(modify_rqt_out)] = {0};
-	uint32_t *in = rte_calloc(__func__, 1, inlen, 0);
+	uint32_t *in = mlx5_malloc(MLX5_MEM_ZERO, inlen, 0, SOCKET_ID_ANY);
 	void *rqt_ctx;
 	int i;
 	int ret;
@@ -1154,7 +1158,7 @@ struct mlx5_devx_obj *
 	for (i = 0; i < rqt_attr->rqt_actual_size; i++)
 		MLX5_SET(rqtc, rqt_ctx, rq_num[i], rqt_attr->rq_list[i]);
 	ret = mlx5_glue->devx_obj_modify(rqt->obj, in, inlen, out, sizeof(out));
-	rte_free(in);
+	mlx5_free(in);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify RQT using DevX.");
 		rte_errno = errno;
@@ -1187,7 +1191,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_wq_attr *wq_attr;
 	struct mlx5_devx_obj *sq = NULL;
 
-	sq = rte_calloc(__func__, 1, sizeof(*sq), 0);
+	sq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*sq), 0, SOCKET_ID_ANY);
 	if (!sq) {
 		DRV_LOG(ERR, "Failed to allocate SQ data");
 		rte_errno = ENOMEM;
@@ -1223,7 +1227,7 @@ struct mlx5_devx_obj *
 	if (!sq->obj) {
 		DRV_LOG(ERR, "Failed to create SQ using DevX");
 		rte_errno = errno;
-		rte_free(sq);
+		mlx5_free(sq);
 		return NULL;
 	}
 	sq->id = MLX5_GET(create_sq_out, out, sqn);
@@ -1287,7 +1291,7 @@ struct mlx5_devx_obj *
 	struct mlx5_devx_obj *tis = NULL;
 	void *tis_ctx;
 
-	tis = rte_calloc(__func__, 1, sizeof(*tis), 0);
+	tis = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tis), 0, SOCKET_ID_ANY);
 	if (!tis) {
 		DRV_LOG(ERR, "Failed to allocate TIS object");
 		rte_errno = ENOMEM;
@@ -1307,7 +1311,7 @@ struct mlx5_devx_obj *
 	if (!tis->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(tis);
+		mlx5_free(tis);
 		return NULL;
 	}
 	tis->id = MLX5_GET(create_tis_out, out, tisn);
@@ -1329,7 +1333,7 @@ struct mlx5_devx_obj *
 	uint32_t out[MLX5_ST_SZ_DW(alloc_transport_domain_out)] = {0};
 	struct mlx5_devx_obj *td = NULL;
 
-	td = rte_calloc(__func__, 1, sizeof(*td), 0);
+	td = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*td), 0, SOCKET_ID_ANY);
 	if (!td) {
 		DRV_LOG(ERR, "Failed to allocate TD object");
 		rte_errno = ENOMEM;
@@ -1342,7 +1346,7 @@ struct mlx5_devx_obj *
 	if (!td->obj) {
 		DRV_LOG(ERR, "Failed to create TIS using DevX");
 		rte_errno = errno;
-		rte_free(td);
+		mlx5_free(td);
 		return NULL;
 	}
 	td->id = MLX5_GET(alloc_transport_domain_out, out,
@@ -1406,8 +1410,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_cq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_cq_out)] = {0};
-	struct mlx5_devx_obj *cq_obj = rte_zmalloc(__func__, sizeof(*cq_obj),
-						   0);
+	struct mlx5_devx_obj *cq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*cq_obj),
+						   0, SOCKET_ID_ANY);
 	void *cqctx = MLX5_ADDR_OF(create_cq_in, in, cq_context);
 
 	if (!cq_obj) {
@@ -1442,7 +1447,7 @@ struct mlx5_devx_obj *
 	if (!cq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create CQ using DevX errno=%d.", errno);
-		rte_free(cq_obj);
+		mlx5_free(cq_obj);
 		return NULL;
 	}
 	cq_obj->id = MLX5_GET(create_cq_out, out, cqn);
@@ -1466,8 +1471,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtq_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *virtq_obj = rte_zmalloc(__func__,
-						     sizeof(*virtq_obj), 0);
+	struct mlx5_devx_obj *virtq_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						     sizeof(*virtq_obj),
+						     0, SOCKET_ID_ANY);
 	void *virtq = MLX5_ADDR_OF(create_virtq_in, in, virtq);
 	void *hdr = MLX5_ADDR_OF(create_virtq_in, in, hdr);
 	void *virtctx = MLX5_ADDR_OF(virtio_net_q, virtq, virtio_q_context);
@@ -1515,7 +1521,7 @@ struct mlx5_devx_obj *
 	if (!virtq_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create VIRTQ Obj using DevX.");
-		rte_free(virtq_obj);
+		mlx5_free(virtq_obj);
 		return NULL;
 	}
 	virtq_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
@@ -1637,8 +1643,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_qp_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(create_qp_out)] = {0};
-	struct mlx5_devx_obj *qp_obj = rte_zmalloc(__func__, sizeof(*qp_obj),
-						   0);
+	struct mlx5_devx_obj *qp_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						   sizeof(*qp_obj),
+						   0, SOCKET_ID_ANY);
 	void *qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
 
 	if (!qp_obj) {
@@ -1693,7 +1700,7 @@ struct mlx5_devx_obj *
 	if (!qp_obj->obj) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create QP Obj using DevX.");
-		rte_free(qp_obj);
+		mlx5_free(qp_obj);
 		return NULL;
 	}
 	qp_obj->id = MLX5_GET(create_qp_out, out, qpn);
@@ -1789,8 +1796,9 @@ struct mlx5_devx_obj *
 {
 	uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0};
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
-	struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__,
-						       sizeof(*couners_obj), 0);
+	struct mlx5_devx_obj *couners_obj = mlx5_malloc(MLX5_MEM_ZERO,
+						       sizeof(*couners_obj), 0,
+						       SOCKET_ID_ANY);
 	void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
 
 	if (!couners_obj) {
@@ -1808,7 +1816,7 @@ struct mlx5_devx_obj *
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to create virtio queue counters Obj using"
 			" DevX.");
-		rte_free(couners_obj);
+		mlx5_free(couners_obj);
 		return NULL;
 	}
 	couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [dpdk-dev] [PATCH v3 5/7] common/mlx5: convert data path objects to unified malloc
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (3 preceding siblings ...)
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: " Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: convert configuration " Suanming Mou
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the data path doorbell page and B-tree table memory
from the unified malloc function with the explicit MLX5_MEM_RTE flag.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/mlx5_common.c    | 10 ++++++----
 drivers/common/mlx5/mlx5_common_mr.c | 31 +++++++++++++++----------------
 2 files changed, 21 insertions(+), 20 deletions(-)
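
The MLX5_MEM_RTE dispatch this series relies on can be sketched as below. The flag values, the `sys_mem_en` variable, and the dispatch logic are illustrative stand-ins for the driver's actual `mlx5_malloc()` implementation (the real definitions live in `mlx5_malloc.h`), kept self-contained by modeling both allocators with plain `malloc()`:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative flag values; the real ones live in mlx5_malloc.h. */
#define MLX5_MEM_RTE  (1u << 0) /* force allocation from rte memory */
#define MLX5_MEM_ZERO (1u << 1) /* zero the buffer after allocation */

static int sys_mem_en = 1;      /* value of the sys_mem_en devarg */
static int last_from_rte;       /* records which allocator was chosen */

/* Simplified dispatch: with sys_mem_en set and no MLX5_MEM_RTE flag,
 * memory comes from the system allocator; otherwise from rte memory
 * (both modeled here by malloc() so the sketch is self-contained). */
static void *
sketch_mlx5_malloc(unsigned int flags, size_t size)
{
	void *buf;

	last_from_rte = !sys_mem_en || (flags & MLX5_MEM_RTE);
	buf = malloc(size);
	if (buf != NULL && (flags & MLX5_MEM_ZERO))
		memset(buf, 0, size);
	return buf;
}
```

With `sys_mem_en` set, only call sites passing MLX5_MEM_RTE (data path objects) land in rte hugepage memory; everything else falls back to the system allocator.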

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 693e2c6..17168e6 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -13,6 +13,7 @@
 #include "mlx5_common.h"
 #include "mlx5_common_os.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 int mlx5_common_logtype;
 
@@ -169,8 +170,9 @@ static inline void mlx5_cpu_id(unsigned int level,
 	struct mlx5_devx_dbr_page *page;
 
 	/* Allocate space for door-bell page and management data. */
-	page = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_devx_dbr_page),
-				 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	page = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_devx_dbr_page),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!page) {
 		DRV_LOG(ERR, "cannot allocate dbr page");
 		return NULL;
@@ -180,7 +182,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 					      MLX5_DBR_PAGE_SIZE, 0);
 	if (!page->umem) {
 		DRV_LOG(ERR, "cannot umem reg dbr page");
-		rte_free(page);
+		mlx5_free(page);
 		return NULL;
 	}
 	return page;
@@ -261,7 +263,7 @@ static inline void mlx5_cpu_id(unsigned int level,
 		LIST_REMOVE(page, next);
 		if (page->umem)
 			ret = -mlx5_glue->devx_umem_dereg(page->umem);
-		rte_free(page);
+		mlx5_free(page);
 	} else {
 		/* Mark in bitmap that this door-bell is not in use. */
 		offset /= MLX5_DBR_SIZE;
diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 564d618..23324c0 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -12,6 +12,7 @@
 #include "mlx5_common_mp.h"
 #include "mlx5_common_mr.h"
 #include "mlx5_common_utils.h"
+#include "mlx5_malloc.h"
 
 struct mr_find_contig_memsegs_data {
 	uintptr_t addr;
@@ -47,7 +48,8 @@ struct mr_find_contig_memsegs_data {
 	 * Initially cache_bh[] will be given practically enough space and once
 	 * it is expanded, expansion wouldn't be needed again ever.
 	 */
-	mem = rte_realloc(bt->table, n * sizeof(struct mr_cache_entry), 0);
+	mem = mlx5_realloc(bt->table, MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			   n * sizeof(struct mr_cache_entry), 0, SOCKET_ID_ANY);
 	if (mem == NULL) {
 		/* Not an error, B-tree search will be skipped. */
 		DRV_LOG(WARNING, "failed to expand MR B-tree (%p) table",
@@ -180,9 +182,9 @@ struct mr_find_contig_memsegs_data {
 	}
 	MLX5_ASSERT(!bt->table && !bt->size);
 	memset(bt, 0, sizeof(*bt));
-	bt->table = rte_calloc_socket("B-tree table",
-				      n, sizeof(struct mr_cache_entry),
-				      0, socket);
+	bt->table = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				sizeof(struct mr_cache_entry) * n,
+				0, socket);
 	if (bt->table == NULL) {
 		rte_errno = ENOMEM;
 		DEBUG("failed to allocate memory for btree cache on socket %d",
@@ -212,7 +214,7 @@ struct mr_find_contig_memsegs_data {
 		return;
 	DEBUG("freeing B-tree %p with table %p",
 	      (void *)bt, (void *)bt->table);
-	rte_free(bt->table);
+	mlx5_free(bt->table);
 	memset(bt, 0, sizeof(*bt));
 }
 
@@ -443,7 +445,7 @@ struct mlx5_mr *
 	dereg_mr_cb(&mr->pmd_mr);
 	if (mr->ms_bmp != NULL)
 		rte_bitmap_free(mr->ms_bmp);
-	rte_free(mr);
+	mlx5_free(mr);
 }
 
 void
@@ -650,11 +652,9 @@ struct mlx5_mr *
 	      (void *)addr, data.start, data.end, msl->page_sz, ms_n);
 	/* Size of memory for bitmap. */
 	bmp_size = rte_bitmap_get_memory_footprint(ms_n);
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE) +
-				bmp_size,
-				RTE_CACHE_LINE_SIZE, msl->socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE |  MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE) +
+			 bmp_size, RTE_CACHE_LINE_SIZE, msl->socket_id);
 	if (mr == NULL) {
 		DEBUG("Unable to allocate memory for a new MR of"
 		      " address (%p).", (void *)addr);
@@ -1033,10 +1033,9 @@ struct mlx5_mr *
 {
 	struct mlx5_mr *mr = NULL;
 
-	mr = rte_zmalloc_socket(NULL,
-				RTE_ALIGN_CEIL(sizeof(*mr),
-					       RTE_CACHE_LINE_SIZE),
-				RTE_CACHE_LINE_SIZE, socket_id);
+	mr = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			 RTE_ALIGN_CEIL(sizeof(*mr), RTE_CACHE_LINE_SIZE),
+			 RTE_CACHE_LINE_SIZE, socket_id);
 	if (mr == NULL)
 		return NULL;
 	reg_mr_cb(pd, (void *)addr, len, &mr->pmd_mr);
@@ -1044,7 +1043,7 @@ struct mlx5_mr *
 		DRV_LOG(WARNING,
 			"Fail to create MR for address (%p)",
 			(void *)addr);
-		rte_free(mr);
+		mlx5_free(mr);
 		return NULL;
 	}
 	mr->msl = NULL; /* Mark it is external memory. */
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 6/7] net/mlx5: convert configuration objects to unified malloc
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (4 preceding siblings ...)
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 5/7] common/mlx5: convert data path objects " Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
  2020-07-17 17:09   ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Raslan Darawsheh
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the miscellaneous configuration objects from the
unified malloc function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/linux/mlx5_ethdev_os.c |  8 +++++---
 drivers/net/mlx5/linux/mlx5_os.c        | 26 +++++++++++++-------------
 drivers/net/mlx5/mlx5.c                 | 14 +++++++-------
 3 files changed, 25 insertions(+), 23 deletions(-)
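
One detail worth watching in these conversions: `rte_calloc()` takes an element count and an element size, while `mlx5_malloc()` takes a single size, so the multiplication now happens inline at each call site. An overflow-checked helper is one way to keep such conversions safe; the helper below is illustrative only, not part of this series:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative helper, not part of this series: compute n * elt_size
 * for an allocation request, failing instead of wrapping around. */
static int
alloc_size_checked(size_t n, size_t elt_size, size_t *out)
{
	if (elt_size != 0 && n > SIZE_MAX / elt_size)
		return -1; /* multiplication would overflow */
	*out = n * elt_size;
	return 0;
}
```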

diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index 701614a..6b8a151 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -38,6 +38,7 @@
 #include <mlx5_glue.h>
 #include <mlx5_devx_cmds.h>
 #include <mlx5_common.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
@@ -1162,8 +1163,9 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		return -rte_errno;
 	}
-	eeprom = rte_calloc(__func__, 1,
-			    (sizeof(struct ethtool_eeprom) + info->length), 0);
+	eeprom = mlx5_malloc(MLX5_MEM_ZERO,
+			     (sizeof(struct ethtool_eeprom) + info->length), 0,
+			     SOCKET_ID_ANY);
 	if (!eeprom) {
 		DRV_LOG(WARNING, "port %u cannot allocate memory for "
 			"eeprom data", dev->data->port_id);
@@ -1182,6 +1184,6 @@ int mlx5_get_module_eeprom(struct rte_eth_dev *dev,
 			dev->data->port_id, strerror(rte_errno));
 	else
 		rte_memcpy(info->data, eeprom->data, info->length);
-	rte_free(eeprom);
+	mlx5_free(eeprom);
 	return ret;
 }
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index df0fae9..742e2fb 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -163,7 +163,7 @@
 		socket = ctrl->socket;
 	}
 	MLX5_ASSERT(data != NULL);
-	ret = rte_malloc_socket(__func__, size, alignment, socket);
+	ret = mlx5_malloc(0, size, alignment, socket);
 	if (!ret && size)
 		rte_errno = ENOMEM;
 	return ret;
@@ -181,7 +181,7 @@
 mlx5_free_verbs_buf(void *ptr, void *data __rte_unused)
 {
 	MLX5_ASSERT(data != NULL);
-	rte_free(ptr);
+	mlx5_free(ptr);
 }
 
 /**
@@ -618,9 +618,9 @@
 			mlx5_glue->port_state_str(port_attr.state),
 			port_attr.state);
 	/* Allocate private eth device data. */
-	priv = rte_zmalloc("ethdev private structure",
+	priv = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			   sizeof(*priv),
-			   RTE_CACHE_LINE_SIZE);
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (priv == NULL) {
 		DRV_LOG(ERR, "priv allocation failure");
 		err = ENOMEM;
@@ -1187,7 +1187,7 @@
 			mlx5_flow_id_pool_release(priv->qrss_id_pool);
 		if (own_domain_id)
 			claim_zero(rte_eth_switch_domain_free(priv->domain_id));
-		rte_free(priv);
+		mlx5_free(priv);
 		if (eth_dev != NULL)
 			eth_dev->data->dev_private = NULL;
 	}
@@ -1506,10 +1506,10 @@
 	 * Now we can determine the maximal
 	 * amount of devices to be spawned.
 	 */
-	list = rte_zmalloc("device spawn data",
-			 sizeof(struct mlx5_dev_spawn_data) *
-			 (np ? np : nd),
-			 RTE_CACHE_LINE_SIZE);
+	list = mlx5_malloc(MLX5_MEM_ZERO,
+			   sizeof(struct mlx5_dev_spawn_data) *
+			   (np ? np : nd),
+			   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!list) {
 		DRV_LOG(ERR, "spawn data array allocation failure");
 		rte_errno = ENOMEM;
@@ -1800,7 +1800,7 @@
 	if (nl_route >= 0)
 		close(nl_route);
 	if (list)
-		rte_free(list);
+		mlx5_free(list);
 	MLX5_ASSERT(ibv_list);
 	mlx5_glue->free_device_list(ibv_list);
 	return ret;
@@ -2281,8 +2281,8 @@
 	/* Allocate memory to grab stat names and values. */
 	str_sz = dev_stats_n * ETH_GSTRING_LEN;
 	strings = (struct ethtool_gstrings *)
-		  rte_malloc("xstats_strings",
-			     str_sz + sizeof(struct ethtool_gstrings), 0);
+		  mlx5_malloc(0, str_sz + sizeof(struct ethtool_gstrings), 0,
+			      SOCKET_ID_ANY);
 	if (!strings) {
 		DRV_LOG(WARNING, "port %u unable to allocate memory for xstats",
 		     dev->data->port_id);
@@ -2332,7 +2332,7 @@
 	mlx5_os_read_dev_stat(priv, "out_of_buffer", &stats_ctrl->imissed_base);
 	stats_ctrl->imissed = 0;
 free:
-	rte_free(strings);
+	mlx5_free(strings);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 3390869..0df6490 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -762,11 +762,11 @@ struct mlx5_dev_ctx_shared *
 	}
 	/* No device found, we have to create new shared context. */
 	MLX5_ASSERT(spawn->max_port);
-	sh = rte_zmalloc("ethdev shared ib context",
+	sh = mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
 			 sizeof(struct mlx5_dev_ctx_shared) +
 			 spawn->max_port *
 			 sizeof(struct mlx5_dev_shared_port),
-			 RTE_CACHE_LINE_SIZE);
+			 RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
 	if (!sh) {
 		DRV_LOG(ERR, "shared context allocation failure");
 		rte_errno  = ENOMEM;
@@ -899,7 +899,7 @@ struct mlx5_dev_ctx_shared *
 		claim_zero(mlx5_glue->close_device(sh->ctx));
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
-	rte_free(sh);
+	mlx5_free(sh);
 	MLX5_ASSERT(err > 0);
 	rte_errno = err;
 	return NULL;
@@ -969,7 +969,7 @@ struct mlx5_dev_ctx_shared *
 	if (sh->flow_id_pool)
 		mlx5_flow_id_pool_release(sh->flow_id_pool);
 	pthread_mutex_destroy(&sh->txpp.mutex);
-	rte_free(sh);
+	mlx5_free(sh);
 exit:
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 }
@@ -1229,8 +1229,8 @@ struct mlx5_dev_ctx_shared *
 	 */
 	ppriv_size =
 		sizeof(struct mlx5_proc_priv) + priv->txqs_n * sizeof(void *);
-	ppriv = rte_malloc_socket("mlx5_proc_priv", ppriv_size,
-				  RTE_CACHE_LINE_SIZE, dev->device->numa_node);
+	ppriv = mlx5_malloc(MLX5_MEM_RTE, ppriv_size, RTE_CACHE_LINE_SIZE,
+			    dev->device->numa_node);
 	if (!ppriv) {
 		rte_errno = ENOMEM;
 		return -rte_errno;
@@ -1251,7 +1251,7 @@ struct mlx5_dev_ctx_shared *
 {
 	if (!dev->process_private)
 		return;
-	rte_free(dev->process_private);
+	mlx5_free(dev->process_private);
 	dev->process_private = NULL;
 }
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH v3 7/7] net/mlx5: convert Rx/Tx queue objects to unified malloc
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (5 preceding siblings ...)
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: convert configuration " Suanming Mou
@ 2020-07-17 13:51   ` Suanming Mou
  2020-07-17 17:09   ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Raslan Darawsheh
  7 siblings, 0 replies; 25+ messages in thread
From: Suanming Mou @ 2020-07-17 13:51 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: orika, rasland, dev

This commit allocates the Rx/Tx queue objects from the unified malloc
function.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c  | 37 ++++++++++----------
 drivers/net/mlx5/mlx5_txpp.c | 30 ++++++++--------
 drivers/net/mlx5/mlx5_txq.c  | 82 +++++++++++++++++++++-----------------------
 3 files changed, 73 insertions(+), 76 deletions(-)
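
The queue buffers converted here (CQEs, WQEs) carry explicit alignment requirements such as MLX5_CQE_BUF_ALIGNMENT or the page size. When the unified allocator serves such a request from system memory, plain `malloc()` cannot honor the align argument, so an aligned path is needed; the sketch below shows one possible POSIX approach, with an illustrative function name rather than the driver's actual helper:

```c
#include <stdlib.h>
#include <stdint.h>

/* Illustrative sketch of an aligned system-memory path for the
 * unified allocator; not the driver's actual implementation. */
static void *
sys_malloc_aligned(size_t size, size_t align)
{
	void *buf = NULL;

	if (align == 0)
		return malloc(size);
	/* posix_memalign() requires a power-of-two alignment that is
	 * also a multiple of sizeof(void *). */
	if (align < sizeof(void *))
		align = sizeof(void *);
	if (posix_memalign(&buf, align, size) != 0)
		return NULL;
	return buf;
}
```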

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e8214d4..67d996c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -641,7 +641,7 @@
 rxq_release_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	if (rxq_ctrl->rxq.wqes) {
-		rte_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+		mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
 		rxq_ctrl->rxq.wqes = NULL;
 	}
 	if (rxq_ctrl->wq_umem) {
@@ -707,7 +707,7 @@
 			claim_zero(mlx5_glue->destroy_comp_channel
 				   (rxq_obj->channel));
 		LIST_REMOVE(rxq_obj, next);
-		rte_free(rxq_obj);
+		mlx5_free(rxq_obj);
 		return 0;
 	}
 	return 1;
@@ -1233,15 +1233,15 @@
 	/* Calculate and allocate WQ memory space. */
 	wqe_size = 1 << log_wqe_size; /* round up power of two.*/
 	wq_size = wqe_n * wqe_size;
-	buf = rte_calloc_socket(__func__, 1, wq_size, MLX5_WQE_BUF_ALIGNMENT,
-				rxq_ctrl->socket);
+	buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, wq_size,
+			  MLX5_WQE_BUF_ALIGNMENT, rxq_ctrl->socket);
 	if (!buf)
 		return NULL;
 	rxq_data->wqes = buf;
 	rxq_ctrl->wq_umem = mlx5_glue->devx_umem_reg(priv->sh->ctx,
 						     buf, wq_size, 0);
 	if (!rxq_ctrl->wq_umem) {
-		rte_free(buf);
+		mlx5_free(buf);
 		return NULL;
 	}
 	mlx5_devx_wq_attr_fill(priv, rxq_ctrl, &rq_attr.wq_attr);
@@ -1275,8 +1275,8 @@
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(!rxq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1294,7 +1294,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -1314,7 +1314,7 @@
 		DRV_LOG(ERR,
 			"port %u Rx hairpin queue %u can't create rq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -1362,8 +1362,8 @@ struct mlx5_rxq_obj *
 		return mlx5_rxq_obj_hairpin_new(dev, idx);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_RX_QUEUE;
 	priv->verbs_alloc_ctx.obj = rxq_ctrl;
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 rxq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   rxq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Rx queue %u cannot allocate verbs resources",
@@ -1503,7 +1503,7 @@ struct mlx5_rxq_obj *
 		if (tmpl->channel)
 			claim_zero(mlx5_glue->destroy_comp_channel
 							(tmpl->channel));
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = ret; /* Restore rte_errno. */
 	}
 	if (type == MLX5_RXQ_OBJ_TYPE_DEVX_RQ)
@@ -1825,10 +1825,8 @@ struct mlx5_rxq_ctrl *
 		rte_errno = ENOSPC;
 		return NULL;
 	}
-	tmpl = rte_calloc_socket("RXQ", 1,
-				 sizeof(*tmpl) +
-				 desc_n * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc_n * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2007,7 +2005,7 @@ struct mlx5_rxq_ctrl *
 	LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -2033,7 +2031,8 @@ struct mlx5_rxq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_rxq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("RXQ", 1, sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -2112,7 +2111,7 @@ struct mlx5_rxq_ctrl *
 		if (rxq_ctrl->type == MLX5_RXQ_TYPE_STANDARD)
 			mlx5_mr_btree_free(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
 		LIST_REMOVE(rxq_ctrl, next);
-		rte_free(rxq_ctrl);
+		mlx5_free(rxq_ctrl);
 		(*priv->rxqs)[idx] = NULL;
 		return 0;
 	}
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 15c9a8e..77c1866 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -11,6 +11,8 @@
 #include <rte_malloc.h>
 #include <rte_cycles.h>
 
+#include <mlx5_malloc.h>
+
 #include "mlx5.h"
 #include "mlx5_rxtx.h"
 #include "mlx5_common_os.h"
@@ -134,13 +136,13 @@
 	if (wq->sq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(wq->sq_umem));
 	if (wq->sq_buf)
-		rte_free((void *)(uintptr_t)wq->sq_buf);
+		mlx5_free((void *)(uintptr_t)wq->sq_buf);
 	if (wq->cq)
 		claim_zero(mlx5_devx_cmd_destroy(wq->cq));
 	if (wq->cq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(wq->cq_umem));
 	if (wq->cq_buf)
-		rte_free((void *)(uintptr_t)wq->cq_buf);
+		mlx5_free((void *)(uintptr_t)wq->cq_buf);
 	memset(wq, 0, sizeof(*wq));
 }
 
@@ -159,7 +161,7 @@
 
 	mlx5_txpp_destroy_send_queue(wq);
 	if (sh->txpp.tsa) {
-		rte_free(sh->txpp.tsa);
+		mlx5_free(sh->txpp.tsa);
 		sh->txpp.tsa = NULL;
 	}
 }
@@ -255,8 +257,8 @@
 	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_REARM_CQ_SIZE;
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = rte_zmalloc_socket(__func__, umem_size,
-					page_size, sh->numa_node);
+	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+				 page_size, sh->numa_node);
 	if (!wq->cq_buf) {
 		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
 		return -ENOMEM;
@@ -304,8 +306,8 @@
 	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = rte_zmalloc_socket(__func__, umem_size,
-					page_size, sh->numa_node);
+	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+				 page_size, sh->numa_node);
 	if (!wq->sq_buf) {
 		DRV_LOG(ERR, "Failed to allocate memory for Rearm Queue.");
 		rte_errno = ENOMEM;
@@ -474,10 +476,10 @@
 	uint32_t umem_size, umem_dbrec;
 	int ret;
 
-	sh->txpp.tsa = rte_zmalloc_socket(__func__,
-					   MLX5_TXPP_REARM_SQ_SIZE *
-					   sizeof(struct mlx5_txpp_ts),
-					   0, sh->numa_node);
+	sh->txpp.tsa = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				   MLX5_TXPP_REARM_SQ_SIZE *
+				   sizeof(struct mlx5_txpp_ts),
+				   0, sh->numa_node);
 	if (!sh->txpp.tsa) {
 		DRV_LOG(ERR, "Failed to allocate memory for CQ stats.");
 		return -ENOMEM;
@@ -488,7 +490,7 @@
 	umem_size = sizeof(struct mlx5_cqe) * MLX5_TXPP_CLKQ_SIZE;
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
-	wq->cq_buf = rte_zmalloc_socket(__func__, umem_size,
+	wq->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
 					page_size, sh->numa_node);
 	if (!wq->cq_buf) {
 		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
@@ -543,8 +545,8 @@
 	umem_size =  MLX5_WQE_SIZE * wq->sq_size;
 	umem_dbrec = RTE_ALIGN(umem_size, MLX5_DBR_SIZE);
 	umem_size += MLX5_DBR_SIZE;
-	wq->sq_buf = rte_zmalloc_socket(__func__, umem_size,
-					page_size, sh->numa_node);
+	wq->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, umem_size,
+				 page_size, sh->numa_node);
 	if (!wq->sq_buf) {
 		DRV_LOG(ERR, "Failed to allocate memory for Clock Queue.");
 		rte_errno = ENOMEM;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4ab6ac1..4a73299 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -32,6 +32,7 @@
 #include <mlx5_common.h>
 #include <mlx5_common_mr.h>
 #include <mlx5_common_os.h>
+#include <mlx5_malloc.h>
 
 #include "mlx5_defs.h"
 #include "mlx5_utils.h"
@@ -524,8 +525,8 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(!txq_ctrl->obj);
-	tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0,
-				 txq_ctrl->socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   txq_ctrl->socket);
 	if (!tmpl) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory resources",
@@ -544,7 +545,7 @@
 			DRV_LOG(ERR, "total data size %u power of 2 is "
 				"too large for hairpin",
 				priv->config.log_hp_size);
-			rte_free(tmpl);
+			mlx5_free(tmpl);
 			rte_errno = ERANGE;
 			return NULL;
 		}
@@ -564,7 +565,7 @@
 		DRV_LOG(ERR,
 			"port %u tx hairpin queue %u can't create sq object",
 			dev->data->port_id, idx);
-		rte_free(tmpl);
+		mlx5_free(tmpl);
 		rte_errno = errno;
 		return NULL;
 	}
@@ -597,7 +598,7 @@
 	if (txq_obj->sq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->sq_umem));
 	if (txq_obj->sq_buf)
-		rte_free(txq_obj->sq_buf);
+		mlx5_free(txq_obj->sq_buf);
 	if (txq_obj->cq_devx)
 		claim_zero(mlx5_devx_cmd_destroy(txq_obj->cq_devx));
 	if (txq_obj->cq_dbrec_page)
@@ -609,7 +610,7 @@
 	if (txq_obj->cq_umem)
 		claim_zero(mlx5_glue->devx_umem_dereg(txq_obj->cq_umem));
 	if (txq_obj->cq_buf)
-		rte_free(txq_obj->cq_buf);
+		mlx5_free(txq_obj->cq_buf);
 }
 
 /**
@@ -648,9 +649,9 @@
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(!txq_ctrl->obj);
-	txq_obj = rte_calloc_socket(__func__, 1,
-				    sizeof(struct mlx5_txq_obj), 0,
-				    txq_ctrl->socket);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
 	if (!txq_obj) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory resources",
@@ -673,10 +674,10 @@
 		goto error;
 	}
 	/* Allocate memory buffer for CQEs. */
-	txq_obj->cq_buf = rte_zmalloc_socket(__func__,
-					     nqe * sizeof(struct mlx5_cqe),
-					     MLX5_CQE_BUF_ALIGNMENT,
-					     sh->numa_node);
+	txq_obj->cq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				      nqe * sizeof(struct mlx5_cqe),
+				      MLX5_CQE_BUF_ALIGNMENT,
+				      sh->numa_node);
 	if (!txq_obj->cq_buf) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory (CQ)",
@@ -741,10 +742,9 @@
 	/* Create the Work Queue. */
 	nqe = RTE_MIN(1UL << txq_data->elts_n,
 		      (uint32_t)sh->device_attr.max_qp_wr);
-	txq_obj->sq_buf = rte_zmalloc_socket(__func__,
-					     nqe * sizeof(struct mlx5_wqe),
-					     page_size,
-					     sh->numa_node);
+	txq_obj->sq_buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				      nqe * sizeof(struct mlx5_wqe),
+				      page_size, sh->numa_node);
 	if (!txq_obj->sq_buf) {
 		DRV_LOG(ERR,
 			"port %u Tx queue %u cannot allocate memory (SQ)",
@@ -825,11 +825,10 @@
 			dev->data->port_id, idx);
 		goto error;
 	}
-	txq_data->fcqs = rte_calloc_socket(__func__,
-					   txq_data->cqe_s,
-					   sizeof(*txq_data->fcqs),
-					   RTE_CACHE_LINE_SIZE,
-					   txq_ctrl->socket);
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE,
+				     txq_ctrl->socket);
 	if (!txq_data->fcqs) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
 			dev->data->port_id, idx);
@@ -857,10 +856,10 @@
 	ret = rte_errno; /* Save rte_errno before cleanup. */
 	txq_release_sq_resources(txq_obj);
 	if (txq_data->fcqs) {
-		rte_free(txq_data->fcqs);
+		mlx5_free(txq_data->fcqs);
 		txq_data->fcqs = NULL;
 	}
-	rte_free(txq_obj);
+	mlx5_free(txq_obj);
 	rte_errno = ret; /* Restore rte_errno. */
 	return NULL;
 #endif
@@ -1011,8 +1010,9 @@ struct mlx5_txq_obj *
 		rte_errno = errno;
 		goto error;
 	}
-	txq_obj = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_obj), 0,
-				    txq_ctrl->socket);
+	txq_obj = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+			      sizeof(struct mlx5_txq_obj), 0,
+			      txq_ctrl->socket);
 	if (!txq_obj) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory",
 			dev->data->port_id, idx);
@@ -1054,11 +1054,9 @@ struct mlx5_txq_obj *
 	txq_data->wqe_pi = 0;
 	txq_data->wqe_comp = 0;
 	txq_data->wqe_thres = txq_data->wqe_s / MLX5_TX_COMP_THRESH_INLINE_DIV;
-	txq_data->fcqs = rte_calloc_socket(__func__,
-					   txq_data->cqe_s,
-					   sizeof(*txq_data->fcqs),
-					   RTE_CACHE_LINE_SIZE,
-					   txq_ctrl->socket);
+	txq_data->fcqs = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
+				     txq_data->cqe_s * sizeof(*txq_data->fcqs),
+				     RTE_CACHE_LINE_SIZE, txq_ctrl->socket);
 	if (!txq_data->fcqs) {
 		DRV_LOG(ERR, "port %u Tx queue %u cannot allocate memory (FCQ)",
 			dev->data->port_id, idx);
@@ -1114,11 +1112,11 @@ struct mlx5_txq_obj *
 	if (tmpl.qp)
 		claim_zero(mlx5_glue->destroy_qp(tmpl.qp));
 	if (txq_data && txq_data->fcqs) {
-		rte_free(txq_data->fcqs);
+		mlx5_free(txq_data->fcqs);
 		txq_data->fcqs = NULL;
 	}
 	if (txq_obj)
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	rte_errno = ret; /* Restore rte_errno. */
 	return NULL;
@@ -1175,11 +1173,11 @@ struct mlx5_txq_obj *
 			claim_zero(mlx5_glue->destroy_cq(txq_obj->cq));
 		}
 		if (txq_obj->txq_ctrl->txq.fcqs) {
-			rte_free(txq_obj->txq_ctrl->txq.fcqs);
+			mlx5_free(txq_obj->txq_ctrl->txq.fcqs);
 			txq_obj->txq_ctrl->txq.fcqs = NULL;
 		}
 		LIST_REMOVE(txq_obj, next);
-		rte_free(txq_obj);
+		mlx5_free(txq_obj);
 		return 0;
 	}
 	return 1;
@@ -1595,10 +1593,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl) +
-				 desc * sizeof(struct rte_mbuf *),
-				 0, socket);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl) +
+			   desc * sizeof(struct rte_mbuf *), 0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1638,7 +1634,7 @@ struct mlx5_txq_ctrl *
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 error:
-	rte_free(tmpl);
+	mlx5_free(tmpl);
 	return NULL;
 }
 
@@ -1664,8 +1660,8 @@ struct mlx5_txq_ctrl *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_txq_ctrl *tmpl;
 
-	tmpl = rte_calloc_socket("TXQ", 1,
-				 sizeof(*tmpl), 0, SOCKET_ID_ANY);
+	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, sizeof(*tmpl), 0,
+			   SOCKET_ID_ANY);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
 		return NULL;
@@ -1734,7 +1730,7 @@ struct mlx5_txq_ctrl *
 		txq_free_elts(txq);
 		mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
 		LIST_REMOVE(txq, next);
-		rte_free(txq);
+		mlx5_free(txq);
 		(*priv->txqs)[idx] = NULL;
 		return 0;
 	}
-- 
1.8.3.1
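The conversions above all funnel through the `mlx5_malloc(flags, size, align, socket)` entry point introduced in patch 1/7. Below is a minimal standalone sketch of the dispatch behaviour the cover letter describes: memory comes from the system unless the caller passes `MLX5_MEM_RTE` (or `sys_mem_en` is off). The real implementation lives in drivers/common/mlx5/mlx5_malloc.c; here the rte hugepage path is stubbed with `posix_memalign()` so the sketch compiles on its own, and the flag values are illustrative, not the driver's.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MLX5_MEM_SYS  (1u << 0) /* force system memory (illustrative value) */
#define MLX5_MEM_RTE  (1u << 1) /* force rte hugepage memory */
#define MLX5_MEM_ZERO (1u << 2) /* zero the allocation */

static int sys_mem_en; /* latched once from the first device's devargs */

/* Aligned allocation helper; posix_memalign() needs align to be at least
 * sizeof(void *) and a power of two. */
static void *alloc_aligned(size_t size, size_t align)
{
	void *buf = NULL;

	if (align < sizeof(void *))
		align = sizeof(void *);
	return posix_memalign(&buf, align, size) ? NULL : buf;
}

static void *sketch_mlx5_malloc(uint32_t flags, size_t size, size_t align,
				int socket)
{
	/* System memory only when enabled and not explicitly overridden. */
	int use_sys = (flags & MLX5_MEM_SYS) ||
		      (sys_mem_en && !(flags & MLX5_MEM_RTE));
	void *buf;

	(void)socket; /* the real rte path honours the NUMA socket */
	/* Both branches collapse to libc in this sketch; in the driver the
	 * !use_sys branch would call rte_malloc_socket() instead. */
	buf = (use_sys && !align) ? malloc(size) : alloc_aligned(size, align);
	if (buf && (flags & MLX5_MEM_ZERO))
		memset(buf, 0, size);
	return buf;
}
```

With `sys_mem_en` enabled, a call such as `sketch_mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, size, 0, socket)` mirrors the converted call sites in the patch: the explicit flag keeps datapath objects on rte memory while everything else falls back to the system allocator.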



* Re: [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg
  2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
                     ` (6 preceding siblings ...)
  2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
@ 2020-07-17 17:09   ` Raslan Darawsheh
  7 siblings, 0 replies; 25+ messages in thread
From: Raslan Darawsheh @ 2020-07-17 17:09 UTC (permalink / raw)
  To: Suanming Mou, Slava Ovsiienko, Matan Azrad; +Cc: Ori Kam, dev

Hi

> -----Original Message-----
> From: Suanming Mou <suanmingm@mellanox.com>
> Sent: Friday, July 17, 2020 4:51 PM
> To: Slava Ovsiienko <viacheslavo@mellanox.com>; Matan Azrad
> <matan@mellanox.com>
> Cc: Ori Kam <orika@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; dev@dpdk.org
> Subject: [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg
> 
> Currently, once millions of flows are created, the memory consumption
> of the MLX5 PMD becomes very large. On a system with limited memory,
> most of the memory must be reserved in advance as huge page memory to
> serve the flows, and other applications then have no chance to use the
> reserved memory. Since the system does not hold large numbers of flows
> most of the time, the reserved huge page memory is largely wasted.
> 
> With the new sys_mem_en devarg set to true, the PMD allocates memory
> from the system by default through the new mlx5 memory management
> functions. Only when the MLX5_MEM_RTE flag is set is the memory
> allocated from rte; otherwise it is allocated from the system.
> 
> In this case, a system with limited memory no longer needs to reserve
> most of its memory for hugepages. Reserving only the memory needed for
> the datapath objects, which are allocated with the explicit flag, is
> enough; other memory is allocated from the system. A system with
> plenty of memory need not care about the devarg: its memory will
> always come from rte hugepages.
> 
> One restriction: for a DPDK application with multiple PCI devices, if
> the sys_mem_en devargs differ between the devices, sys_mem_en takes
> its value from the first device's devargs only, and a warning message
> is printed.
> 
> ---
> 
> v3:
>  - Rebase on top of latest code.
> 
> v2:
>  - Add memory function call statistics.
>  - Change msl to atomic.
> 
> ---
> 
> Suanming Mou (7):
>   common/mlx5: add mlx5 memory management functions
>   net/mlx5: add allocate memory from system devarg
>   net/mlx5: convert control path memory to unified malloc
>   common/mlx5: convert control path memory to unified malloc
>   common/mlx5: convert data path objects to unified malloc
>   net/mlx5: convert configuration objects to unified malloc
>   net/mlx5: convert Rx/Tx queue objects to unified malloc
> 
>  doc/guides/nics/mlx5.rst                        |   7 +
>  drivers/common/mlx5/Makefile                    |   1 +
>  drivers/common/mlx5/linux/mlx5_glue.c           |  13 +-
>  drivers/common/mlx5/linux/mlx5_nl.c             |   5 +-
>  drivers/common/mlx5/meson.build                 |   1 +
>  drivers/common/mlx5/mlx5_common.c               |  10 +-
>  drivers/common/mlx5/mlx5_common_mp.c            |   7 +-
>  drivers/common/mlx5/mlx5_common_mr.c            |  31 ++-
>  drivers/common/mlx5/mlx5_devx_cmds.c            |  82 ++++---
>  drivers/common/mlx5/mlx5_malloc.c               | 306
> ++++++++++++++++++++++++
>  drivers/common/mlx5/mlx5_malloc.h               |  99 ++++++++
>  drivers/common/mlx5/rte_common_mlx5_version.map |   6 +
>  drivers/net/mlx5/linux/mlx5_ethdev_os.c         |   8 +-
>  drivers/net/mlx5/linux/mlx5_os.c                |  28 ++-
>  drivers/net/mlx5/mlx5.c                         | 108 +++++----
>  drivers/net/mlx5/mlx5.h                         |   1 +
>  drivers/net/mlx5/mlx5_ethdev.c                  |  15 +-
>  drivers/net/mlx5/mlx5_flow.c                    |  45 ++--
>  drivers/net/mlx5/mlx5_flow_dv.c                 |  46 ++--
>  drivers/net/mlx5/mlx5_flow_meter.c              |  11 +-
>  drivers/net/mlx5/mlx5_flow_verbs.c              |   8 +-
>  drivers/net/mlx5/mlx5_mp.c                      |   3 +-
>  drivers/net/mlx5/mlx5_rss.c                     |  13 +-
>  drivers/net/mlx5/mlx5_rxq.c                     |  74 +++---
>  drivers/net/mlx5/mlx5_txpp.c                    |  30 +--
>  drivers/net/mlx5/mlx5_txq.c                     |  82 +++----
>  drivers/net/mlx5/mlx5_utils.c                   |  60 +++--
>  drivers/net/mlx5/mlx5_utils.h                   |   2 +-
>  drivers/net/mlx5/mlx5_vlan.c                    |   8 +-
>  29 files changed, 797 insertions(+), 313 deletions(-)
>  create mode 100644 drivers/common/mlx5/mlx5_malloc.c
>  create mode 100644 drivers/common/mlx5/mlx5_malloc.h
> 
> --
> 1.8.3.1

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
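The cover letter's multi-device restriction, where sys_mem_en is taken from the first device's devargs and conflicting later devices only produce a warning, can be sketched as follows. Function and variable names are illustrative, not the driver's.

```c
#include <stdio.h>

static int sys_mem_en = -1; /* -1: not yet set by any device */

/* Called once per probed device with that device's devarg value. */
static void sketch_set_sys_mem_en(int devarg_value)
{
	if (sys_mem_en < 0)
		sys_mem_en = !!devarg_value; /* first device wins */
	else if (sys_mem_en != !!devarg_value)
		fprintf(stderr,
			"sys_mem_en mismatch, keeping %d from first device\n",
			sys_mem_en);
}
```

Probing a second device with a different devarg value leaves the latched setting untouched; only the warning is emitted.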


end of thread, other threads:[~2020-07-17 17:09 UTC | newest]

Thread overview: 25+ messages
2020-07-15  3:59 [dpdk-dev] [PATCH 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
2020-07-15  3:59 ` [dpdk-dev] [PATCH 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
2020-07-15  3:59 ` [dpdk-dev] [PATCH 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
2020-07-15  3:59 ` [dpdk-dev] [PATCH 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
2020-07-15  4:00 ` [dpdk-dev] [PATCH 4/7] common/mlx5: " Suanming Mou
2020-07-15  4:00 ` [dpdk-dev] [PATCH 5/7] common/mlx5: convert data path objects " Suanming Mou
2020-07-15  4:00 ` [dpdk-dev] [PATCH 6/7] net/mlx5: convert configuration " Suanming Mou
2020-07-15  4:00 ` [dpdk-dev] [PATCH 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
2020-07-16  9:20 ` [dpdk-dev] [PATCH v2 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: " Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 5/7] common/mlx5: convert data path objects " Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: convert configuration " Suanming Mou
2020-07-16  9:20   ` [dpdk-dev] [PATCH v2 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
2020-07-17 13:50 ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Suanming Mou
2020-07-17 13:50   ` [dpdk-dev] [PATCH v3 1/7] common/mlx5: add mlx5 memory management functions Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add allocate memory from system devarg Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: convert control path memory to unified malloc Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: " Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 5/7] common/mlx5: convert data path objects " Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: convert configuration " Suanming Mou
2020-07-17 13:51   ` [dpdk-dev] [PATCH v3 7/7] net/mlx5: convert Rx/Tx queue " Suanming Mou
2020-07-17 17:09   ` [dpdk-dev] [PATCH v3 0/7] net/mlx5: add sys_mem_en devarg Raslan Darawsheh
