DPDK patches and discussions
From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>, dev@dpdk.org
Cc: Matan Azrad <matan@nvidia.com>,
	Olivier Matz <olivier.matz@6wind.com>,
	Ray Kinsella <mdr@ashroe.eu>,
	Anatoly Burakov <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/4] mempool: add event callbacks
Date: Fri, 15 Oct 2021 11:52:36 +0300
Message-ID: <1dd0b17c-0009-46e7-0f14-c88ce2b58540@oktetlabs.ru>
In-Reply-To: <20211013110131.2909604-2-dkozlyuk@nvidia.com>

On 10/13/21 2:01 PM, Dmitry Kozlyuk wrote:
> Data path performance can benefit if the PMD knows which memory it will
> need to handle in advance, before the first mbuf is sent to the PMD.
> It is impractical, however, to consider all allocated memory for this
> purpose. Most often mbuf memory comes from mempools that can come and
> go. PMD can enumerate existing mempools on device start, but it also
> needs to track creation and destruction of mempools after the forwarding
> starts but before an mbuf from the new mempool is sent to the device.
> 
> Add an API to register callback for mempool life cycle events:
> * rte_mempool_event_callback_register()
> * rte_mempool_event_callback_unregister()
> Currently tracked events are:
> * RTE_MEMPOOL_EVENT_READY (after populating a mempool)
> * RTE_MEMPOOL_EVENT_DESTROY (before freeing a mempool)
> Provide a unit test for the new API.
> The new API is internal, because it is primarily demanded by PMDs that
> may need to deal with any mempools and do not control their creation,
> while an application, on the other hand, knows which mempools it creates
> and doesn't care about internal mempools PMDs might create.
> 
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
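
Just to check my understanding of the intended usage on the PMD side,
I read it roughly as follows (only a sketch, all my_pmd_* names and
helpers are hypothetical, not part of this patch):

	static void
	my_pmd_mempool_event_cb(enum rte_mempool_event event,
				struct rte_mempool *mp, void *user_data)
	{
		struct my_pmd_priv *priv = user_data;

		switch (event) {
		case RTE_MEMPOOL_EVENT_READY:
			/* Pool is fully populated: map its memory for DMA. */
			my_pmd_dma_map_mempool(priv, mp);
			break;
		case RTE_MEMPOOL_EVENT_DESTROY:
			/* Pool is about to be freed: drop its DMA mappings. */
			my_pmd_dma_unmap_mempool(priv, mp);
			break;
		}
	}

	static int
	my_pmd_dev_start(struct my_pmd_priv *priv)
	{
		/*
		 * Existing pools can be found with rte_mempool_walk();
		 * pools created or destroyed later are caught by the callback.
		 */
		return rte_mempool_event_callback_register(
				my_pmd_mempool_event_cb, priv);
	}

	static void
	my_pmd_dev_stop(struct my_pmd_priv *priv)
	{
		rte_mempool_event_callback_unregister(
				my_pmd_mempool_event_cb, priv);
	}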

With the review notes below processed

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

[snip]

> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index c5f859ae71..51c0ba2931 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c

[snip]

> @@ -360,6 +372,10 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>  	STAILQ_INSERT_TAIL(&mp->mem_list, memhdr, next);
>  	mp->nb_mem_chunks++;
>  
> +	/* Report the mempool as ready only when fully populated. */
> +	if (mp->populated_size >= mp->size)
> +		mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_READY, mp);
> +
>  	rte_mempool_trace_populate_iova(mp, vaddr, iova, len, free_cb, opaque);
>  	return i;
>  
> @@ -722,6 +738,7 @@ rte_mempool_free(struct rte_mempool *mp)
>  	}
>  	rte_mcfg_tailq_write_unlock();
>  
> +	mempool_event_callback_invoke(RTE_MEMPOOL_EVENT_DESTROY, mp);
>  	rte_mempool_trace_free(mp);
>  	rte_mempool_free_memchunks(mp);
>  	rte_mempool_ops_free(mp);
> @@ -1343,3 +1360,123 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
>  
>  	rte_mcfg_mempool_read_unlock();
>  }
> +
> +struct mempool_callback {

The name sounds like the structure is a mempool callback itself.
Consider: mempool_event_callback_data.
I think this way it will be more consistent.

> +	rte_mempool_event_callback *func;
> +	void *user_data;
> +};
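
I.e. (just to illustrate the suggested naming):

	struct mempool_event_callback_data {
		rte_mempool_event_callback *func;
		void *user_data;
	};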
> +
> +static void
> +mempool_event_callback_invoke(enum rte_mempool_event event,
> +			      struct rte_mempool *mp)
> +{
> +	struct mempool_callback_list *list;
> +	struct rte_tailq_entry *te;
> +	void *tmp_te;
> +
> +	rte_mcfg_tailq_read_lock();
> +	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
> +	RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
> +		struct mempool_callback *cb = te->data;
> +		rte_mcfg_tailq_read_unlock();
> +		cb->func(event, mp, cb->user_data);
> +		rte_mcfg_tailq_read_lock();
> +	}
> +	rte_mcfg_tailq_read_unlock();
> +}
> +
> +int
> +rte_mempool_event_callback_register(rte_mempool_event_callback *func,
> +				    void *user_data)
> +{
> +	struct mempool_callback_list *list;
> +	struct rte_tailq_entry *te = NULL;
> +	struct mempool_callback *cb;
> +	void *tmp_te;
> +	int ret;
> +
> +	if (func == NULL) {
> +		rte_errno = EINVAL;
> +		return -rte_errno;
> +	}
> +
> +	rte_mcfg_mempool_read_lock();
> +	rte_mcfg_tailq_write_lock();
> +
> +	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
> +	RTE_TAILQ_FOREACH_SAFE(te, list, next, tmp_te) {
> +		struct mempool_callback *cb =
> +					(struct mempool_callback *)te->data;
> +		if (cb->func == func && cb->user_data == user_data) {
> +			ret = -EEXIST;
> +			goto exit;
> +		}
> +	}
> +
> +	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
> +	if (te == NULL) {
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Cannot allocate event callback tailq entry!\n");
> +		ret = -ENOMEM;
> +		goto exit;
> +	}
> +
> +	cb = rte_malloc("MEMPOOL_EVENT_CALLBACK", sizeof(*cb), 0);
> +	if (cb == NULL) {
> +		RTE_LOG(ERR, MEMPOOL,
> +			"Cannot allocate event callback!\n");
> +		rte_free(te);
> +		ret = -ENOMEM;
> +		goto exit;
> +	}
> +
> +	cb->func = func;
> +	cb->user_data = user_data;
> +	te->data = cb;
> +	TAILQ_INSERT_TAIL(list, te, next);
> +	ret = 0;
> +
> +exit:
> +	rte_mcfg_tailq_write_unlock();
> +	rte_mcfg_mempool_read_unlock();
> +	rte_errno = -ret;
> +	return ret;
> +}
> +
> +int
> +rte_mempool_event_callback_unregister(rte_mempool_event_callback *func,
> +				      void *user_data)
> +{
> +	struct mempool_callback_list *list;
> +	struct rte_tailq_entry *te = NULL;
> +	struct mempool_callback *cb;
> +	int ret;
> +
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> +		rte_errno = EPERM;
> +		return -1;

The function should behave consistently. Below you
return a negative error code, but here just -1. I think it
would be more consistent to return -rte_errno here.

> +	}
> +
> +	rte_mcfg_mempool_read_lock();
> +	rte_mcfg_tailq_write_lock();
> +	ret = -ENOENT;
> +	list = RTE_TAILQ_CAST(callback_tailq.head, mempool_callback_list);
> +	TAILQ_FOREACH(te, list, next) {
> +		cb = (struct mempool_callback *)te->data;
> +		if (cb->func == func && cb->user_data == user_data)
> +			break;
> +	}
> +	if (te != NULL) {

Here we rely on the fact that TAILQ_FOREACH()
exits with te == NULL when no such entry is found.
I'd suggest avoiding that assumption, i.e. move the
two lines below to just before the break and drop
the if condition here entirely.
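E.g. (untested, just to illustrate):

	TAILQ_FOREACH(te, list, next) {
		cb = (struct mempool_callback *)te->data;
		if (cb->func == func && cb->user_data == user_data) {
			TAILQ_REMOVE(list, te, next);
			ret = 0;
			break;
		}
	}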

> +		TAILQ_REMOVE(list, te, next);
> +		ret = 0;
> +	}
> +	rte_mcfg_tailq_write_unlock();
> +	rte_mcfg_mempool_read_unlock();
> +
> +	if (ret == 0) {
> +		rte_free(te);
> +		rte_free(cb);
> +	}
> +	rte_errno = -ret;
> +	return ret;
> +}
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index f57ecbd6fc..663123042f 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1774,6 +1774,67 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
>  int
>  rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
>  
> +/**
> + * Mempool event type.
> + * @internal

Shouldn't @internal go first?

> + */
> +enum rte_mempool_event {
> +	/** Occurs after a mempool is successfully populated. */

successfully -> fully ?

> +	RTE_MEMPOOL_EVENT_READY = 0,
> +	/** Occurs before destruction of a mempool begins. */
> +	RTE_MEMPOOL_EVENT_DESTROY = 1,
> +};

[snip]

