From: Jerin Jacob <jerinjacobk@gmail.com>
To: Volodymyr Fialko <vfialko@marvell.com>
Cc: dpdk-dev <dev@dpdk.org>,
Pavan Nikhilesh <pbhagavatula@marvell.com>,
Shijith Thotton <sthotton@marvell.com>,
Jerin Jacob <jerinj@marvell.com>,
Anoob Joseph <anoobj@marvell.com>
Subject: Re: [PATCH] event/cnxk: add free for Tx adapter
Date: Mon, 13 Jun 2022 11:19:35 +0530 [thread overview]
Message-ID: <CALBAE1P_SE4V0FEkrpbjrbMWFpjwvWmJarFoWbB-D4_NGXkPoA@mail.gmail.com> (raw)
In-Reply-To: <20220530131936.1137628-1-vfialko@marvell.com>
On Mon, May 30, 2022 at 6:49 PM Volodymyr Fialko <vfialko@marvell.com> wrote:
>
> The Tx adapter allocates data during the eth_tx_adapter_queue_add()
> call, but that data is only cleaned, not freed, during
> eth_tx_adapter_queue_del(). Implement the eth_tx_adapter_free()
> callback to free the adapter data.
>
> Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Applied to dpdk-next-net-eventdev/for-main. Thanks
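
For readers following along, below is a minimal sketch of the
application-level sequence that exercises the new callback (the adapter
id and port config are illustrative, error handling is trimmed);
rte_event_eth_tx_adapter_free() is the public API that ultimately
reaches the eth_tx_adapter_free op added by this patch:

    #include <string.h>
    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>

    static int
    tx_adapter_lifecycle(uint8_t evdev_id, uint16_t eth_port)
    {
            const uint8_t adptr_id = 0; /* hypothetical adapter id */
            struct rte_event_dev_info info;
            struct rte_event_port_conf conf;

            rte_event_dev_info_get(evdev_id, &info);
            memset(&conf, 0, sizeof(conf));
            conf.new_event_threshold = info.max_num_events;
            conf.dequeue_depth = info.max_event_port_dequeue_depth;
            conf.enqueue_depth = info.max_event_port_enqueue_depth;

            /* queue_add() is where the cnxk PMD allocates tx_adptr_data. */
            rte_event_eth_tx_adapter_create(adptr_id, evdev_id, &conf);
            rte_event_eth_tx_adapter_queue_add(adptr_id, eth_port, -1);
            rte_event_eth_tx_adapter_start(adptr_id);

            /* ... forward packets through the event device ... */

            rte_event_eth_tx_adapter_stop(adptr_id);
            /* queue_del() only resets per-queue state in the PMD ... */
            rte_event_eth_tx_adapter_queue_del(adptr_id, eth_port, -1);
            /* ... while free() now releases the PMD-private data too. */
            return rte_event_eth_tx_adapter_free(adptr_id);
    }
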
> ---
> drivers/event/cnxk/cn10k_eventdev.c | 3 +++
> drivers/event/cnxk/cn9k_eventdev.c | 3 +++
> drivers/event/cnxk/cnxk_eventdev.h | 4 +++
> drivers/event/cnxk/cnxk_eventdev_adptr.c | 34 ++++++++++++++++++++++++
> 4 files changed, 44 insertions(+)
>
> diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
> index 214eca4239..110c7af439 100644
> --- a/drivers/event/cnxk/cn10k_eventdev.c
> +++ b/drivers/event/cnxk/cn10k_eventdev.c
> @@ -878,6 +878,9 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
> .eth_tx_adapter_caps_get = cn10k_sso_tx_adapter_caps_get,
> .eth_tx_adapter_queue_add = cn10k_sso_tx_adapter_queue_add,
> .eth_tx_adapter_queue_del = cn10k_sso_tx_adapter_queue_del,
> + .eth_tx_adapter_start = cnxk_sso_tx_adapter_start,
> + .eth_tx_adapter_stop = cnxk_sso_tx_adapter_stop,
> + .eth_tx_adapter_free = cnxk_sso_tx_adapter_free,
>
> .timer_adapter_caps_get = cnxk_tim_caps_get,
>
> diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
> index 076847d9a4..bf7bbc2c43 100644
> --- a/drivers/event/cnxk/cn9k_eventdev.c
> +++ b/drivers/event/cnxk/cn9k_eventdev.c
> @@ -1110,6 +1110,9 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
> .eth_tx_adapter_caps_get = cn9k_sso_tx_adapter_caps_get,
> .eth_tx_adapter_queue_add = cn9k_sso_tx_adapter_queue_add,
> .eth_tx_adapter_queue_del = cn9k_sso_tx_adapter_queue_del,
> + .eth_tx_adapter_start = cnxk_sso_tx_adapter_start,
> + .eth_tx_adapter_stop = cnxk_sso_tx_adapter_stop,
> + .eth_tx_adapter_free = cnxk_sso_tx_adapter_free,
>
> .timer_adapter_caps_get = cnxk_tim_caps_get,
>
> diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
> index e7cd90095d..baee5d11f0 100644
> --- a/drivers/event/cnxk/cnxk_eventdev.h
> +++ b/drivers/event/cnxk/cnxk_eventdev.h
> @@ -113,6 +113,7 @@ struct cnxk_sso_evdev {
> uint16_t max_port_id;
> uint16_t max_queue_id[RTE_MAX_ETHPORTS];
> uint8_t tx_adptr_configured;
> + uint32_t tx_adptr_active_mask;
> uint16_t tim_adptr_ring_cnt;
> uint16_t *timer_adptr_rings;
> uint64_t *timer_adptr_sz;
> @@ -310,5 +311,8 @@ int cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
> int cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
> const struct rte_eth_dev *eth_dev,
> int32_t tx_queue_id);
> +int cnxk_sso_tx_adapter_start(uint8_t id, const struct rte_eventdev *event_dev);
> +int cnxk_sso_tx_adapter_stop(uint8_t id, const struct rte_eventdev *event_dev);
> +int cnxk_sso_tx_adapter_free(uint8_t id, const struct rte_eventdev *event_dev);
>
> #endif /* __CNXK_EVENTDEV_H__ */
> diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
> index fa96090bfa..586a7751e2 100644
> --- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
> +++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
> @@ -589,3 +589,37 @@ cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
>
> return 0;
> }
> +
> +int
> +cnxk_sso_tx_adapter_start(uint8_t id, const struct rte_eventdev *event_dev)
> +{
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> +
> + dev->tx_adptr_active_mask |= (1 << id);
> +
> + return 0;
> +}
> +
> +int
> +cnxk_sso_tx_adapter_stop(uint8_t id, const struct rte_eventdev *event_dev)
> +{
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> +
> + dev->tx_adptr_active_mask &= ~(1 << id);
> +
> + return 0;
> +}
> +
> +int
> +cnxk_sso_tx_adapter_free(uint8_t id __rte_unused,
> + const struct rte_eventdev *event_dev)
> +{
> + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
> +
> + if (dev->tx_adptr_data_sz && dev->tx_adptr_active_mask == 0) {
> + dev->tx_adptr_data_sz = 0;
> + free(dev->tx_adptr_data);
> + }
> +
> + return 0;
> +}
> --
> 2.25.1
>
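
A note on the design: the new tx_adptr_active_mask tracks which Tx
adapter ids have been started, and cnxk_sso_tx_adapter_free() releases
tx_adptr_data only when the mask is clear and the data size is
non-zero. Since the same buffer backs every Tx adapter instance on the
event device (hence the id argument being __rte_unused in the free
callback), freeing while any adapter is still active would leave the
running adapters with a dangling pointer.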