From: Jerin Jacob <jerinjacobk@gmail.com>
To: Pavan Nikhilesh <pbhagavatula@marvell.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
	Shijith Thotton <sthotton@marvell.com>, dpdk-dev <dev@dpdk.org>
Subject: Re: [PATCH 1/2] event/cnxk: remove deschedule usage in CN9K
Date: Tue, 22 Feb 2022 15:21:20 +0530	[thread overview]
Message-ID: <CALBAE1MtFRnw7CR4PXM9U9bJ16r0WAf7uPivGC8+W0Ac1BR1jA@mail.gmail.com> (raw)
In-Reply-To: <20220219121338.2438-1-pbhagavatula@marvell.com>

On Sat, Feb 19, 2022 at 6:05 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Using the deschedule command might incorrectly ignore updates to the
> WQE and GGRP on CN9K.
> Use add_work to pipeline work instead.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
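
For readers following along, here is a minimal application-level sketch
(assumed code, not part of this patch; the helper name, the dev/port/queue
ids and the ATOMIC scheduling type are purely illustrative) of the
forwarding pattern that takes the group-change path reworked here: a
FORWARD op carrying a new queue_id, which the CN9K driver now pipelines
with add_work rather than swtag_desched.

#include <rte_eventdev.h>
#include <rte_pause.h>

/* Assumed worker-side snippet, for illustration only. */
static inline void
forward_to_next_stage(uint8_t dev_id, uint8_t port_id, uint8_t next_queue)
{
	struct rte_event ev;

	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
		return;

	/* Move the event to a different queue (group), keeping its payload. */
	ev.op = RTE_EVENT_OP_FORWARD;
	ev.queue_id = next_queue;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;

	/* Retry until the PMD accepts the event. */
	while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
		rte_pause();
}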


Series applied to dpdk-next-net-eventdev/for-main. Thanks

> ---
>  drivers/event/cnxk/cn9k_worker.h | 41 +++++++++++++++++++++++++-------
>  1 file changed, 32 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
> index 79374b8d95..0905d744cc 100644
> --- a/drivers/event/cnxk/cn9k_worker.h
> +++ b/drivers/event/cnxk/cn9k_worker.h
> @@ -63,15 +63,18 @@ cn9k_sso_hws_fwd_swtag(uint64_t base, const struct rte_event *ev)
>  }
>
>  static __rte_always_inline void
> -cn9k_sso_hws_fwd_group(uint64_t base, const struct rte_event *ev,
> -                      const uint16_t grp)
> +cn9k_sso_hws_new_event_wait(struct cn9k_sso_hws *ws, const struct rte_event *ev)
>  {
>         const uint32_t tag = (uint32_t)ev->event;
>         const uint8_t new_tt = ev->sched_type;
> +       const uint64_t event_ptr = ev->u64;
> +       const uint16_t grp = ev->queue_id;
>
> -       plt_write64(ev->u64, base + SSOW_LF_GWS_OP_UPD_WQP_GRP1);
> -       cnxk_sso_hws_swtag_desched(tag, new_tt, grp,
> -                                  base + SSOW_LF_GWS_OP_SWTAG_DESCHED);
> +       while (ws->xaq_lmt <= __atomic_load_n(ws->fc_mem, __ATOMIC_RELAXED))
> +               ;
> +
> +       cnxk_sso_hws_add_work(event_ptr, tag, new_tt,
> +                             ws->grp_base + (grp << 12));
>  }
>
>  static __rte_always_inline void
> @@ -86,10 +89,12 @@ cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
>         } else {
>                 /*
>                  * Group has been changed for group based work pipelining,
> -                * Use deschedule/add_work operation to transfer the event to
> +                * Use add_work operation to transfer the event to
>                  * new group/core
>                  */
> -               cn9k_sso_hws_fwd_group(ws->base, ev, grp);
> +               rte_atomic_thread_fence(__ATOMIC_RELEASE);
> +               roc_sso_hws_head_wait(ws->base);
> +               cn9k_sso_hws_new_event_wait(ws, ev);
>         }
>  }
>
> @@ -113,6 +118,22 @@ cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
>         return 1;
>  }
>
> +static __rte_always_inline void
> +cn9k_sso_hws_dual_new_event_wait(struct cn9k_sso_hws_dual *dws,
> +                                const struct rte_event *ev)
> +{
> +       const uint32_t tag = (uint32_t)ev->event;
> +       const uint8_t new_tt = ev->sched_type;
> +       const uint64_t event_ptr = ev->u64;
> +       const uint16_t grp = ev->queue_id;
> +
> +       while (dws->xaq_lmt <= __atomic_load_n(dws->fc_mem, __ATOMIC_RELAXED))
> +               ;
> +
> +       cnxk_sso_hws_add_work(event_ptr, tag, new_tt,
> +                             dws->grp_base + (grp << 12));
> +}
> +
>  static __rte_always_inline void
>  cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, uint64_t base,
>                                 const struct rte_event *ev)
> @@ -126,10 +147,12 @@ cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, uint64_t base,
>         } else {
>                 /*
>                  * Group has been changed for group based work pipelining,
> -                * Use deschedule/add_work operation to transfer the event to
> +                * Use add_work operation to transfer the event to
>                  * new group/core
>                  */
> -               cn9k_sso_hws_fwd_group(base, ev, grp);
> +               rte_atomic_thread_fence(__ATOMIC_RELEASE);
> +               roc_sso_hws_head_wait(base);
> +               cn9k_sso_hws_dual_new_event_wait(dws, ev);
>         }
>  }
>
> --
> 2.17.1
>
