DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Ashwin Sekhar T K <asekhar@marvell.com>
Cc: dev@dpdk.org, Nithin Dabilpuram <ndabilpuram@marvell.com>,
	 Kiran Kumar K <kirankumark@marvell.com>,
	Sunil Kumar Kori <skori@marvell.com>,
	 Satha Rao <skoteshwar@marvell.com>,
	Pavan Nikhilesh <pbhagavatula@marvell.com>,
	jerinj@marvell.com,  psatheesh@marvell.com, anoobj@marvell.com,
	gakhil@marvell.com,  hkalra@marvell.com
Subject: Re: [PATCH 1/2] mempool/cnxk: avoid indefinite wait
Date: Mon, 29 May 2023 14:44:27 +0530
Message-ID: <CALBAE1PoxJadzWwU7ccMV382pv6Gu=Ym3R9WuuCS2g-AOPjJqQ@mail.gmail.com>
In-Reply-To: <20230526134507.885354-1-asekhar@marvell.com>

On Fri, May 26, 2023 at 7:15 PM Ashwin Sekhar T K <asekhar@marvell.com> wrote:
>
> Avoid waiting indefinitely when counting batch alloc
> pointers by adding a wait timeout.

Please add a Fixes: tag and change the subject so it starts with "fix ...".
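As an illustration only (placeholders below; the actual commit id would need to come from git log/git blame on the code that introduced the unbounded wait), the revised commit header could look like:

    mempool/cnxk: fix indefinite wait in batch alloc

    Avoid waiting indefinitely when counting batch alloc
    pointers by adding a wait timeout.

    Fixes: <12-char commit id> ("<subject of the offending commit>")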
>
> Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
> ---
>  drivers/common/cnxk/roc_npa.h            | 15 +++++++++------
>  drivers/mempool/cnxk/cn10k_mempool_ops.c |  3 ++-
>  2 files changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
> index 21608a40d9..d3caa71586 100644
> --- a/drivers/common/cnxk/roc_npa.h
> +++ b/drivers/common/cnxk/roc_npa.h
> @@ -241,19 +241,23 @@ roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
>  }
>
>  static inline void
> -roc_npa_batch_alloc_wait(uint64_t *cache_line)
> +roc_npa_batch_alloc_wait(uint64_t *cache_line, unsigned int wait_us)
>  {
> +       const uint64_t ticks = (uint64_t)wait_us * plt_tsc_hz() / (uint64_t)1E6;
> +       const uint64_t start = plt_tsc_cycles();
> +
>         /* Batch alloc status code is updated in bits [5:6] of the first word
>          * of the 128 byte cache line.
>          */
>         while (((__atomic_load_n(cache_line, __ATOMIC_RELAXED) >> 5) & 0x3) ==
>                ALLOC_CCODE_INVAL)
> -               ;
> +               if (wait_us && (plt_tsc_cycles() - start) >= ticks)
> +                       break;
>  }
>
>  static inline unsigned int
>  roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
> -                              unsigned int do_wait)
> +                              unsigned int wait_us)
>  {
>         unsigned int count, i;
>
> @@ -267,8 +271,7 @@ roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num,
>
>                 status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
>
> -               if (do_wait)
> -                       roc_npa_batch_alloc_wait(&aligned_buf[i]);
> +               roc_npa_batch_alloc_wait(&aligned_buf[i], wait_us);
>
>                 count += status->count;
>         }
> @@ -293,7 +296,7 @@ roc_npa_aura_batch_alloc_extract(uint64_t *buf, uint64_t *aligned_buf,
>
>                 status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
>
> -               roc_npa_batch_alloc_wait(&aligned_buf[i]);
> +               roc_npa_batch_alloc_wait(&aligned_buf[i], 0);
>
>                 line_count = status->count;
>
> diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
> index ba826f0f01..ff0015d8de 100644
> --- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
> +++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
> @@ -9,6 +9,7 @@
>
>  #define BATCH_ALLOC_SZ              ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS
>  #define BATCH_OP_DATA_TABLE_MZ_NAME "batch_op_data_table_mz"
> +#define BATCH_ALLOC_WAIT_US         5
>
>  enum batch_op_status {
>         BATCH_ALLOC_OP_NOT_ISSUED = 0,
> @@ -178,7 +179,7 @@ cn10k_mempool_get_count(const struct rte_mempool *mp)
>
>                 if (mem->status == BATCH_ALLOC_OP_ISSUED)
>                         count += roc_npa_aura_batch_alloc_count(
> -                               mem->objs, BATCH_ALLOC_SZ, 1);
> +                               mem->objs, BATCH_ALLOC_SZ, BATCH_ALLOC_WAIT_US);
>
>                 if (mem->status == BATCH_ALLOC_OP_DONE)
>                         count += mem->sz;
> --
> 2.25.1
>
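
For readers skimming the diff, the core of the change is bounding the status poll with a TSC-based deadline. A minimal standalone sketch of that pattern (plt_tsc_hz()/plt_tsc_cycles() are the cnxk platform helpers used in the patch; poll_done() is a hypothetical stand-in for the batch alloc status check):

    #include <stdbool.h>
    #include <stdint.h>

    /* cnxk platform helpers, as used in the patch. */
    extern uint64_t plt_tsc_hz(void);      /* TSC frequency in Hz */
    extern uint64_t plt_tsc_cycles(void);  /* current TSC counter value */

    /* Poll poll_done() until it returns true or wait_us microseconds elapse.
     * wait_us == 0 keeps the old behaviour of waiting indefinitely, which is
     * what roc_npa_aura_batch_alloc_extract() still requests in the patch.
     */
    static inline bool
    poll_with_timeout(bool (*poll_done)(void *), void *arg, unsigned int wait_us)
    {
            const uint64_t ticks = (uint64_t)wait_us * plt_tsc_hz() / 1000000;
            const uint64_t start = plt_tsc_cycles();

            while (!poll_done(arg)) {
                    if (wait_us && (plt_tsc_cycles() - start) >= ticks)
                            return false; /* timed out */
            }
            return true; /* completed within the deadline */
    }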

Thread overview: 7+ messages
2023-05-26 13:45 Ashwin Sekhar T K
2023-05-26 13:45 ` [PATCH 2/2] common/cnxk: add new APIs for batch operations Ashwin Sekhar T K
2023-05-29  9:14 ` Jerin Jacob [this message]
2023-05-29  9:25 ` [PATCH v2 1/2] mempool/cnxk: fix indefinite wait in batch alloc Ashwin Sekhar T K
2023-05-29  9:25   ` [PATCH v2 2/2] common/cnxk: add new APIs for batch operations Ashwin Sekhar T K
2023-05-30  9:12 ` [PATCH v3] " Ashwin Sekhar T K
2023-05-30 16:51   ` Jerin Jacob
