From: Andrzej Ostruszka <amo@semihalf.com>
To: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache
Date: Thu, 19 Mar 2020 10:36:34 +0100	[thread overview]
Message-ID: <c3605aaa-6c7c-d187-d02f-5a3fe60da5c3@semihalf.com> (raw)
In-Reply-To: <20200306163524.1650-1-pbhagavatula@marvell.com>

On 3/6/20 5:35 PM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Add device arguments to lock NPA aura and pool contexts in NDC cache.
> The device args take hexadecimal bitmask where each bit represent the
> corresponding aura/pool id.
> Example:
> 	-w 0002:02:00.0,npa_lock_mask=0xf // Lock first 4 aura/pool ctx
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
[...]
> +- ``Lock NPA contexts in NDC``
> +
> +   Lock NPA aura and pool contexts in NDC cache.
> +   The device args take hexadecimal bitmask where each bit represent the
> +   corresponding aura/pool id.
> +
> +   For example::
> +      -w 0002:0e:00.0,npa_lock_mask=0xf

I think you need to add a paragraph break (empty line) after "::" in
order to have this example treated as a "literal block" (same as
``max_pools`` above - not visible in the diff).  At least that is how
it looks when I build the docs with "ninja doc" and check the result in
a browser.

> diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
> index 2c9a0953b..c594934d8 100644
> --- a/doc/guides/mempool/octeontx2.rst
> +++ b/doc/guides/mempool/octeontx2.rst
> @@ -61,6 +61,15 @@ Runtime Config Options
>     provide ``max_pools`` parameter to the first PCIe device probed by the given
>     application.
>  
> +- ``Lock NPA contexts in NDC``
> +
> +   Lock NPA aura and pool contexts in NDC cache.
> +   The device args take hexadecimal bitmask where each bit represent the
> +   corresponding aura/pool id.
> +
> +   For example::
> +      -w 0002:02:00.0,npa_lock_mask=0xf

Ditto.

> diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
> index 60187ec72..819d09e11 100644
> --- a/doc/guides/nics/octeontx2.rst
> +++ b/doc/guides/nics/octeontx2.rst
> @@ -213,6 +213,15 @@ Runtime Config Options
>     parameters to all the PCIe devices if application requires to configure on
>     all the ethdev ports.
>  
> +- ``Lock NPA contexts in NDC``
> +
> +   Lock NPA aura and pool contexts in NDC cache.
> +   The device args take hexadecimal bitmask where each bit represent the
> +   corresponding aura/pool id.
> +
> +   For example::
> +      -w 0002:02:00.0,npa_lock_mask=0xf

Ditto - treat this as a general comment (you might also want to fix
other places, not only those introduced here).

[...]
> diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
> index 1a257cf07..684bb3a0f 100644
> --- a/drivers/common/octeontx2/otx2_common.c
> +++ b/drivers/common/octeontx2/otx2_common.c
> @@ -169,6 +169,41 @@ int otx2_npa_lf_obj_ref(void)
>  	return cnt ? 0 : -EINVAL;
>  }
>  
> +static int
> +parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
> +{
> +	RTE_SET_USED(key);
> +	uint64_t val;
> +
> +	val = strtoull(value, NULL, 16);
> +
> +	*(uint64_t *)extra_args = val;
> +
> +	return 0;
> +}
> +
> +#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
> +/*
> + * @internal
> + * Parse common device arguments
> + */
> +void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
> +{
> +
> +	struct otx2_idev_cfg *idev;
> +	uint64_t npa_lock_mask;

Missing initialization of 'npa_lock_mask' - when the user does not
supply this devarg, the callback is never called and you copy an
uninitialized value to idev (below).
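I.e. something like (sketch only):

	uint64_t npa_lock_mask = 0;
	...
	rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
			   &parse_npa_lock_mask, &npa_lock_mask);

so that idev->npa_lock_mask stays 0 when the devarg is absent.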

> +
> +	idev = otx2_intra_dev_get_cfg();
> +
> +	if (idev == NULL)
> +		return;
> +
> +	rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
> +			&parse_npa_lock_mask, &npa_lock_mask);
> +
> +	idev->npa_lock_mask = npa_lock_mask;
> +}
[...]
> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> index ac2d61861..5075b027a 100644
> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> @@ -348,6 +348,7 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
>  	struct npa_aq_enq_req *aura_init_req, *pool_init_req;
>  	struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
>  	struct otx2_mbox_dev *mdev = &mbox->dev[0];
> +	struct otx2_idev_cfg *idev;
>  	int rc, off;
>  
>  	aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> @@ -379,6 +380,46 @@ npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
>  		return 0;
>  	else
>  		return NPA_LF_ERR_AURA_POOL_INIT;
> +
> +	idev = otx2_intra_dev_get_cfg();
> +	if (idev == NULL)
> +		return 0;

Is this not an error?
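If it is, then I would expect something along the lines of (sketch,
pick whatever error code fits here):

	idev = otx2_intra_dev_get_cfg();
	if (idev == NULL)
		return -EINVAL;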

> +
> +	if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
> +		return 0;
> +
> +	aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> +	aura_init_req->aura_id = aura_id;
> +	aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
> +	aura_init_req->op = NPA_AQ_INSTOP_LOCK;
> +
> +	pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> +	if (!pool_init_req) {
> +		/* The shared memory buffer can be full.
> +		 * Flush it and retry
> +		 */
> +		otx2_mbox_msg_send(mbox, 0);
> +		rc = otx2_mbox_wait_for_rsp(mbox, 0);
> +		if (rc < 0) {
> +			otx2_err("Failed to LOCK AURA context");
> +			return 0;

Same here and below - if these are not errors then maybe do not log them
as such.  If they are errors then we should probably signal them via the
return value ("return rc;").

> +		}
> +
> +		pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> +		if (!pool_init_req) {
> +			otx2_err("Failed to LOCK POOL context");
> +			return 0;

See above.

> +		}
> +	}
> +	pool_init_req->aura_id = aura_id;
> +	pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
> +	pool_init_req->op = NPA_AQ_INSTOP_LOCK;
> +
> +	rc = otx2_mbox_process(mbox);
> +	if (rc < 0)
> +		otx2_err("Failed to lock POOL ctx to NDC");

See above.

> +
> +	return 0;
>  }
>  
>  static int
> @@ -390,6 +431,7 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
>  	struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
>  	struct otx2_mbox_dev *mdev = &mbox->dev[0];
>  	struct ndc_sync_op *ndc_req;
> +	struct otx2_idev_cfg *idev;
>  	int rc, off;
>  
>  	/* Procedure for disabling an aura/pool */
> @@ -434,6 +476,32 @@ npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
>  		otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
>  		return NPA_LF_ERR_AURA_POOL_FINI;
>  	}
> +
> +	idev = otx2_intra_dev_get_cfg();
> +	if (idev == NULL)
> +		return 0;
> +
> +	if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
> +		return 0;

Same comments here and below as for *pool_init above.

> +
> +	aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> +	aura_req->aura_id = aura_id;
> +	aura_req->ctype = NPA_AQ_CTYPE_AURA;
> +	aura_req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> +	rc = otx2_mbox_process(mbox);
> +	if (rc < 0)
> +		otx2_err("Failed to unlock AURA ctx to NDC");
> +
> +	pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
> +	pool_req->aura_id = aura_id;
> +	pool_req->ctype = NPA_AQ_CTYPE_POOL;
> +	pool_req->op = NPA_AQ_INSTOP_UNLOCK;
> +
> +	rc = otx2_mbox_process(mbox);
> +	if (rc < 0)
> +		otx2_err("Failed to unlock POOL ctx to NDC");
> +
>  	return 0;
>  }
With regards
Andrzej Ostruszka

Thread overview: 28+ messages
2020-03-06 16:35 pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-19  9:36   ` Andrzej Ostruszka
2020-03-19 13:56     ` Pavan Nikhilesh Bhagavatula
2020-03-19  9:36 ` Andrzej Ostruszka [this message]
2020-03-19 13:35   ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Pavan Nikhilesh Bhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2020-03-24 16:53   ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-25  6:51   ` [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Jerin Jacob
2020-03-26  6:33   ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-26 15:56     ` Andrzej Ostruszka [C]
2020-03-27  9:53     ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-03-27 16:19       ` Andrzej Ostruszka
2020-03-27 17:49         ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-03-31 13:58       ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-06-26  5:00         ` Jerin Jacob
2020-06-28 22:18         ` [dpdk-dev] [PATCH v6] " pbhagavatula
2020-07-02  9:46           ` Jerin Jacob
2020-03-26  6:34   ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-04-06  8:39     ` Jerin Jacob
2020-04-16 22:33       ` Thomas Monjalon
2020-04-21  7:37         ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-04-22  8:06     ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-05-01 10:21       ` Pavan Nikhilesh Bhagavatula
2020-05-04 22:43         ` Thomas Monjalon
2020-05-10 22:35           ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-05-11 10:07       ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-05-19 16:15         ` Thomas Monjalon
