From: Andrzej Ostruszka <amo@semihalf.com>
To: pbhagavatula@marvell.com, jerinj@marvell.com,
Nithin Dabilpuram <ndabilpuram@marvell.com>,
Kiran Kumar K <kirankumark@marvell.com>,
John McNamara <john.mcnamara@intel.com>,
Marko Kovacevic <marko.kovacevic@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v4] net/octeontx2: add devargs to lock Rx/Tx ctx
Date: Fri, 27 Mar 2020 17:19:43 +0100 [thread overview]
Message-ID: <9144a7f0-81f3-4e41-9a65-61a3bd02d70c@semihalf.com> (raw)
In-Reply-To: <20200327095359.1934-1-pbhagavatula@marvell.com>
On 3/27/20 10:53 AM, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add device arguments to lock Rx/Tx contexts.
> An application can choose to lock the Rx or Tx contexts by using
> 'lock_rx_ctx' or 'lock_tx_ctx' respectively, per port.
>
> Example:
> -w 0002:02:00.0,lock_rx_ctx=1 -w 0002:03:00.0,lock_tx_ctx=1
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
> ---
> Depends on http://patches.dpdk.org/patch/67178/
> v4 Changes:
> - Fix return path that used an unnecessary goto. (Andrzej)
> - Fix the datatype of values passed to the devargs parser. (Andrzej)
>
> v3 Changes:
> - Split series into individual patches as targets are different.
>
>  doc/guides/nics/octeontx2.rst               |  16 ++
>  drivers/net/octeontx2/otx2_ethdev.c         | 187 +++++++++++++++++++-
>  drivers/net/octeontx2/otx2_ethdev.h         |   2 +
>  drivers/net/octeontx2/otx2_ethdev_devargs.c |  16 +-
>  drivers/net/octeontx2/otx2_rss.c            |  23 +++
>  5 files changed, 241 insertions(+), 3 deletions(-)
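
(Aside for anyone picking this series up: the new keys presumably go
through the usual rte_kvargs path. Below is a hypothetical sketch of how
such a boolean devarg is commonly parsed in a PMD - not necessarily the
exact code in otx2_ethdev_devargs.c; the helper names here are made up:

	#include <stdlib.h>
	#include <rte_common.h>
	#include <rte_devargs.h>
	#include <rte_kvargs.h>

	/* Handler invoked once per matching "key=value" pair. */
	static int
	parse_flag(const char *key, const char *value, void *extra_args)
	{
		RTE_SET_USED(key);
		*(uint16_t *)extra_args = atoi(value);
		return 0;
	}

	static void
	parse_lock_devargs(struct rte_devargs *devargs,
			   uint16_t *lock_rx_ctx)
	{
		struct rte_kvargs *kvlist;

		if (devargs == NULL)
			return;
		/* NULL key list: accept any key in the devargs string. */
		kvlist = rte_kvargs_parse(devargs->args, NULL);
		if (kvlist == NULL)
			return;
		/* Sets *lock_rx_ctx to 1 for "lock_rx_ctx=1". */
		rte_kvargs_process(kvlist, "lock_rx_ctx",
				   &parse_flag, lock_rx_ctx);
		rte_kvargs_free(kvlist);
	}

'lock_tx_ctx' would be handled the same way with a second
rte_kvargs_process() call.)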
[...]
> diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
> index e60f4901c..a6f2c0f42 100644
> --- a/drivers/net/octeontx2/otx2_ethdev.c
> +++ b/drivers/net/octeontx2/otx2_ethdev.c
> @@ -381,6 +381,40 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
>  		goto fail;
>  	}
> 
> +	if (dev->lock_rx_ctx) {
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		aq->qidx = qid;
> +		aq->ctype = NIX_AQ_CTYPE_CQ;
> +		aq->op = NIX_AQ_INSTOP_LOCK;
> +
> +		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +		if (!aq) {
> +			/* The shared memory buffer can be full.
> +			 * Flush it and retry
> +			 */
> +			otx2_mbox_msg_send(mbox, 0);
> +			rc = otx2_mbox_wait_for_rsp(mbox, 0);
> +			if (rc < 0) {
> +				otx2_err("Failed to LOCK cq context");
> +				return rc;
> +			}
> +
> +			aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
> +			if (!aq) {
> +				otx2_err("Failed to LOCK rq context");
> +				return -ENOMEM;
> +			}
> +		}
> +		aq->qidx = qid;
> +		aq->ctype = NIX_AQ_CTYPE_RQ;
> +		aq->op = NIX_AQ_INSTOP_LOCK;
> +		rc = otx2_mbox_process(mbox);
> +		if (rc < 0) {
> +			otx2_err("Failed to LOCK rq context");
> +			return rc;
> +		}
> +	}
> +
>  	return 0;
>  fail:
>  	return rc;
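The alloc/flush/retry sequence above could, if this pattern keeps
growing, be folded into a helper. A sketch (helper name hypothetical;
the mbox calls are the ones from the hunk above, and I am assuming aq is
a struct nix_aq_enq_req *, as the alloc call's name suggests):

	static struct nix_aq_enq_req *
	nix_aq_alloc_or_flush(struct otx2_mbox *mbox)
	{
		struct nix_aq_enq_req *aq;

		aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
		if (aq)
			return aq;

		/* Shared mbox memory is full: send what is queued,
		 * wait for the responses, then retry once.
		 */
		otx2_mbox_msg_send(mbox, 0);
		if (otx2_mbox_wait_for_rsp(mbox, 0) < 0)
			return NULL;

		return otx2_mbox_alloc_msg_nix_aq_enq(mbox);
	}

That is just an observation, not a blocker for this patch.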
Pavan - sorry for being so ... finicky :)
When I said 'replace all "goto fail" with "return rc"', I meant all of
them - not only the "goto fail" in your changes, but every "goto fail"
in that function.
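To illustrate, a minimal sketch of the shape I mean (surrounding code
abbreviated, error message text hypothetical):

	rc = otx2_mbox_process(mbox);
	if (rc < 0) {
		otx2_err("Failed to init cq/rq context");
		return rc;	/* was: goto fail; */
	}
	/* ... remaining setup, same treatment ... */

	return 0;

with the now-unused "fail:" label and its "return rc;" removed.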
Apart from that:
Reviewed-by: Andrzej Ostruszka <aostruszka@marvell.com>
With regards
Andrzej
PS. Thanks for the patience ;)
Thread overview: 28+ messages
2020-03-06 16:35 [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-03-06 16:35 ` [dpdk-dev] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-19 9:36 ` Andrzej Ostruszka
2020-03-19 13:56 ` Pavan Nikhilesh Bhagavatula
2020-03-19 9:36 ` [dpdk-dev] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Andrzej Ostruszka
2020-03-19 13:35 ` Pavan Nikhilesh Bhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] " pbhagavatula
2020-03-24 16:53 ` [dpdk-dev] [dpdk-dev v2] [PATCH 2/2] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-25 6:51 ` [dpdk-dev] [dpdk-dev v2] [PATCH 1/2] mempool/octeontx2: add devargs to lock ctx in cache Jerin Jacob
2020-03-26 6:33 ` [dpdk-dev] [dpdk-dev v3] [PATCH] net/octeontx2: add devargs to lock Rx/Tx ctx pbhagavatula
2020-03-26 15:56 ` Andrzej Ostruszka [C]
2020-03-27 9:53 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-03-27 16:19 ` Andrzej Ostruszka [this message]
2020-03-27 17:49 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-03-31 13:58 ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-06-26 5:00 ` Jerin Jacob
2020-06-28 22:18 ` [dpdk-dev] [PATCH v6] " pbhagavatula
2020-07-02 9:46 ` Jerin Jacob
2020-03-26 6:34 ` [dpdk-dev] [dpdk-dev v3] [PATCH] mempool/octeontx2: add devargs to lock ctx in cache pbhagavatula
2020-04-06 8:39 ` Jerin Jacob
2020-04-16 22:33 ` Thomas Monjalon
2020-04-21 7:37 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-04-22 8:06 ` [dpdk-dev] [PATCH v4] " pbhagavatula
2020-05-01 10:21 ` Pavan Nikhilesh Bhagavatula
2020-05-04 22:43 ` Thomas Monjalon
2020-05-10 22:35 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2020-05-11 10:07 ` [dpdk-dev] [PATCH v5] " pbhagavatula
2020-05-19 16:15 ` Thomas Monjalon