DPDK patches and discussions
From: Jerin Jacob <jerinjacobk@gmail.com>
To: Pavan Nikhilesh <pbhagavatula@marvell.com>,
	Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Jerin Jacob <jerinj@marvell.com>,
	Harman Kalra <hkalra@marvell.com>, dpdk-dev <dev@dpdk.org>,
	dpdk stable <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 2/2] net/octeontx: fix Tx xmit command preparation
Date: Sun, 27 Sep 2020 18:36:45 +0530	[thread overview]
Message-ID: <CALBAE1PUSDnefB8=906c8FfLj0S=Y_f7OxU6LZ0wpjMn3vRoTg@mail.gmail.com> (raw)
In-Reply-To: <20200728184347.3105-2-pbhagavatula@marvell.com>

On Wed, Jul 29, 2020 at 12:14 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> When building the send command for a given descriptor, the command
> is expected to contain the AURA identifier of the pool that the
> mbuf belongs to, rather than the pool identifier itself.
>
> Fixes: 7f4116bdbb1c ("net/octeontx: add framework for Rx/Tx offloads")
> Cc: stable@dpdk.org
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

Series Acked-by: Jerin Jacob <jerinj@marvell.com>

Applied to dpdk-next-net-mrvl/master. Thanks



> ---
>  drivers/net/octeontx/octeontx_rxtx.h | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/octeontx/octeontx_rxtx.h b/drivers/net/octeontx/octeontx_rxtx.h
> index 8b46105b6..4dcd94530 100644
> --- a/drivers/net/octeontx/octeontx_rxtx.h
> +++ b/drivers/net/octeontx/octeontx_rxtx.h
> @@ -337,8 +337,7 @@ __octeontx_xmit_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
>                 __mempool_check_cookies(tx_pkt->pool, (void **)&tx_pkt,
>                                         1, 0);
>         /* Get the gaura Id */
> -       gaura_id = octeontx_fpa_bufpool_gpool((uintptr_t)
> -                                             tx_pkt->pool->pool_id);
> +       gaura_id = octeontx_fpa_bufpool_gaura((uintptr_t)tx_pkt->pool->pool_id);
>
>         /* Setup PKO_SEND_BUFLINK_S */
>         cmd_buf[nb_desc++] = PKO_SEND_BUFLINK_SUBDC |
> @@ -373,7 +372,7 @@ __octeontx_xmit_mseg_prepare(struct rte_mbuf *tx_pkt, uint64_t *cmd_buf,
>                 /* To handle case where mbufs belong to diff pools, like
>                  * fragmentation
>                  */
> -               gaura_id = octeontx_fpa_bufpool_gpool((uintptr_t)
> +               gaura_id = octeontx_fpa_bufpool_gaura((uintptr_t)
>                                                       tx_pkt->pool->pool_id);
>
>                 /* Setup PKO_SEND_GATHER_S */
> --
> 2.17.1
>
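
For context, a minimal self-contained sketch of the distinction the fix
depends on follows. The ex_* helper names and the bit layout are
hypothetical placeholders, not the driver's actual encoding; the real
octeontx_fpa_bufpool_gpool() and octeontx_fpa_bufpool_gaura() helpers in
the octeontx mempool driver define how the pool handle actually packs
the two indexes.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical handle layout, for illustration only: the real
 * octeontx_fpa_bufpool_gpool()/_gaura() helpers use the device's
 * actual bit layout, which may differ.
 */
#define EX_GPOOL_MASK   0x1Fu  /* assumed: low 5 bits hold the pool index */
#define EX_GAURA_SHIFT  5u     /* assumed: aura index sits above the pool bits */
#define EX_GAURA_MASK   0x1Fu

/* Pool (GPOOL) index: identifies the FPA pool backing the mempool. */
static inline uint16_t
ex_bufpool_gpool(uintptr_t handle)
{
        return (uint16_t)(handle & EX_GPOOL_MASK);
}

/* Aura (GAURA) index: the value the PKO send descriptor words in the
 * patch (PKO_SEND_BUFLINK_S, PKO_SEND_GATHER_S) are built from. */
static inline uint16_t
ex_bufpool_gaura(uintptr_t handle)
{
        return (uint16_t)((handle >> EX_GAURA_SHIFT) & EX_GAURA_MASK);
}

int main(void)
{
        /* Example handle encoding aura 3 and pool 2 under the assumed layout. */
        uintptr_t pool_id = (3u << EX_GAURA_SHIFT) | 2u;

        printf("gpool=%u gaura=%u\n",
               (unsigned)ex_bufpool_gpool(pool_id),
               (unsigned)ex_bufpool_gaura(pool_id));
        return 0;
}

Under that assumption, using the pool (GPOOL) index where the send
descriptor expects the aura (GAURA) index points the hardware at the
wrong aura, which is what the switch to octeontx_fpa_bufpool_gaura() in
__octeontx_xmit_prepare() and __octeontx_xmit_mseg_prepare() corrects.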


Thread overview: 3+ messages
2020-07-28 18:43 [dpdk-dev] [PATCH 1/2] mempool/octeontx: fix aura to pool mapping pbhagavatula
2020-07-28 18:43 ` [dpdk-dev] [PATCH 2/2] net/octeontx: fix Tx xmit command preparation pbhagavatula
2020-09-27 13:06   ` Jerin Jacob [this message]
