patches for DPDK stable branches
From: Rasesh Mody <rmody@marvell.com>
To: Jerin Jacob <jerinjacobk@gmail.com>,
	David Marchand <david.marchand@redhat.com>
Cc: dpdk-dev <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>,
	"Shahed Shaikh" <shshaikh@marvell.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] net/qede: only access sw rx ring index for debug
Date: Thu, 3 Oct 2019 18:30:39 +0000
Message-ID: <BYAPR18MB2838B341D0F0731789EB077CB59F0@BYAPR18MB2838.namprd18.prod.outlook.com>
In-Reply-To: <CALBAE1MwyMqGhzdjLXcVTiDTqYmXXZYu3PCeBeFvkarSHaWnjw@mail.gmail.com>

>From: dev <dev-bounces@dpdk.org> On Behalf Of Jerin Jacob
>Sent: Thursday, October 03, 2019 7:40 AM
>
>On Fri, Sep 27, 2019 at 4:59 PM David Marchand
><david.marchand@redhat.com> wrote:
>>
>> Caught by clang, this idx value is only used for a debug message when
>> the mbufs allocation fails.
>> No need to use idx as a temporary storage.
>>
>> Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx
>> path")
>> Cc: stable@dpdk.org
>
>Rasesh, Shahed,
>
>Please review this patch.

Acked.

Thanks!
-Rasesh
>
>>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
>> ---
>>  drivers/net/qede/qede_rxtx.c | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
>> index c38cbb9..1fbeba2 100644
>> --- a/drivers/net/qede/qede_rxtx.c
>> +++ b/drivers/net/qede/qede_rxtx.c
>> @@ -46,8 +46,6 @@ static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
>>         int i, ret = 0;
>>         uint16_t idx;
>>
>> -       idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
>> -
>>         if (count > QEDE_MAX_BULK_ALLOC_COUNT)
>>                 count = QEDE_MAX_BULK_ALLOC_COUNT;
>>
>> @@ -56,7 +54,9 @@ static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
>>                 PMD_RX_LOG(ERR, rxq,
>>                            "Failed to allocate %d rx buffers "
>>                             "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
>> -                           count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
>> +                           count,
>> +                           rxq->sw_rx_prod & NUM_RX_BDS(rxq),
>> +                           rxq->sw_rx_cons & NUM_RX_BDS(rxq),
>>                             rte_mempool_avail_count(rxq->mb_pool),
>>                             rte_mempool_in_use_count(rxq->mb_pool));
>>                 return -ENOMEM;
>> --
>> 1.8.3.1
>>
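
For illustration, here is a minimal standalone C sketch of the pattern this
patch applies: a value used only by a debug/error path is computed inside
that path rather than precomputed at function entry. The struct, mask, and
function names below are hypothetical stand-ins, not the actual qede driver
definitions:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the driver's NUM_RX_BDS() ring mask. */
#define RING_MASK 0x1ff

struct rxq_model {
	uint16_t sw_rx_prod;
	uint16_t sw_rx_cons;
};

static int alloc_rx_bulk_model(struct rxq_model *rxq, int count, int alloc_ok)
{
	if (!alloc_ok) {
		/* The index is derived from sw_rx_prod only here, in the
		 * error path that actually consumes it. */
		fprintf(stderr,
			"Failed to allocate %d rx buffers sw_rx_prod %u sw_rx_cons %u\n",
			count,
			rxq->sw_rx_prod & RING_MASK,
			rxq->sw_rx_cons & RING_MASK);
		return -1;
	}
	return 0;
}

int main(void)
{
	struct rxq_model rxq = { .sw_rx_prod = 600, .sw_rx_cons = 580 };
	/* Simulate an allocation failure to exercise the debug path. */
	return alloc_rx_bulk_model(&rxq, 32, 0) == -1 ? 0 : 1;
}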

Thread overview: 4+ messages
2019-09-27 11:28 [dpdk-stable] " David Marchand
2019-10-03 14:39 ` [dpdk-stable] [dpdk-dev] " Jerin Jacob
2019-10-03 18:30   ` Rasesh Mody [this message]
2019-10-03 18:29 [dpdk-stable] " Rasesh Mody
2019-10-04  8:49 ` [dpdk-stable] [dpdk-dev] " Jerin Jacob
