From: Kevin Traynor <ktraynor@redhat.com>
To: "Mody, Rasesh" <Rasesh.Mody@cavium.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
Dept-Eng DPDK Dev <Dept-EngDPDKDev@cavium.com>,
"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] net/qede: fix L2-handles used for RSS hash update
Date: Tue, 5 Jun 2018 17:41:43 +0100 [thread overview]
Message-ID: <5abb59fb-5d42-2e91-3004-bf462695958c@redhat.com> (raw)
In-Reply-To: <SN1PR07MB40323F495F2AD26A0593D0F19F660@SN1PR07MB4032.namprd07.prod.outlook.com>
On 06/05/2018 05:16 PM, Mody, Rasesh wrote:
>> From: Kevin Traynor [mailto:ktraynor@redhat.com]
>> Sent: Tuesday, June 05, 2018 6:40 AM
>>
>> On 06/01/2018 06:16 PM, Rasesh Mody wrote:
>>> Fix the fast path array index which is used for passing L2 handles to
>>> the RSS indirection table. Currently, it is using the local copy of the
>>> indirection table. When the RX queue configuration changes, the local
>>> copy becomes invalid.
>>>
>>> Fixes: 69d7ba88f1a1 ("net/qede/base: use L2-handles for RSS
>>> configuration")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
>>> ---
>>> drivers/net/qede/qede_ethdev.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
>>> index 3206cc6..6e9e76d 100644
>>> --- a/drivers/net/qede/qede_ethdev.c
>>> +++ b/drivers/net/qede/qede_ethdev.c
>>> @@ -2210,7 +2210,7 @@ int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
>>> vport_update_params.vport_id = 0;
>>> /* pass the L2 handles instead of qids */
>>> for (i = 0 ; i < ECORE_RSS_IND_TABLE_SIZE ; i++) {
>>> - idx = qdev->rss_ind_table[i];
>>> +		idx = ECORE_RSS_IND_TABLE_SIZE % QEDE_RSS_COUNT(qdev);
>>>  		rss_params.rss_ind_table[i] = qdev->fp_array[idx].rxq->handle;
>>
>> hi, idx never changes in the loop, so the same rxq handle ends up in
>> every rss_ind_table entry - is that right?
>
> The idx depends on the number of RXQs. If a single RXQ is configured then idx does not change in the loop, in which case the same RXQ handle is in every entry.
The value depends on the number of Rxqs, but it will not change across
the 128 iterations *regardless* of how many Rxqs are configured
(assuming the count is static during the loop). Perhaps that's what you
want, but it looks odd to recalculate idx on every loop iteration when
it won't change.
idx = ECORE_RSS_IND_TABLE_SIZE % QEDE_RSS_COUNT(qdev)
=>
idx = 128 % qdev->num_rx_queues
>
> Thanks!
> -Rasesh
>>
>>> }
>>> vport_update_params.rss_params = &rss_params;
>>>
>
Thread overview: 8+ messages
2018-06-01 17:16 Rasesh Mody
2018-06-05 13:39 ` Kevin Traynor
2018-06-05 16:16 ` Mody, Rasesh
2018-06-05 16:41 ` Kevin Traynor [this message]
2018-06-05 17:14 ` Mody, Rasesh
2018-06-05 23:03 ` [dpdk-dev] [PATCH v2] " Rasesh Mody
2018-06-06 11:11 ` Kevin Traynor
2018-06-06 18:40 ` Ferruh Yigit