DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Min Hu (Connor)" <humin29@huawei.com>, dev@dpdk.org
Cc: bruce.richardson@intel.com, thomas.monjalon@6wind.com,
	lihuisong@huawei.com
Subject: Re: [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration
Date: Thu, 12 Nov 2020 09:52:09 +0000	[thread overview]
Message-ID: <531504ce-0547-48c2-4e34-c2ed6cb12e57@intel.com> (raw)
In-Reply-To: <3f67aa47-7bec-d31c-8162-e800c9800e8a@huawei.com>

On 11/12/2020 2:28 AM, Min Hu (Connor) wrote:
> Hi Ferruh, any suggestions?
> 
> 
> On 2020/11/3 14:30, Min Hu (Connor) wrote:
>> Hi Ferruh,
>>
>> I agree with your proposal. But if we remove the record structures, we
>> will not be able to query the current queue stats mapping configuration.
>> Alternatively, we could provide a query API for PMD drivers that use the
>> set_queue_stats_mapping API, with the driver recording the mapping
>> information supplied by the user.
>>
>> What do you think?
>>

Sorry for the delay,

Yes, that information will be lost, but since the queue stats mapping is not
commonly used, I think it is OK to lose it.

As you said, another option is to add a new ethdev API to get the queue stats
mapping, but for the same reason I am not sure about adding it. We can add it
later if there is a request for it.
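
For reference, if such a getter were ever added it would presumably mirror
the existing setters. The prototypes below are purely hypothetical; ethdev
today only provides rte_eth_dev_set_rx_queue_stats_mapping() and
rte_eth_dev_set_tx_queue_stats_mapping().

#include <stdint.h>

/* Hypothetical prototypes, for illustration only; no such getters
 * exist in rte_ethdev.h. */
int rte_eth_dev_get_rx_queue_stats_mapping(uint16_t port_id,
					   uint16_t rx_queue_id,
					   uint8_t *stat_idx);
int rte_eth_dev_get_tx_queue_stats_mapping(uint16_t port_id,
					   uint16_t tx_queue_id,
					   uint8_t *stat_idx);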

>>
>> On 2020/10/31 4:54, Ferruh Yigit wrote:
>>> On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
>>>> From: Huisong Li <lihuisong@huawei.com>
>>>>
>>>> Currently, the queue stats mapping has the following problems:
>>>> 1) Many PMD drivers don't support queue stats mapping, but there is no
>>>> failure message after executing the command "set stat_qmap rx 0 2 2".
>>>> 2) Once a queue mapping is set, unrelated and unmapped queues are also
>>>> displayed.
>>>> 3) There is no need to keep cache line alignment for
>>>> 'struct queue_stats_mappings'.
>>>> 4) The mapping arrays 'tx_queue_stats_mappings_array' &
>>>> 'rx_queue_stats_mappings_array' are global, and their sizes are based on
>>>> fixed maximum port and queue count assumptions.
>>>> 5) The configuration result does not take effect or cannot be queried
>>>> in real time.
>>>>
>>>> Therefore, we have made the following adjustments:
>>>> 1) If the PMD supports queue stats mapping, configure the driver in real
>>>> time after executing the command "set stat_qmap rx/tx ...". If not,
>>>> the command is rejected.
>>>> 2) Only display queues whose mapping has been configured, by adding a new
>>>> 'active' field to the queue_stats_mappings struct.
>>>> 3) Remove cache alignment for 'struct queue_stats_mappings'.
>>>> 4) Add a new port_stats_mappings struct in rte_port. The struct contains
>>>> the number of rx/tx queue stats mappings, rx/tx queue_stats_mapping_enabled
>>>> flags, and rx/tx queue_stats_mapping arrays. The size of
>>>> queue_stats_mapping_array is set to "RTE_ETHDEV_QUEUE_STAT_CNTRS" to ensure
>>>> that the same number of queues can be set for each port.
>>>>
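
For reference, a rough sketch of the record structures described in points
2) and 4) above; everything except the 'active' field and the
RTE_ETHDEV_QUEUE_STAT_CNTRS sizing is a guess based on the commit message,
not the actual patch contents:

/* Per-queue mapping entry; cache-line alignment dropped, 'active' marks
 * entries whose mapping has actually been configured. portid_t comes
 * from testpmd.h. */
struct queue_stats_mappings {
	portid_t port_id;
	uint16_t queue_id;
	uint8_t  stats_counter_id;
	uint8_t  active;
};

/* Per-port mapping state kept inside struct rte_port. */
struct port_stats_mappings {
	uint16_t nb_rx_queue_stats_mappings;
	uint16_t nb_tx_queue_stats_mappings;
	uint8_t  rx_queue_stats_mapping_enabled;
	uint8_t  tx_queue_stats_mapping_enabled;
	struct queue_stats_mappings
		rx_queue_stats_mappings[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	struct queue_stats_mappings
		tx_queue_stats_mappings[RTE_ETHDEV_QUEUE_STAT_CNTRS];
};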
>>>
>>> Hi Connor,
>>>
>>> I think the above adjustments are good, but after the decision to use xstats
>>> for the queue stats, what do you think about some more simplification:
>>>
>>> 1)
>>> What testpmd does now is record the queue stats mapping commands and
>>> register them later on port start & forwarding start.
>>> What happens if recording and registering are removed completely?
>>> When "set stat_qmap .." is issued, it would just call the ethdev APIs to do
>>> the mapping in the device.
>>> This lets us remove the record structures, "struct port_stats_mappings
>>> p_stats_map".
>>> It also lets us remove 'map_port_queue_stats_mapping_registers()' and its
>>> sub-functions.
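
To make 1) concrete, here is a minimal sketch, for illustration only and not
the actual patch, of a "set stat_qmap" handler calling the existing ethdev
setters directly, assuming rte_eth_dev_set_rx/tx_queue_stats_mapping() as
declared in rte_ethdev.h; the helper name and messages are made up:

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
set_qmap_direct(uint16_t port_id, int is_rx, uint16_t queue_id,
		uint8_t map_value)
{
	int ret;

	/* Push the mapping straight to the device, no local record kept. */
	if (is_rx)
		ret = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
				queue_id, map_value);
	else
		ret = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
				queue_id, map_value);

	if (ret == -ENOTSUP)
		printf("Port %u: queue stats mapping not supported\n",
		       port_id);
	else if (ret != 0)
		printf("Port %u: queue stats mapping failed: %d\n",
		       port_id, ret);
}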
>>>
>>> 2)
>>> Also, let's remove the "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
>>> parameters, which enables removing the 'parse_queue_stats_mapping_config()'
>>> function too.
>>>
>>> 3)
>>> Another problem is displaying the queue stats: in 'fwd_stats_display()' &
>>> 'nic_stats_display()' there is a check whether queue stats mapping is
>>> enabled ('rx_queue_stats_mapping_enabled' & 'tx_queue_stats_mapping_enabled').
>>> I think displaying queue stats and queue stats mapping should be separate,
>>> so why not drop the checks for queue stats mapping and display the queue
>>> stats for 'nb_rxq' & 'nb_txq' queues?
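
And to illustrate 3), a minimal sketch of displaying per-queue counters
straight from rte_eth_stats for every configured queue, with no check of any
mapping state; it assumes the standard rte_eth_stats_get() API, while the
helper name and output format are made up:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
show_queue_stats(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_stats stats;
	uint16_t q;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	/* Per-queue basic counters, capped at what ethdev can report. */
	for (q = 0; q < nb_rxq && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("  RX-queue %2u: packets %" PRIu64 " bytes %" PRIu64 "\n",
		       q, stats.q_ipackets[q], stats.q_ibytes[q]);
	for (q = 0; q < nb_txq && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("  TX-queue %2u: packets %" PRIu64 " bytes %" PRIu64 "\n",
		       q, stats.q_opackets[q], stats.q_obytes[q]);
}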
>>>
>>> Does the above make sense?
>>>
>>>
>>> The majority of the drivers don't require queue stats mapping to get the
>>> queue stats; let's not pollute the main usage with this requirement.
>>>
>>>
>>>> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
>>>> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
>>>> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
>>>>
>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>
>>> <...>
>>>
>>> .



Thread overview: 20+ messages
2020-10-20  8:26 [dpdk-dev] [RFC V2 0/2] fix queue stats mapping Min Hu (Connor)
2020-10-20  8:26 ` [dpdk-dev] [RFC V2 1/2] app/testpmd: fix queue stats mapping configuration Min Hu (Connor)
2020-10-30 20:54   ` Ferruh Yigit
2020-11-03  6:30     ` Min Hu (Connor)
2020-11-12  2:28       ` Min Hu (Connor)
2020-11-12  9:52         ` Ferruh Yigit [this message]
2020-11-18  3:39           ` Min Hu (Connor)
2020-11-20 11:50           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
2020-11-20 17:26             ` Ferruh Yigit
2020-11-20 23:21               ` Stephen Hemminger
2020-11-20 23:33                 ` Ferruh Yigit
2020-11-21  4:29                   ` Stephen Hemminger
2020-11-23  7:22                     ` Min Hu (Connor)
2020-11-23  9:51                     ` Ferruh Yigit
2020-11-30  8:29                       ` Min Hu (Connor)
2020-12-02 10:44                         ` Ferruh Yigit
2020-12-02 12:48                           ` [dpdk-dev] [PATCH V1] " Min Hu (Connor)
2020-12-08 15:48                             ` Ferruh Yigit
2020-12-07  1:28                           ` [dpdk-dev] [RFC V4] " Min Hu (Connor)
2020-10-20  8:26 ` [dpdk-dev] [RFC V2 2/2] app/testpmd: fix starting failed with queue-stats-mapping Min Hu (Connor)

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=531504ce-0547-48c2-4e34-c2ed6cb12e57@intel.com \
    --to=ferruh.yigit@intel.com \
    --cc=bruce.richardson@intel.com \
    --cc=dev@dpdk.org \
    --cc=humin29@huawei.com \
    --cc=lihuisong@huawei.com \
    --cc=thomas.monjalon@6wind.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
