From: "Min Hu (Connor)" <humin29@huawei.com>
To: "Li, Xiaoyun" <xiaoyun.li@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Yigit, Ferruh" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix statistics in multiple process
Date: Fri, 17 Sep 2021 11:33:59 +0800 [thread overview]
Message-ID: <d2f6a530-376d-5e34-3a64-d7dbc307c933@huawei.com> (raw)
In-Reply-To: <DM4PR11MB55348C21A90288AA6875C1E699DC9@DM4PR11MB5534.namprd11.prod.outlook.com>
Hi, Xiaoyun,
在 2021/9/16 13:17, Li, Xiaoyun 写道:
> Hi
>
>> -----Original Message-----
>> From: Min Hu (Connor) <humin29@huawei.com>
>> Sent: Tuesday, September 14, 2021 11:13
>> To: dev@dpdk.org
>> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>
>> Subject: [PATCH] app/testpmd: fix statistics in multiple process
>>
>> Currently, 'clear port stats all' in a secondary process clears the stats
>> in the PMD and also updates the stats copy the app stores in the 'ports[]'
>> array of that secondary process; the copy in the primary process remains
>> unchanged. So when 'show fwd stats all' runs in the primary process, the
>> stats in the PMD may be less than the stats the app stores in its 'ports[]'
>> array (which differ from those in the secondary process), and the forward
>> statistics (stats in the PMD minus the stats the app stores in 'ports[]')
>> will be wrong. Like this:
>> ---------------------- Forward statistics for port 0 --------------
>> RX-packets: 18446744073703120168 RX-dropped: 18446744073704076766
>> RX-total: 18446744073697645318
>> TX-packets: 18446744073703122216 TX-dropped: 0
>> TX-total: 18446744073703122216
>> --------------------------------------------------------------------
>> Stats in the PMD are shared between processes, but the stats the app
>> stores have a separate copy in each process, and this results in bugs.
>>
>> This patch fixes it by creating shared memory in testpmd to store the
>> last stats for the primary and secondary processes.
>
> Why not just limit the "clear port stats" behavior to the primary process?
> Is there any particular reason a secondary process has to clear stats?
Well, I don't know of any particular reason why a secondary process has
to clear stats. I just wanted to implement this function.
>
> Stats is quite a complicated struct. I feel like it will have race condition issues if you allow two processes to clear it.
> Only allowing reads from multiple processes makes more sense.
Yes, I agree with you; maybe I can add this constraint to the testpmd
guide doc. BTW, what about anyone else's opinion?
>
> Also your patch has issues even if this idea is accepted by others. Please see below.
>
>>
>> Fixes: 184de26c78d0 ("app/testpmd: support multi-process")
>>
>> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
>> ---
>> app/test-pmd/config.c | 4 +--
>> app/test-pmd/testpmd.c | 63 ++++++++++++++++++++++++++++++++++++------
>> app/test-pmd/testpmd.h | 2 +-
>> 3 files changed, 58 insertions(+), 11 deletions(-)
>>
> <snip>
>> @@ -3572,6 +3614,7 @@ init_port_config(void)
>> port->dev_conf.intr_conf.lsc = 1;
>> if (rmv_interrupt && (*port->dev_info.dev_flags &
>> RTE_ETH_DEV_INTR_RMV))
>> port->dev_conf.intr_conf.rmv = 1;
>> + port_stats_init(pid);
>
> init_port_config is called in several places. Why initialize port->stats to 0 here?
> As written, if the user runs a command like "port config 0 rxq 2", which calls init_port_config, port->stats is cleared while the device stats are not.
> The fwd stats you then get will be wrong.
>
> Please read the code carefully. port->stats is reset to the device stats at start_fwd, not reset to 0.
>
> port->stats should only be cleared when you do clear_nic_stats. Otherwise it doesn't make sense as the "last port stats".
>
I see what you mean.
Actually, my intention is for 'port_stats_init' to be called only once
for all ports at the init stage, but 'init_port_config' confused me.
Maybe I can put 'port_stats_init' in 'init_config'; that may work.
Thanks.
>> }
>> }
>>
>> @@ -3882,6 +3925,10 @@ main(int argc, char** argv)
>> rte_exit(EXIT_FAILURE, "Cannot init EAL: %s\n",
>> rte_strerror(rte_errno));
>>
>> + ret = eth_stats_zone_reserve();
>> + if (ret != 0)
>> + rte_exit(EXIT_FAILURE, "Allocate memzone ethdev stats failed");
>> +
>> ret = register_eth_event_callback();
>> if (ret != 0)
>> rte_exit(EXIT_FAILURE, "Cannot register for ethdev events");
>> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
>> 5863b2f43f..a48a302343 100644
>> --- a/app/test-pmd/testpmd.h
>> +++ b/app/test-pmd/testpmd.h
>> @@ -202,7 +202,7 @@ struct rte_port {
>> struct rte_eth_dev_info dev_info; /**< PCI info + driver name */
>> struct rte_eth_conf dev_conf; /**< Port configuration. */
>> struct rte_ether_addr eth_addr; /**< Port ethernet address */
>> - struct rte_eth_stats stats; /**< Last port statistics */
>> + struct rte_eth_stats *stats; /**< Last port statistics */
>> unsigned int socket_id; /**< For NUMA support */
>> uint16_t parse_tunnel:1; /**< Parse internal headers */
>> uint16_t tso_segsz; /**< Segmentation offload MSS for non-
>> tunneled packets. */
>> --
>> 2.33.0
>
> .
>
2021-09-14 3:12 Min Hu (Connor)
2021-09-16 5:17 ` Li, Xiaoyun
2021-09-17 3:33 ` Min Hu (Connor) [this message]
2021-10-26 16:22 ` Ferruh Yigit
2021-10-27 0:09 ` Min Hu (Connor)