From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Xia, Chenbo", David Christensen, "Wang, Zhihong", dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] net/vhost: fix xstats wrong after clearing stats
Date: Wed, 23 Sep 2020 10:07:34 +0200
Message-ID: <6f325c96-beca-fe9c-0622-90a8a6522e0c@redhat.com>
References: <20200902170309.16513-1-drc@linux.vnet.ibm.com>
List-Id: DPDK patches and discussions

Hi David,

Could you please post a v2 with Chenbo's comments taken into account?
Thanks,
Maxime

On 9/11/20 9:44 AM, Xia, Chenbo wrote:
> Hi David,
>
> Thanks for working on this. Comments inline.
>
>> -----Original Message-----
>> From: David Christensen
>> Sent: Thursday, September 3, 2020 1:03 AM
>> To: maxime.coquelin@redhat.com; Xia, Chenbo; Wang, Zhihong; dev@dpdk.org
>> Cc: David Christensen
>> Subject: [PATCH] net/vhost: fix xstats wrong after clearing stats
>>
>> The PMD API allows stats and xstats values to be cleared separately.
>> This is a problem for the vhost PMD since some of the xstats values are
>> derived from existing stats values. For example:
>>
>> testpmd> show port xstats all
>> ...
>> tx_unicast_packets: 17562959
>> ...
>> testpmd> clear port stats all
>> ...
>> testpmd> show port xstats all
>> ...
>> tx_unicast_packets: 18446744073709551615
>> ...
>>
>> Modify the driver so that stats and xstats values are stored, updated,
>> and cleared separately.
>
> I think it's a fix patch, so please add 'Fixes: XXX' and cc stable@dpdk.org
> in your commit message.
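[Editor's note] The wrapped counter shown above, 18446744073709551615, is UINT64_MAX: once `clear port stats all` zeroes stats.pkts while the broadcast/multicast xstats survive, the derived unicast count underflows in unsigned 64-bit arithmetic. A minimal sketch of that failure mode; `derive_unicast` is a hypothetical stand-in for the old derivation, not the driver's code:

```c
#include <stdint.h>

/* Hypothetical miniature of the pre-patch xstats derivation: unicast
 * packets are computed from the good-packet counter minus the broadcast
 * and multicast counters. */
static uint64_t
derive_unicast(uint64_t pkts, uint64_t bcast, uint64_t mcast)
{
	/* If pkts was cleared but bcast/mcast were not, this subtraction
	 * wraps around to a huge value near UINT64_MAX. */
	return pkts - (bcast + mcast);
}
```

Giving good_packets its own xstats counter, as the patch does, removes this cross-dependency, so clearing stats can no longer push the derived xstats values negative.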
>
>>
>> Signed-off-by: David Christensen
>> ---
>>  drivers/net/vhost/rte_eth_vhost.c | 54 ++++++++++++++++++-------------
>>  1 file changed, 32 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
>> index e55278af6..4e72cc2ca 100644
>> --- a/drivers/net/vhost/rte_eth_vhost.c
>> +++ b/drivers/net/vhost/rte_eth_vhost.c
>> @@ -73,6 +73,9 @@ enum vhost_xstats_pkts {
>>  	VHOST_BROADCAST_PKT,
>>  	VHOST_MULTICAST_PKT,
>>  	VHOST_UNICAST_PKT,
>> +	VHOST_PKT,
>> +	VHOST_BYTE,
>> +	VHOST_MISSED_PKT,
>>  	VHOST_ERRORS_PKT,
>>  	VHOST_ERRORS_FRAGMENTED,
>>  	VHOST_ERRORS_JABBER,
>> @@ -149,11 +152,11 @@ struct vhost_xstats_name_off {
>>  /* [rx]_ is prepended to the name string here */
>>  static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
>>  	{"good_packets",
>> -	 offsetof(struct vhost_queue, stats.pkts)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
>>  	{"total_bytes",
>> -	 offsetof(struct vhost_queue, stats.bytes)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
>>  	{"missed_pkts",
>> -	 offsetof(struct vhost_queue, stats.missed_pkts)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
>>  	{"broadcast_packets",
>>  	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
>>  	{"multicast_packets",
>> @@ -189,11 +192,11 @@ static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
>>  /* [tx]_ is prepended to the name string here */
>>  static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
>>  	{"good_packets",
>> -	 offsetof(struct vhost_queue, stats.pkts)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
>>  	{"total_bytes",
>> -	 offsetof(struct vhost_queue, stats.bytes)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
>>  	{"missed_pkts",
>> -	 offsetof(struct vhost_queue, stats.missed_pkts)},
>> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
>>  	{"broadcast_packets",
>>  	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
>>  	{"multicast_packets",
>> @@ -291,18 +294,11 @@ vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
>>  		vq = dev->data->rx_queues[i];
>>  		if (!vq)
>>  			continue;
>> -		vq->stats.xstats[VHOST_UNICAST_PKT] = vq->stats.pkts
>> -			- (vq->stats.xstats[VHOST_BROADCAST_PKT]
>> -			+ vq->stats.xstats[VHOST_MULTICAST_PKT]);
>>  	}
>
> Why not delete the for loop here?
>
>>  	for (i = 0; i < dev->data->nb_tx_queues; i++) {
>>  		vq = dev->data->tx_queues[i];
>>  		if (!vq)
>>  			continue;
>> -		vq->stats.xstats[VHOST_UNICAST_PKT] = vq->stats.pkts
>> -			+ vq->stats.missed_pkts
>> -			- (vq->stats.xstats[VHOST_BROADCAST_PKT]
>> -			+ vq->stats.xstats[VHOST_MULTICAST_PKT]);
>>  	}
>
> Ditto.
>
>>  	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
>>  		xstats[count].value = 0;
>> @@ -346,20 +342,27 @@ vhost_count_multicast_broadcast(struct vhost_queue *vq,
>>  			pstats->xstats[VHOST_BROADCAST_PKT]++;
>>  		else
>>  			pstats->xstats[VHOST_MULTICAST_PKT]++;
>> +	} else {
>> +		pstats->xstats[VHOST_UNICAST_PKT]++;
>
> As this function also counts unicast pkts now, the function name should
> be changed. Besides, in 'eth_vhost_tx', which calls this function, there's
> a comment about why we call the function. I think that should also be
> updated.
>
> Thanks!
> Chenbo
>
>>  	}
>>  }
>>
>>  static void
>> -vhost_update_packet_xstats(struct vhost_queue *vq,
>> -			   struct rte_mbuf **bufs,
>> -			   uint16_t count)
>> +vhost_update_packet_xstats(struct vhost_queue *vq, struct rte_mbuf **bufs,
>> +			   uint16_t count, uint64_t nb_bytes,
>> +			   uint64_t nb_missed)
>>  {
>>  	uint32_t pkt_len = 0;
>>  	uint64_t i = 0;
>>  	uint64_t index;
>>  	struct vhost_stats *pstats = &vq->stats;
>>
>> +	pstats->xstats[VHOST_BYTE] += nb_bytes;
>> +	pstats->xstats[VHOST_MISSED_PKT] += nb_missed;
>> +	pstats->xstats[VHOST_UNICAST_PKT] += nb_missed;
>> +
>>  	for (i = 0; i < count ; i++) {
>> +		pstats->xstats[VHOST_PKT]++;
>>  		pkt_len = bufs[i]->pkt_len;
>>  		if (pkt_len == 64) {
>>  			pstats->xstats[VHOST_64_PKT]++;
>> @@ -385,6 +388,7 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>  	struct vhost_queue *r = q;
>>  	uint16_t i, nb_rx = 0;
>>  	uint16_t nb_receive = nb_bufs;
>> +	uint64_t nb_bytes = 0;
>>
>>  	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>>  		return 0;
>> @@ -419,10 +423,11 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>  		if (r->internal->vlan_strip)
>>  			rte_vlan_strip(bufs[i]);
>>
>> -		r->stats.bytes += bufs[i]->pkt_len;
>> +		nb_bytes += bufs[i]->pkt_len;
>>  	}
>>
>> -	vhost_update_packet_xstats(r, bufs, nb_rx);
>> +	r->stats.bytes += nb_bytes;
>> +	vhost_update_packet_xstats(r, bufs, nb_rx, nb_bytes, 0);
>>
>>  out:
>>  	rte_atomic32_set(&r->while_queuing, 0);
>> @@ -436,6 +441,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>  	struct vhost_queue *r = q;
>>  	uint16_t i, nb_tx = 0;
>>  	uint16_t nb_send = 0;
>> +	uint64_t nb_bytes = 0;
>> +	uint64_t nb_missed = 0;
>>
>>  	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>>  		return 0;
>> @@ -476,13 +483,16 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>>  			break;
>>  	}
>>
>> +	for (i = 0; likely(i < nb_tx); i++)
>> +		nb_bytes += bufs[i]->pkt_len;
>> +
>> +	nb_missed = nb_bufs - nb_tx;
>> +
>>  	r->stats.pkts += nb_tx;
>> +	r->stats.bytes += nb_bytes;
>>  	r->stats.missed_pkts += nb_bufs - nb_tx;
>>
>> -	for (i = 0; likely(i < nb_tx); i++)
>> -		r->stats.bytes += bufs[i]->pkt_len;
>> -
>> -	vhost_update_packet_xstats(r, bufs, nb_tx);
>> +	vhost_update_packet_xstats(r, bufs, nb_tx, nb_bytes, nb_missed);
>>
>>  	/* According to RFC2863 page42 section ifHCOutMulticastPkts and
>>  	 * ifHCOutBroadcastPkts, the counters "multicast" and "broadcast"
>> --
>> 2.18.4
>
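[Editor's note] For context on the classification the reviewed helper performs: per RFC 2863's counter definitions, unicast, multicast, and broadcast frames are told apart by the Ethernet destination address, where the I/G bit (least-significant bit of the first octet) marks a group address and the all-ones address is broadcast. A self-contained sketch of that rule; `classify_eth_dst` is an illustrative helper, not the PMD's actual function:

```c
#include <stdint.h>

enum pkt_class { PKT_UNICAST, PKT_MULTICAST, PKT_BROADCAST };

/* Classify an Ethernet destination address: if the I/G bit (LSB of the
 * first octet) is clear the frame is unicast; if it is set and every
 * octet is 0xff the frame is broadcast, otherwise multicast. */
static enum pkt_class
classify_eth_dst(const uint8_t dst[6])
{
	int i;

	if (!(dst[0] & 0x01))
		return PKT_UNICAST;	/* group bit clear */
	for (i = 0; i < 6; i++)
		if (dst[i] != 0xff)
			return PKT_MULTICAST;
	return PKT_BROADCAST;		/* ff:ff:ff:ff:ff:ff */
}
```

This is why the patch moves the unicast increment into the same helper that already counted broadcast and multicast: the three classes partition all frames, so counting two of them and deriving the third from separate clearable counters was fragile.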