From: David Christensen <drc@linux.vnet.ibm.com>
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com
Cc: stable@dpdk.org, David Christensen <drc@linux.vnet.ibm.com>, zhiyong.yang@intel.com
Date: Tue, 6 Oct 2020 14:23:16 -0700
Message-Id: <20201006212316.409587-1-drc@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <20200902170309.16513-1-drc@linux.vnet.ibm.com>
References: <20200902170309.16513-1-drc@linux.vnet.ibm.com>
Subject: [dpdk-stable] [PATCH v2] net/vhost: fix xstats wrong after clearing stats

The PMD API allows stats and xstats values to be cleared separately.
This is a problem for the vhost PMD since some of the xstats values are
derived from existing stats values. For example:

  testpmd> show port xstats all
  ...
  tx_unicast_packets: 17562959
  ...
  testpmd> clear port stats all
  ...
  show port xstats all
  ...
  tx_unicast_packets: 18446744073709551615
  ...

Modify the driver so that stats and xstats values are stored, updated,
and cleared separately.

Fixes: 4d6cf2ac93dc ("net/vhost: add extended statistics")
Cc: zhiyong.yang@intel.com

Signed-off-by: David Christensen <drc@linux.vnet.ibm.com>
---
v2:
* Removed newly unused vq loops
* Added "fixes" message
* Renamed vhost_count_multicast_broadcast to vhost_count_xcast_packets

 drivers/net/vhost/rte_eth_vhost.c | 70 +++++++++++++++----------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index e55278af6..163cf9409 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -73,6 +73,9 @@ enum vhost_xstats_pkts {
 	VHOST_BROADCAST_PKT,
 	VHOST_MULTICAST_PKT,
 	VHOST_UNICAST_PKT,
+	VHOST_PKT,
+	VHOST_BYTE,
+	VHOST_MISSED_PKT,
 	VHOST_ERRORS_PKT,
 	VHOST_ERRORS_FRAGMENTED,
 	VHOST_ERRORS_JABBER,
@@ -149,11 +152,11 @@ struct vhost_xstats_name_off {
 /* [rx]_is prepended to the name string here */
 static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
 	{"good_packets",
-	 offsetof(struct vhost_queue, stats.pkts)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
 	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.bytes)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
 	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.missed_pkts)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
 	{"broadcast_packets",
 	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
 	{"multicast_packets",
@@ -189,11 +192,11 @@ static const struct vhost_xstats_name_off vhost_rxport_stat_strings[] = {
 /* [tx]_ is prepended to the name string here */
 static const struct vhost_xstats_name_off vhost_txport_stat_strings[] = {
 	{"good_packets",
-	 offsetof(struct vhost_queue, stats.pkts)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_PKT])},
 	{"total_bytes",
-	 offsetof(struct vhost_queue, stats.bytes)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_BYTE])},
 	{"missed_pkts",
-	 offsetof(struct vhost_queue, stats.missed_pkts)},
+	 offsetof(struct vhost_queue, stats.xstats[VHOST_MISSED_PKT])},
 	{"broadcast_packets",
 	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
 	{"multicast_packets",
@@ -287,23 +290,6 @@ vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	if (n < nxstats)
 		return nxstats;
 
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		vq = dev->data->rx_queues[i];
-		if (!vq)
-			continue;
-		vq->stats.xstats[VHOST_UNICAST_PKT] = vq->stats.pkts
-				- (vq->stats.xstats[VHOST_BROADCAST_PKT]
-				+ vq->stats.xstats[VHOST_MULTICAST_PKT]);
-	}
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		vq = dev->data->tx_queues[i];
-		if (!vq)
-			continue;
-		vq->stats.xstats[VHOST_UNICAST_PKT] = vq->stats.pkts
-				+ vq->stats.missed_pkts
-				- (vq->stats.xstats[VHOST_BROADCAST_PKT]
-				+ vq->stats.xstats[VHOST_MULTICAST_PKT]);
-	}
 	for (t = 0; t < VHOST_NB_XSTATS_RXPORT; t++) {
 		xstats[count].value = 0;
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -334,7 +320,7 @@ vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 }
 
 static inline void
-vhost_count_multicast_broadcast(struct vhost_queue *vq,
+vhost_count_xcast_packets(struct vhost_queue *vq,
 				struct rte_mbuf *mbuf)
 {
 	struct rte_ether_addr *ea = NULL;
@@ -346,20 +332,27 @@ vhost_count_multicast_broadcast(struct vhost_queue *vq,
 			pstats->xstats[VHOST_BROADCAST_PKT]++;
 		else
 			pstats->xstats[VHOST_MULTICAST_PKT]++;
+	} else {
+		pstats->xstats[VHOST_UNICAST_PKT]++;
 	}
 }
 
 static void
-vhost_update_packet_xstats(struct vhost_queue *vq,
-			   struct rte_mbuf **bufs,
-			   uint16_t count)
+vhost_update_packet_xstats(struct vhost_queue *vq, struct rte_mbuf **bufs,
+			   uint16_t count, uint64_t nb_bytes,
+			   uint64_t nb_missed)
 {
 	uint32_t pkt_len = 0;
 	uint64_t i = 0;
 	uint64_t index;
 	struct vhost_stats *pstats = &vq->stats;
 
+	pstats->xstats[VHOST_BYTE] += nb_bytes;
+	pstats->xstats[VHOST_MISSED_PKT] += nb_missed;
+	pstats->xstats[VHOST_UNICAST_PKT] += nb_missed;
+
 	for (i = 0; i < count ; i++) {
+		pstats->xstats[VHOST_PKT]++;
 		pkt_len = bufs[i]->pkt_len;
 		if (pkt_len == 64) {
 			pstats->xstats[VHOST_64_PKT]++;
@@ -375,7 +368,7 @@ vhost_update_packet_xstats(struct vhost_queue *vq,
 		else if (pkt_len > 1522)
 			pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
 		}
-		vhost_count_multicast_broadcast(vq, bufs[i]);
+		vhost_count_xcast_packets(vq, bufs[i]);
 	}
 }
 
@@ -385,6 +378,7 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	struct vhost_queue *r = q;
 	uint16_t i, nb_rx = 0;
 	uint16_t nb_receive = nb_bufs;
+	uint64_t nb_bytes = 0;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -419,10 +413,11 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		if (r->internal->vlan_strip)
 			rte_vlan_strip(bufs[i]);
 
-		r->stats.bytes += bufs[i]->pkt_len;
+		nb_bytes += bufs[i]->pkt_len;
 	}
 
-	vhost_update_packet_xstats(r, bufs, nb_rx);
+	r->stats.bytes += nb_bytes;
+	vhost_update_packet_xstats(r, bufs, nb_rx, nb_bytes, 0);
 
 out:
 	rte_atomic32_set(&r->while_queuing, 0);
@@ -436,6 +431,8 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	struct vhost_queue *r = q;
 	uint16_t i, nb_tx = 0;
 	uint16_t nb_send = 0;
+	uint64_t nb_bytes = 0;
+	uint64_t nb_missed = 0;
 
 	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
 		return 0;
@@ -476,20 +473,23 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		break;
 	}
 
+	for (i = 0; likely(i < nb_tx); i++)
+		nb_bytes += bufs[i]->pkt_len;
+
+	nb_missed = nb_bufs - nb_tx;
 	r->stats.pkts += nb_tx;
+	r->stats.bytes += nb_bytes;
 	r->stats.missed_pkts += nb_bufs - nb_tx;
 
-	for (i = 0; likely(i < nb_tx); i++)
-		r->stats.bytes += bufs[i]->pkt_len;
-
-	vhost_update_packet_xstats(r, bufs, nb_tx);
+	vhost_update_packet_xstats(r, bufs, nb_tx, nb_bytes, nb_missed);
 
 	/* According to RFC2863 page42 section ifHCOutMulticastPkts and
 	 * ifHCOutBroadcastPkts, the counters "multicast" and "broadcast"
 	 * are increased when packets are not transmitted successfully.
 	 */
 	for (i = nb_tx; i < nb_bufs; i++)
-		vhost_count_multicast_broadcast(r, bufs[i]);
+		vhost_count_xcast_packets(r, bufs[i]);
 
 	for (i = 0; likely(i < nb_tx); i++)
 		rte_pktmbuf_free(bufs[i]);
-- 
2.18.4
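
For context on the wrapped value quoted in the commit message: before this
patch, tx_unicast_packets was derived on the fly in vhost_dev_xstats_get() as
stats.pkts + stats.missed_pkts - (broadcast + multicast), so clearing the base
stats while the broadcast/multicast xstats stayed nonzero made the subtraction
wrap around in uint64_t arithmetic. The following standalone sketch reproduces
that failure mode; the counter values are illustrative only and the variable
names are not taken from the driver:

	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Counters as the pre-patch vhost PMD keeps them. */
		uint64_t pkts  = 17562959; /* base stat, zeroed by "clear port stats" */
		uint64_t bcast = 20;       /* xstats, untouched by "clear port stats" */
		uint64_t mcast = 10;

		pkts = 0; /* effect of "clear port stats all" on the base stats */

		/* Pre-patch derivation of tx_unicast_packets wraps below zero: */
		uint64_t unicast = pkts - (bcast + mcast);
		printf("tx_unicast_packets: %" PRIu64 "\n", unicast);
		/* Prints 18446744073709551586, i.e. 2^64 - 30. */
		return 0;
	}

Storing the unicast/packet/byte/missed counters directly in the xstats array,
as this patch does, removes the cross-derivation and with it the underflow.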