From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
	by dpdk.org (Postfix) with ESMTP id ACD3368F5
	for ; Tue, 20 Sep 2016 12:56:12 +0200 (CEST)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
	by orsmga101.jf.intel.com with ESMTP; 20 Sep 2016 03:56:13 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.30,367,1470726000"; d="scan'208";a="11241260"
Received: from yliu-dev.sh.intel.com (HELO yliu-dev) ([10.239.67.162])
	by fmsmga005.fm.intel.com with ESMTP; 20 Sep 2016 03:56:07 -0700
Date: Tue, 20 Sep 2016 18:56:38 +0800
From: Yuanhan Liu
To: Zhiyong Yang
Cc: dev@dpdk.org, thomas.monjalon@6wind.com, pmatilai@redhat.com,
	harry.van.haaren@intel.com
Message-ID: <20160920105638.GT23158@yliu-dev.sh.intel.com>
References: <1473408927-40364-1-git-send-email-zhiyong.yang@intel.com>
	<1474364205-111569-1-git-send-email-zhiyong.yang@intel.com>
	<1474364205-111569-3-git-send-email-zhiyong.yang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1474364205-111569-3-git-send-email-zhiyong.yang@intel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Subject: Re: [dpdk-dev] [PATCH v3 2/2] net/vhost: add pmd xstats
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Tue, 20 Sep 2016 10:56:13 -0000

On Tue, Sep 20, 2016 at 05:36:45PM +0800, Zhiyong Yang wrote:
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 9157bf1..c3f404d 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -78,6 +78,7 @@ struct vhost_stats {
> 	uint64_t missed_pkts;
> 	uint64_t rx_bytes;
> 	uint64_t tx_bytes;
> +	uint64_t xstats[16];
> };
> 
> struct vhost_queue {
> @@ -131,6 +132,252 @@ struct rte_vhost_vring_state {
> 
> static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
> 
> +enum vhost_xstats_pkts {
> +	VHOST_UNDERSIZE_PKT = 0,
> +	VHOST_64_PKT,
> +	VHOST_65_TO_127_PKT,
> +	VHOST_128_TO_255_PKT,
> +	VHOST_256_TO_511_PKT,
> +	VHOST_512_TO_1023_PKT,
> +	VHOST_1024_TO_1522_PKT,
> +	VHOST_1523_TO_MAX_PKT,
> +	VHOST_BROADCAST_PKT,
> +	VHOST_MULTICAST_PKT,
> +	VHOST_ERRORS_PKT,
> +	VHOST_ERRORS_FRAGMENTED,
> +	VHOST_ERRORS_JABBER,
> +	VHOST_UNKNOWN_PROTOCOL,
> +};

The definition should go before "struct vhost_stats". And VHOST_UNKNOWN_PROTOCOL
should be VHOST_XSTATS_MAX. With that, the hardcoded number "16" could be
avoided and replaced by it.

> +
> +#define VHOST_XSTATS_NAME_SIZE 64
> +
> +struct vhost_xstats_name_off {
> +	char name[VHOST_XSTATS_NAME_SIZE];
> +	uint64_t offset;
> +};
> +
> +/* [rt]_qX_ is prepended to the name string here */
> +static const struct vhost_xstats_name_off vhost_rxq_stat_strings[] = {
> +	{"good_packets",
> +	 offsetof(struct vhost_queue, stats.rx_pkts)},
> +	{"total_bytes",
> +	 offsetof(struct vhost_queue, stats.rx_bytes)},
> +	{"missed_pkts",
> +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> +	{"broadcast_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> +	{"multicast_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> +	{"undersize_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
> +	{"size_64_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> +	{"size_65_to_127_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> +	{"size_128_to_255_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> +	{"size_256_to_511_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> +	{"size_512_to_1023_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> +	{"size_1024_to_1522_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> +	{"size_1523_to_max_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> +	{"errors_with_bad_CRC",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> +	{"fragmented_errors",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
> +	{"jabber_errors",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
> +	{"unknown_protos_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},

Hmm, is "unknown_protos_packets" a valid stat? Why should we handle it here?
Besides, it will never be accessed, right? I mean, VHOST_NB_RXQ_XSTATS makes
sure of that.

> +};
> +
> +/* [tx]_qX_ is prepended to the name string here */
> +static const struct vhost_xstats_name_off vhost_txq_stat_strings[] = {
> +	{"good_packets",
> +	 offsetof(struct vhost_queue, stats.tx_pkts)},
> +	{"total_bytes",
> +	 offsetof(struct vhost_queue, stats.tx_bytes)},
> +	{"missed_pkts",
> +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> +	{"broadcast_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> +	{"multicast_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> +	{"size_64_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> +	{"size_65_to_127_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> +	{"size_128_to_255_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> +	{"size_256_to_511_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> +	{"size_512_to_1023_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> +	{"size_1024_to_1522_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> +	{"size_1523_to_max_packets",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> +	{"errors_with_bad_CRC",
> +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> +};
> +
> +#define VHOST_NB_RXQ_XSTATS (sizeof(vhost_rxq_stat_strings) / \
> +			     sizeof(vhost_rxq_stat_strings[0]))
> +
> +#define VHOST_NB_TXQ_XSTATS (sizeof(vhost_txq_stat_strings) / \
> +			     sizeof(vhost_txq_stat_strings[0]))
> +
> +static void
> +vhost_dev_xstats_reset(struct rte_eth_dev *dev)
> +{
> +	struct vhost_queue *vqrx = NULL;
> +	struct vhost_queue *vqtx = NULL;

I would not introduce two vars here; I'd just define one, vq.

> +	unsigned int i = 0;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		vqrx = dev->data->rx_queues[i];
> +		if (!vqrx)
> +			continue;
> +		memset(&vqrx->stats, 0, sizeof(vqrx->stats));
> +	}
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		vqtx = dev->data->tx_queues[i];
> +		if (!vqtx)
> +			continue;
> +		memset(&vqtx->stats, 0, sizeof(vqtx->stats));
> +	}
> +}
> +
> +static int
> +vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
> +			   struct rte_eth_xstat_name *xstats_names,
> +			   unsigned int limit __rte_unused)
> +{
> +	unsigned int i = 0;
> +	unsigned int t = 0;
> +	int count = 0;
> +	int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> +			+ dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> +
> +	if (!xstats_names)
> +		return nstats;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		if (!dev->data->rx_queues[i])
> +			continue;
> +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> +			snprintf(xstats_names[count].name,
> +				 sizeof(xstats_names[count].name),
> +				 "rx_q%u_%s", i,
> +				 vhost_rxq_stat_strings[t].name);
> +			count++;
> +		}
> +	}
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		if (!dev->data->tx_queues[i])
> +			continue;
> +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> +			snprintf(xstats_names[count].name,
> +				 sizeof(xstats_names[count].name),
> +				 "tx_q%u_%s", i,
> +				 vhost_txq_stat_strings[t].name);
> +			count++;
> +		}
> +	}
> +	return count;
> +}
> +
> +static int
> +vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
> +		     unsigned int n)
> +{
> +	unsigned int i;
> +	unsigned int t;
> +	unsigned int count = 0;
> +	unsigned int nxstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> +			+ dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> +
> +	if (n < nxstats)
> +		return nxstats;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		if (!dev->data->rx_queues[i])
> +			continue;
> +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> +			xstats[count].value =
> +				*(uint64_t *)(((char *)dev->data->rx_queues[i])
> +				+ vhost_rxq_stat_strings[t].offset);
> +			count++;
> +		}
> +	}
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		if (!dev->data->tx_queues[i])
> +			continue;
> +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> +			xstats[count].value =
> +				*(uint64_t *)(((char *)dev->data->tx_queues[i])
> +				+ vhost_txq_stat_strings[t].offset);
> +			count++;
> +		}
> +	}
> +	return count;
> +}
> +static void
> +vhost_count_multicast_broadcast(struct vhost_queue *vq,
> +				struct rte_mbuf **bufs,

I would let this function count one mbuf, so that --->

> +				uint16_t begin,
> +				uint16_t end)
> +{
> +	uint64_t i = 0;
> +	struct ether_addr *ea = NULL;
> +	struct vhost_stats *pstats = &vq->stats;
> +
> +	for (i = begin; i < end; i++) {
> +		ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
> +		if (is_multicast_ether_addr(ea)) {
> +			if (is_broadcast_ether_addr(ea))
> +				pstats->xstats[VHOST_BROADCAST_PKT]++;
> +			else
> +				pstats->xstats[VHOST_MULTICAST_PKT]++;
> +		}
> +	}
> +}
> +
> +static void
> +vhost_update_packet_xstats(struct vhost_queue *vq,
> +			   struct rte_mbuf **bufs,
> +			   uint16_t nb_rxtx)
> +{
> +	uint32_t pkt_len = 0;
> +	uint64_t i = 0;
> +	uint64_t index;
> +	struct vhost_stats *pstats = &vq->stats;
> +
> +	for (i = 0; i < nb_rxtx ; i++) {
> +		pkt_len = bufs[i]->pkt_len;
> +		if (pkt_len == 64) {
> +			pstats->xstats[VHOST_64_PKT]++;
> +		} else if (pkt_len > 64 && pkt_len < 1024) {
> +			index = (sizeof(pkt_len) * 8)
> +				- __builtin_clz(pkt_len) - 5;
> +			pstats->xstats[index]++;
> +		} else {
> +			if (pkt_len < 64)
> +				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
> +			else if (pkt_len <= 1522)
> +				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
> +			else if (pkt_len > 1522)
> +				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
> +		}

... you could put it here, and save another for() loop introduced by
vhost_count_multicast_broadcast(). BTW, I would use simple var names, say
"count", rather than "nb_rxtx".

	--yliu
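To make the two suggestions above concrete, here is a self-contained sketch of the merged loop, with a VHOST_XSTATS_MAX terminator sizing the array instead of the hardcoded "16". It is only an illustration: the plain uint64_t array and the raw dst_mac bytes stand in for the driver's struct vhost_stats and rte_mbuf/ether_addr types, and `count_pkt` is a hypothetical helper name, not the patch's function.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Same entries as the patch; the terminator replaces the hardcoded 16,
 * e.g. uint64_t xstats[VHOST_XSTATS_MAX]; */
enum vhost_xstats_pkts {
	VHOST_UNDERSIZE_PKT = 0,
	VHOST_64_PKT,
	VHOST_65_TO_127_PKT,
	VHOST_128_TO_255_PKT,
	VHOST_256_TO_511_PKT,
	VHOST_512_TO_1023_PKT,
	VHOST_1024_TO_1522_PKT,
	VHOST_1523_TO_MAX_PKT,
	VHOST_BROADCAST_PKT,
	VHOST_MULTICAST_PKT,
	VHOST_ERRORS_PKT,
	VHOST_ERRORS_FRAGMENTED,
	VHOST_ERRORS_JABBER,
	VHOST_UNKNOWN_PROTOCOL,
	VHOST_XSTATS_MAX,
};

/* One pass per mbuf: size bucket and multicast/broadcast together,
 * instead of a second loop over the burst. */
static void
count_pkt(uint64_t *xstats, uint32_t pkt_len, const uint8_t dst_mac[6])
{
	if (pkt_len == 64) {
		xstats[VHOST_64_PKT]++;
	} else if (pkt_len > 64 && pkt_len < 1024) {
		/* log2 bucket: 65-127 -> index 2, 128-255 -> 3, ... */
		uint32_t index = (sizeof(pkt_len) * 8)
				- __builtin_clz(pkt_len) - 5;
		xstats[index]++;
	} else if (pkt_len < 64) {
		xstats[VHOST_UNDERSIZE_PKT]++;
	} else if (pkt_len <= 1522) {
		xstats[VHOST_1024_TO_1522_PKT]++;
	} else {
		xstats[VHOST_1523_TO_MAX_PKT]++;
	}

	/* Mirrors is_multicast_ether_addr()/is_broadcast_ether_addr():
	 * group bit is the LSB of the first dst-MAC byte; broadcast is
	 * all-ones. */
	if (dst_mac[0] & 0x01) {
		static const uint8_t bcast[6] = {
			0xff, 0xff, 0xff, 0xff, 0xff, 0xff
		};
		if (memcmp(dst_mac, bcast, 6) == 0)
			xstats[VHOST_BROADCAST_PKT]++;
		else
			xstats[VHOST_MULTICAST_PKT]++;
	}
}
```

This keeps the patch's __builtin_clz bucket math and folds the ethernet-address classification into the same pass, so vhost_update_packet_xstats would call it once per mbuf.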