From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <zhiyong.yang@intel.com>
Received: from mga02.intel.com (mga02.intel.com [134.134.136.20])
 by dpdk.org (Postfix) with ESMTP id 6C5A9532C
 for <dev@dpdk.org>; Wed, 21 Sep 2016 09:23:17 +0200 (CEST)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga101.jf.intel.com with ESMTP; 21 Sep 2016 00:23:10 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.30,372,1470726000"; d="scan'208";a="1054209604"
Received: from fmsmsx103.amr.corp.intel.com ([10.18.124.201])
 by orsmga002.jf.intel.com with ESMTP; 21 Sep 2016 00:23:09 -0700
Received: from fmsmsx157.amr.corp.intel.com (10.18.116.73) by
 FMSMSX103.amr.corp.intel.com (10.18.124.201) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Wed, 21 Sep 2016 00:22:57 -0700
Received: from bgsmsx154.gar.corp.intel.com (10.224.48.47) by
 FMSMSX157.amr.corp.intel.com (10.18.116.73) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Wed, 21 Sep 2016 00:22:57 -0700
Received: from bgsmsx104.gar.corp.intel.com ([169.254.5.244]) by
 BGSMSX154.gar.corp.intel.com ([169.254.7.140]) with mapi id 14.03.0248.002;
 Wed, 21 Sep 2016 12:52:54 +0530
From: "Yang, Zhiyong" <zhiyong.yang@intel.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "thomas.monjalon@6wind.com"
 <thomas.monjalon@6wind.com>, "pmatilai@redhat.com" <pmatilai@redhat.com>,
 "Van Haaren, Harry" <harry.van.haaren@intel.com>
Thread-Topic: [PATCH v3 2/2] net/vhost: add pmd xstats
Thread-Index: AQHSEyLY+h9VJVGMbk2KthRefoncgaCB2NkAgAGvZSA=
Date: Wed, 21 Sep 2016 07:22:53 +0000
Message-ID: <E182254E98A5DA4EB1E657AC7CB9BD2A3EB0B972@BGSMSX104.gar.corp.intel.com>
References: <1473408927-40364-1-git-send-email-zhiyong.yang@intel.com>
 <1474364205-111569-1-git-send-email-zhiyong.yang@intel.com>
 <1474364205-111569-3-git-send-email-zhiyong.yang@intel.com>
 <20160920105638.GT23158@yliu-dev.sh.intel.com>
In-Reply-To: <20160920105638.GT23158@yliu-dev.sh.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-titus-metadata-40: eyJDYXRlZ29yeUxhYmVscyI6IiIsIk1ldGFkYXRhIjp7Im5zIjoiaHR0cDpcL1wvd3d3LnRpdHVzLmNvbVwvbnNcL0ludGVsMyIsImlkIjoiZDZiNzU5NzMtNTAwOS00ZjVmLTlkNWUtZGFmYjEyMjA4OWUzIiwicHJvcHMiOlt7Im4iOiJDVFBDbGFzc2lmaWNhdGlvbiIsInZhbHMiOlt7InZhbHVlIjoiQ1RQX0lDIn1dfV19LCJTdWJqZWN0TGFiZWxzIjpbXSwiVE1DVmVyc2lvbiI6IjE1LjkuNi42IiwiVHJ1c3RlZExhYmVsSGFzaCI6ImF0VEV0cG1xUHU2S0NiMTlcL1pMUmh0QStDZVBIcjJmTmJadzBlK1JVeGNFPSJ9
x-ctpclassification: CTP_IC
x-originating-ip: [10.223.10.10]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v3 2/2] net/vhost: add pmd xstats
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 21 Sep 2016 07:23:18 -0000

Hi Yuanhan,

I will address your comments in the V4 patch.

Thanks
--Zhiyong
> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Tuesday, September 20, 2016 6:57 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com; pmatilai@redhat.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>
> Subject: Re: [PATCH v3 2/2] net/vhost: add pmd xstats
>
> On Tue, Sep 20, 2016 at 05:36:45PM +0800, Zhiyong Yang wrote:
> > diff --git a/drivers/net/vhost/rte_eth_vhost.c
> > b/drivers/net/vhost/rte_eth_vhost.c
> > index 9157bf1..c3f404d 100644
> > --- a/drivers/net/vhost/rte_eth_vhost.c
> > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > @@ -78,6 +78,7 @@ struct vhost_stats {
> >  	uint64_t missed_pkts;
> >  	uint64_t rx_bytes;
> >  	uint64_t tx_bytes;
> > +	uint64_t xstats[16];
> >  };
> >
> >  struct vhost_queue {
> > @@ -131,6 +132,252 @@ struct rte_vhost_vring_state {
> >
> >  static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
> >
> > +enum vhost_xstats_pkts {
> > +	VHOST_UNDERSIZE_PKT = 0,
> > +	VHOST_64_PKT,
> > +	VHOST_65_TO_127_PKT,
> > +	VHOST_128_TO_255_PKT,
> > +	VHOST_256_TO_511_PKT,
> > +	VHOST_512_TO_1023_PKT,
> > +	VHOST_1024_TO_1522_PKT,
> > +	VHOST_1523_TO_MAX_PKT,
> > +	VHOST_BROADCAST_PKT,
> > +	VHOST_MULTICAST_PKT,
> > +	VHOST_ERRORS_PKT,
> > +	VHOST_ERRORS_FRAGMENTED,
> > +	VHOST_ERRORS_JABBER,
> > +	VHOST_UNKNOWN_PROTOCOL,
> > +};
>
> The definition should go before "struct vhost_stats".
>
> And VHOST_UNKNOWN_PROTOCOL should be VHOST_XSTATS_MAX. With
> that, the hardcoded number "16" could be avoided and replaced by it.
>

Ok.
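
Roughly what I plan for V4 (just a sketch): move the enum above struct
vhost_stats, keep VHOST_UNKNOWN_PROTOCOL for the RFC 2863 counter, and
append VHOST_XSTATS_MAX as the last entry so it can size the array:

enum vhost_xstats_pkts {
	VHOST_UNDERSIZE_PKT = 0,
	VHOST_64_PKT,
	VHOST_65_TO_127_PKT,
	VHOST_128_TO_255_PKT,
	VHOST_256_TO_511_PKT,
	VHOST_512_TO_1023_PKT,
	VHOST_1024_TO_1522_PKT,
	VHOST_1523_TO_MAX_PKT,
	VHOST_BROADCAST_PKT,
	VHOST_MULTICAST_PKT,
	VHOST_ERRORS_PKT,
	VHOST_ERRORS_FRAGMENTED,
	VHOST_ERRORS_JABBER,
	VHOST_UNKNOWN_PROTOCOL,
	VHOST_XSTATS_MAX,	/* not a counter, only used as the array size */
};

struct vhost_stats {
	/* ...existing counters unchanged... */
	uint64_t xstats[VHOST_XSTATS_MAX];	/* replaces the hardcoded 16 */
};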

> > +
> > +#define VHOST_XSTATS_NAME_SIZE 64
> > +
> > +struct vhost_xstats_name_off {
> > +	char name[VHOST_XSTATS_NAME_SIZE];
> > +	uint64_t offset;
> > +};
> > +
> > +/* [rx]_qX_ is prepended to the name string here */
> > +static const struct vhost_xstats_name_off vhost_rxq_stat_strings[] = {
> > +	{"good_packets",
> > +	 offsetof(struct vhost_queue, stats.rx_pkts)},
> > +	{"total_bytes",
> > +	 offsetof(struct vhost_queue, stats.rx_bytes)},
> > +	{"missed_pkts",
> > +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> > +	{"broadcast_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> > +	{"multicast_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> > +	{"undersize_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNDERSIZE_PKT])},
> > +	{"size_64_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> > +	{"size_65_to_127_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> > +	{"size_128_to_255_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> > +	{"size_256_to_511_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> > +	{"size_512_to_1023_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> > +	{"size_1024_to_1522_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> > +	{"size_1523_to_max_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> > +	{"errors_with_bad_CRC",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> > +	{"fragmented_errors",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_FRAGMENTED])},
> > +	{"jabber_errors",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_JABBER])},
> > +	{"unknown_protos_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_UNKNOWN_PROTOCOL])},
>
> Hmm, is "unknown_protos_packets" a valid stat? Why should we handle it
> here? Besides, it will never be accessed, right? I mean,
> VHOST_NB_RXQ_XSTATS makes sure of that.
>

Yes, unknown_protos_packets is never updated and always reads zero.
I checked the code: the i40e driver implements a similar counter via a
NIC register. I introduced it so that the vhost xstats look like a
physical NIC's to applications, following the ifInUnknownProtos counter
defined in RFC 2863:

 "For packet-oriented interfaces, the number of packets received via
 the interface which were discarded because of an unknown or
 unsupported protocol. For character-oriented or fixed-length
 interfaces that support protocol multiplexing the number of
 transmission units received via the interface which were discarded
 because of an unknown or unsupported protocol. For any interface
 that does not support protocol multiplexing, this counter will always
 be 0."
> > +
> > +/* [tx]_qX_ is prepended to the name string here */
> > +static const struct vhost_xstats_name_off vhost_txq_stat_strings[] = {
> > +	{"good_packets",
> > +	 offsetof(struct vhost_queue, stats.tx_pkts)},
> > +	{"total_bytes",
> > +	 offsetof(struct vhost_queue, stats.tx_bytes)},
> > +	{"missed_pkts",
> > +	 offsetof(struct vhost_queue, stats.missed_pkts)},
> > +	{"broadcast_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_BROADCAST_PKT])},
> > +	{"multicast_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_MULTICAST_PKT])},
> > +	{"size_64_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_64_PKT])},
> > +	{"size_65_to_127_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_65_TO_127_PKT])},
> > +	{"size_128_to_255_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_128_TO_255_PKT])},
> > +	{"size_256_to_511_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_256_TO_511_PKT])},
> > +	{"size_512_to_1023_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_512_TO_1023_PKT])},
> > +	{"size_1024_to_1522_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1024_TO_1522_PKT])},
> > +	{"size_1523_to_max_packets",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_1523_TO_MAX_PKT])},
> > +	{"errors_with_bad_CRC",
> > +	 offsetof(struct vhost_queue, stats.xstats[VHOST_ERRORS_PKT])},
> > +};
> > +
> > +#define VHOST_NB_RXQ_XSTATS (sizeof(vhost_rxq_stat_strings) / \
> > +			     sizeof(vhost_rxq_stat_strings[0]))
> > +
> > +#define VHOST_NB_TXQ_XSTATS (sizeof(vhost_txq_stat_strings) / \
> > +			     sizeof(vhost_txq_stat_strings[0]))
> > +
> > +static void
> > +vhost_dev_xstats_reset(struct rte_eth_dev *dev)
> > +{
> > +	struct vhost_queue *vqrx = NULL;
> > +	struct vhost_queue *vqtx = NULL;
>
> I would not introduce two vars here: I'd just define one, vq.
>

Ok.
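
Something like this for V4, then (a quick sketch using the single vq):

static void
vhost_dev_xstats_reset(struct rte_eth_dev *dev)
{
	struct vhost_queue *vq = NULL;
	unsigned int i = 0;

	/* clear the per-queue stats, including the xstats array */
	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		vq = dev->data->rx_queues[i];
		if (!vq)
			continue;
		memset(&vq->stats, 0, sizeof(vq->stats));
	}
	for (i = 0; i < dev->data->nb_tx_queues; i++) {
		vq = dev->data->tx_queues[i];
		if (!vq)
			continue;
		memset(&vq->stats, 0, sizeof(vq->stats));
	}
}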

> > +	unsigned int i = 0;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		vqrx = dev->data->rx_queues[i];
> > +		if (!vqrx)
> > +			continue;
> > +		memset(&vqrx->stats, 0, sizeof(vqrx->stats));
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		vqtx = dev->data->tx_queues[i];
> > +		if (!vqtx)
> > +			continue;
> > +		memset(&vqtx->stats, 0, sizeof(vqtx->stats));
> > +	}
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
> > +			   struct rte_eth_xstat_name *xstats_names,
> > +			   unsigned int limit __rte_unused)
> > +{
> > +	unsigned int i = 0;
> > +	unsigned int t = 0;
> > +	int count = 0;
> > +	int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> > +			+ dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (!xstats_names)
> > +		return nstats;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		if (!dev->data->rx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +			snprintf(xstats_names[count].name,
> > +				 sizeof(xstats_names[count].name),
> > +				 "rx_q%u_%s", i,
> > +				 vhost_rxq_stat_strings[t].name);
> > +			count++;
> > +		}
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		if (!dev->data->tx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +			snprintf(xstats_names[count].name,
> > +				 sizeof(xstats_names[count].name),
> > +				 "tx_q%u_%s", i,
> > +				 vhost_txq_stat_strings[t].name);
> > +			count++;
> > +		}
> > +	}
> > +	return count;
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
> > +		     unsigned int n)
> > +{
> > +	unsigned int i;
> > +	unsigned int t;
> > +	unsigned int count = 0;
> > +	unsigned int nxstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> > +			     + dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (n < nxstats)
> > +		return nxstats;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		if (!dev->data->rx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +			xstats[count].value =
> > +				*(uint64_t *)(((char *)dev->data->rx_queues[i])
> > +				+ vhost_rxq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		if (!dev->data->tx_queues[i])
> > +			continue;
> > +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +			xstats[count].value =
> > +				*(uint64_t *)(((char *)dev->data->tx_queues[i])
> > +				+ vhost_txq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +	return count;
> > +}
>
> > +static void
> > +vhost_count_multicast_broadcast(struct vhost_queue *vq,
> > +				struct rte_mbuf **bufs,
>
> I would have this function count a single mbuf, so that --->
>
> > +				uint16_t begin,
> > +				uint16_t end)
> > +{
> > +	uint64_t i = 0;
> > +	struct ether_addr *ea = NULL;
> > +	struct vhost_stats *pstats = &vq->stats;
> > +
> > +	for (i = begin; i < end; i++) {
> > +		ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
> > +		if (is_multicast_ether_addr(ea)) {
> > +			if (is_broadcast_ether_addr(ea))
> > +				pstats->xstats[VHOST_BROADCAST_PKT]++;
> > +			else
> > +				pstats->xstats[VHOST_MULTICAST_PKT]++;
> > +		}
> > +	}
> > +}
> > +
> > +static void
> > +vhost_update_packet_xstats(struct vhost_queue *vq,
> > +			   struct rte_mbuf **bufs,
> > +			   uint16_t nb_rxtx)
> > +{
> > +	uint32_t pkt_len = 0;
> > +	uint64_t i = 0;
> > +	uint64_t index;
> > +	struct vhost_stats *pstats = &vq->stats;
> > +
> > +	for (i = 0; i < nb_rxtx; i++) {
> > +		pkt_len = bufs[i]->pkt_len;
> > +		if (pkt_len == 64) {
> > +			pstats->xstats[VHOST_64_PKT]++;
> > +		} else if (pkt_len > 64 && pkt_len < 1024) {
> > +			index = (sizeof(pkt_len) * 8)
> > +				- __builtin_clz(pkt_len) - 5;
> > +			pstats->xstats[index]++;
> > +		} else {
> > +			if (pkt_len < 64)
> > +				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
> > +			else if (pkt_len <= 1522)
> > +				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
> > +			else if (pkt_len > 1522)
> > +				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
> > +		}
>
> ... you could put it here, and save another for() loop introduced by
> vhost_count_multicast_broadcast().
>
> BTW, I would use a simpler var name, say "count", rather than "nb_rxtx".

Ok.
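
For V4 I'd fold it in roughly like this (a sketch, assuming
vhost_count_multicast_broadcast() is reworked to take a single mbuf as
you suggest, and nb_rxtx is renamed to count):

static void
vhost_count_multicast_broadcast(struct vhost_queue *vq,
				struct rte_mbuf *mbuf)
{
	struct ether_addr *ea = NULL;
	struct vhost_stats *pstats = &vq->stats;

	ea = rte_pktmbuf_mtod(mbuf, struct ether_addr *);
	if (is_multicast_ether_addr(ea)) {
		if (is_broadcast_ether_addr(ea))
			pstats->xstats[VHOST_BROADCAST_PKT]++;
		else
			pstats->xstats[VHOST_MULTICAST_PKT]++;
	}
}

static void
vhost_update_packet_xstats(struct vhost_queue *vq,
			   struct rte_mbuf **bufs,
			   uint16_t count)
{
	uint32_t pkt_len = 0;
	uint64_t i = 0;
	uint64_t index;
	struct vhost_stats *pstats = &vq->stats;

	for (i = 0; i < count; i++) {
		pkt_len = bufs[i]->pkt_len;
		if (pkt_len == 64) {
			pstats->xstats[VHOST_64_PKT]++;
		} else if (pkt_len > 64 && pkt_len < 1024) {
			index = (sizeof(pkt_len) * 8)
				- __builtin_clz(pkt_len) - 5;
			pstats->xstats[index]++;
		} else {
			if (pkt_len < 64)
				pstats->xstats[VHOST_UNDERSIZE_PKT]++;
			else if (pkt_len <= 1522)
				pstats->xstats[VHOST_1024_TO_1522_PKT]++;
			else
				pstats->xstats[VHOST_1523_TO_MAX_PKT]++;
		}
		/* count broadcast/multicast in the same pass, no extra loop */
		vhost_count_multicast_broadcast(vq, bufs[i]);
	}
}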

>
> 	--yliu