From: "Yang, Zhiyong" <zhiyong.yang@intel.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "thomas.monjalon@6wind.com"
 <thomas.monjalon@6wind.com>, "pmatilai@redhat.com" <pmatilai@redhat.com>
Date: Wed, 14 Sep 2016 07:43:36 +0000
Message-ID: <E182254E98A5DA4EB1E657AC7CB9BD2A3EB01AD8@BGSMSX101.gar.corp.intel.com>
References: <1471608966-39077-1-git-send-email-zhiyong.yang@intel.com>
 <1473408927-40364-1-git-send-email-zhiyong.yang@intel.com>
 <20160914062021.GZ23158@yliu-dev.sh.intel.com>
In-Reply-To: <20160914062021.GZ23158@yliu-dev.sh.intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] net/vhost: add pmd xstats

Hi, Yuanhan:
Thanks so much for your detailed comments.

> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Wednesday, September 14, 2016 2:20 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com; pmatilai@redhat.com
> Subject: Re: [PATCH v2] net/vhost: add pmd xstats
>
> On Fri, Sep 09, 2016 at 04:15:27PM +0800, Zhiyong Yang wrote:
> > +struct vhost_xstats {
> > +	uint64_t stat[16];
> > +};
> > +
> >  struct vhost_queue {
> >  	int vid;
> >  	rte_atomic32_t allow_queuing;
> > @@ -85,7 +89,8 @@ struct vhost_queue {
> >  	uint64_t missed_pkts;
> >  	uint64_t rx_bytes;
> >  	uint64_t tx_bytes;
>
> I'd suggest to put those statistic counters to vhost_stats struct, which could
> simplify the xstats_reset code a bit.
>

I considered this point when I defined it, but those statistic counters are used
by the pmd stats, not only by the pmd xstats. I'm not sure whether I can modify
that code. If permitted, I will do it as you said.
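
For example, the grouping could look roughly like this (just a rough sketch;
the struct and field names here are only placeholders, not a final layout):

	/* placeholder sketch: keep every per-queue counter in one struct,
	 * so both stats and xstats can be cleared with a single memset()
	 */
	struct vhost_stats {
		uint64_t pkts;
		uint64_t bytes;
		uint64_t missed_pkts;
		uint64_t xstats[16];
	};

	struct vhost_queue {
		int vid;
		rte_atomic32_t allow_queuing;
		/* ... other fields unchanged ... */
		struct vhost_stats stats;
	};

	/* then vhost_dev_xstats_reset() only needs:
	 *	memset(&vq->stats, 0, sizeof(vq->stats));
	 */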

> And please do it in two patches, one to introduce vhost_stats, another one
> to add xstats.
>

Ok.

> > -};
> > +	struct vhost_xstats xstats;
> > +	};
>
> A format issue here.
>

A naming issue? Such as "struct vhost_xstats vhost_stats;"?

> >
> >  struct pmd_internal {
> >  	char *dev_name;
> > @@ -127,6 +132,274 @@ struct rte_vhost_vring_state {
> >
> >  static struct rte_vhost_vring_state *vring_states[RTE_MAX_ETHPORTS];
> >
> > +enum rte_vhostqueue_rxtx {
>
> Don't use "rte_" prefix for internal function, vars, etc. "rte_" prefix is
> reserved for those that will be exported for public use.
>

Ok, I will modify it.
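For instance (tentative naming only, to be confirmed in v3):

	enum vhostqueue_rxtx {
		VHOSTQUEUE_RX = 0,
		VHOSTQUEUE_TX = 1
	};

and similarly drop the prefix for VHOST_XSTATS_NAME_SIZE and
struct vhost_xstats_name_off below.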

> > +	RTE_VHOSTQUEUE_RX = 0,
> > +	RTE_VHOSTQUEUE_TX = 1
> > +};
> > +
> > +#define RTE_ETH_VHOST_XSTATS_NAME_SIZE 64
>
> ditto.
>

Ok, I will modify it.

> > +
> > +struct rte_vhost_xstats_name_off {
>
> ditto.
>

Ok, I will modify it.

> > +	char name[RTE_ETH_VHOST_XSTATS_NAME_SIZE];
> > +	uint64_t offset;
> > +};
> > +
> > +/* [rt]_qX_ is prepended to the name string here */
> > +static void
> > +vhost_dev_xstats_reset(struct rte_eth_dev *dev)
> > +{
> > +	struct vhost_queue *vqrx = NULL;
> > +	struct vhost_queue *vqtx = NULL;
> > +	unsigned int i = 0;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		if (!dev->data->rx_queues[i])
> > +			continue;
> > +		vqrx = (struct vhost_queue *)dev->data->rx_queues[i];
>
> Unnecessary cast.
>
The definition of rx_queues is "void **rx_queues;". When we use it to access its
data members, maybe an explicit cast is needed; otherwise, we would have to cast
every time we use it.
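
To be concrete, the two forms under discussion would be roughly (just an
illustration, assuming the existing vhost_queue type):

	/* with the explicit cast, as in the patch */
	vqrx = (struct vhost_queue *)dev->data->rx_queues[i];

	/* relying on C's implicit conversion from void * */
	vqrx = dev->data->rx_queues[i];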

> > +		vqrx->rx_pkts = 0;
> > +		vqrx->rx_bytes = 0;
> > +		vqrx->missed_pkts = 0;
> > +		memset(&vqrx->xstats, 0, sizeof(vqrx->xstats));
> > +	}
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		if (!dev->data->tx_queues[i])
> > +			continue;
> > +		vqtx = (struct vhost_queue *)dev->data->tx_queues[i];
> > +		vqtx->tx_pkts = 0;
> > +		vqtx->tx_bytes = 0;
> > +		vqtx->missed_pkts = 0;
> > +		memset(&vqtx->xstats, 0, sizeof(vqtx->xstats));
> > +	}
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
> > +			   struct rte_eth_xstat_name *xstats_names,
> > +			   __rte_unused unsigned int limit)
>
> The typical way is to put __rte_unused after the key word.
>

Ok.

> > +{
> > +	unsigned int i = 0;
> > +	unsigned int t = 0;
> > +	int count = 0;
> > +	int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> > +			+ dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (xstats_names) {
>
> I know you are following virtio pmd style, but you don't have to. I'd suggest
> to return early for (!xstats_names) case, then we could save one indention
> level for following code block.
>

OK.
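
Roughly like this, I suppose (only a sketch of the structure, details elided):

	static int
	vhost_dev_xstats_get_names(struct rte_eth_dev *dev,
				   struct rte_eth_xstat_name *xstats_names,
				   unsigned int limit __rte_unused)
	{
		int nstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
			     + dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
		int count = 0;

		if (!xstats_names)
			return nstats;

		/* ... fill xstats_names[count++].name for every rx/tx queue,
		 * now at one indentation level less ...
		 */
		return count;
	}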

> > +		for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +			struct vhost_queue *rxvq = dev->data->rx_queues[i];
> > +
> > +			if (!rxvq)
> > +				continue;
> > +			for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +				snprintf(xstats_names[count].name,
> > +					 sizeof(xstats_names[count].name),
> > +					 "rx_q%u_%s", i,
> > +					 rte_vhost_rxq_stat_strings[t].name);
> > +				count++;
> > +			}
> > +		}
> > +		for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +			struct vhost_queue *txvq = dev->data->tx_queues[i];
> > +
> > +			if (!txvq)
> > +				continue;
> > +			for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +				snprintf(xstats_names[count].name,
> > +					 sizeof(xstats_names[count].name),
> > +					 "tx_q%u_%s", i,
> > +					 rte_vhost_txq_stat_strings[t].name);
> > +				count++;
> > +			}
> > +		}
> > +		return count;
> > +	}
> > +	return nstats;
> > +}
> > +
> > +static int
> > +vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat
> *xstats,
> > +		     unsigned int n)
> > +{
> > +	unsigned int i;
> > +	unsigned int t;
> > +	unsigned int count = 0;
> > +
> > +	unsigned int nxstats = dev->data->nb_rx_queues * VHOST_NB_RXQ_XSTATS
> > +				+ dev->data->nb_tx_queues * VHOST_NB_TXQ_XSTATS;
> > +
> > +	if (n < nxstats)
> > +		return nxstats;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		struct vhost_queue *rxvq =
> > +			(struct vhost_queue *)dev->data->rx_queues[i];
>
> Unnecessary cast.
>

Ok. I will remove it.

> > +
> > +		if (!rxvq)
> > +			continue;
> > +
> > +		for (t = 0; t < VHOST_NB_RXQ_XSTATS; t++) {
> > +			xstats[count].value = *(uint64_t *)(((char *)rxvq)
> > +				+ rte_vhost_rxq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +
> > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > +		struct vhost_queue *txvq =
> > +			(struct vhost_queue *)dev->data->tx_queues[i];
> > +
> > +		if (!txvq)
> > +			continue;
> > +
> > +		for (t = 0; t < VHOST_NB_TXQ_XSTATS; t++) {
> > +			xstats[count].value = *(uint64_t *)(((char *)txvq)
> > +				+ rte_vhost_txq_stat_strings[t].offset);
> > +			count++;
> > +		}
> > +	}
> > +
> > +	return count;
> > +}
> > +
> > +static void
> > +vhost_update_packet_xstats(struct vhost_queue *vq,
> > +			   struct rte_mbuf **bufs,
> > +			   uint16_t nb_rxtx,
> > +			   uint16_t nb_bufs,
> > +			   enum rte_vhostqueue_rxtx vqueue_rxtx)
> > +{
> > +	uint32_t pkt_len = 0;
> > +	uint64_t i = 0;
> > +	uint64_t index;
> > +	struct ether_addr *ea = NULL;
> > +	struct vhost_xstats *xstats_update = &vq->xstats;
> > +
> > +	for (i = 0; i < nb_rxtx ; i++) {
> > +		pkt_len = bufs[i]->pkt_len;
> > +		if (pkt_len == 64) {
> > +			xstats_update->stat[1]++;
> > +
> Unnecessary blank line.
>
Ok. I will remove it.

> > +		} else if (pkt_len > 64 && pkt_len < 1024) {
> > +			index = (sizeof(pkt_len) * 8)
> > +				- __builtin_clz(pkt_len) - 5;
> > +			xstats_update->stat[index]++;
> > +		} else {
> > +			if (pkt_len < 64)
> > +				xstats_update->stat[0]++;
> > +			else if (pkt_len <= 1522)
> > +				xstats_update->stat[6]++;
> > +			else if (pkt_len > 1522)
> > +				xstats_update->stat[7]++;
> > +		}
> > +
> > +		ea = rte_pktmbuf_mtod(bufs[i], struct ether_addr *);
> > +		if (is_multicast_ether_addr(ea)) {
> > +			if (is_broadcast_ether_addr(ea))
> > +				/* broadcast++; */
> > +				xstats_update->stat[8]++;
> > +			else
> > +				/* multicast++; */
> > +				xstats_update->stat[9]++;
>
> The comment could be avoided if you define a field in vhost_stats struct like
> "broadcast" or "multicast". I don't object the way Harry proposed though, to
> use enum to access the stat array.
>

I agree with you and Harry and will do it like that.
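Something along these lines, I think (the names are only tentative):

	enum vhost_xstats_pkts {
		VHOST_UNDERSIZE_PKT = 0,	/* < 64 bytes  */
		VHOST_64_PKT,			/* == 64 bytes */
		VHOST_65_TO_127_PKT,
		VHOST_128_TO_255_PKT,
		VHOST_256_TO_511_PKT,
		VHOST_512_TO_1023_PKT,
		VHOST_1024_TO_1522_PKT,
		VHOST_1523_TO_MAX_PKT,		/* > 1522 bytes */
		VHOST_BROADCAST_PKT,
		VHOST_MULTICAST_PKT,
		VHOST_XSTATS_MAX
	};

Then e.g. xstats_update->stat[VHOST_BROADCAST_PKT]++ documents itself and the
comments can go away.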

> > +		}
> > +	}
> > +	/* non-multi/broadcast, multi/broadcast, including those
> > +	 * that were discarded or not sent.
>
> Hmm, I don't follow it. You may want to reword it.
>
> > from rfc2863
>
> Which section and which page?
>
The packets received are not considered "discarded", because the packets are
received via memory, not through a physical NIC. The counters are mainly for the
number of transmitted packets. From RFC 2863:
page 35: ifOutUcastPkts (non-multi/broadcast)
page 42: ifHCOutMulticastPkts
page 42: ifHCOutBroadcastPkts

> 	--yliu