From: "Yang, Zhiyong"
To: Yuanhan Liu
Cc: "dev@dpdk.org", "thomas.monjalon@6wind.com", "pmatilai@redhat.com"
Subject: Re: [dpdk-dev] [PATCH v2] net/vhost: add pmd xstats
Date: Mon, 19 Sep 2016 02:48:31 +0000
In-Reply-To: <20160918131648.GG23158@yliu-dev.sh.intel.com>
References: <1471608966-39077-1-git-send-email-zhiyong.yang@intel.com>
 <1473408927-40364-1-git-send-email-zhiyong.yang@intel.com>
 <20160914062021.GZ23158@yliu-dev.sh.intel.com>
 <20160918131648.GG23158@yliu-dev.sh.intel.com>

Hi, Yuanhan:

Thanks a lot for your detailed comments and suggestions. I will send a v3
patch as soon as possible.

--Zhiyong

> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Sunday, September 18, 2016 9:17 PM
> To: Yang, Zhiyong
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com; pmatilai@redhat.com
> Subject: Re: [PATCH v2] net/vhost: add pmd xstats
>
> On Wed, Sep 14, 2016 at 07:43:36AM +0000, Yang, Zhiyong wrote:
> > Hi, Yuanhan:
> > Thanks so much for your detailed comments.
> >
> > > -----Original Message-----
> > > From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> > > Sent: Wednesday, September 14, 2016 2:20 PM
> > > To: Yang, Zhiyong
> > > Cc: dev@dpdk.org; thomas.monjalon@6wind.com; pmatilai@redhat.com
> > > Subject: Re: [PATCH v2] net/vhost: add pmd xstats
> > >
> > > On Fri, Sep 09, 2016 at 04:15:27PM +0800, Zhiyong Yang wrote:
> > > > +struct vhost_xstats {
> > > > +	uint64_t stat[16];
> > > > +};
> > > > +
> > > >  struct vhost_queue {
> > > >  	int vid;
> > > >  	rte_atomic32_t allow_queuing;
> > > > @@ -85,7 +89,8 @@ struct vhost_queue {
> > > >  	uint64_t missed_pkts;
> > > >  	uint64_t rx_bytes;
> > > >  	uint64_t tx_bytes;
> > >
> > > I'd suggest to put those statistic counters to the vhost_stats struct,
> > > which could simplify the xstats_reset code a bit.
> > >
> >
> > I considered this point when I defined it, but those statistic counters
> > are used by the pmd stats, not only by the pmd xstats; I'm not sure
> > whether I can modify that code.
> > If permitted, I will do it as you said.
>
> For sure you can modify the code :) I would just suggest doing it in a
> single patch, as stated before (and below).
>

Ok. I will do that.

> > > > +	char name[RTE_ETH_VHOST_XSTATS_NAME_SIZE];
> > > > +	uint64_t offset;
> > > > +};
> > > > +
> > > > +/* [rt]_qX_ is prepended to the name string here */
> > > > +static void
> > > > +vhost_dev_xstats_reset(struct rte_eth_dev *dev)
> > > > +{
> > > > +	struct vhost_queue *vqrx = NULL;
> > > > +	struct vhost_queue *vqtx = NULL;
> > > > +	unsigned int i = 0;
> > > > +
> > > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > > +		if (!dev->data->rx_queues[i])
> > > > +			continue;
> > > > +		vqrx = (struct vhost_queue *)dev->data->rx_queues[i];
> > >
> > > Unnecessary cast.
> > >
> > The definition of rx_queues is void **rx_queues;
>
> Yes, but rx_queues[i] has type "void *", so the cast is not necessary.
>

I'm not sure whether a void * pointer can access a member of a struct
directly, or whether I need to cast it explicitly when using it.

> > > > +	}
> > > > +	}
> > > > +	/* non-multi/broadcast, multi/broadcast, including those
> > > > +	 * that were discarded or not sent.
> > >
> > > Hmmm, I don't follow it. You may want to reword it.
> > >
> > > > from rfc2863
> > >
> > > Which section and which page?
> > >
> > The packets received are not considered "discarded", because they are
> > received via memory, not by a physical NIC. This is mainly about counting
> > transmitted packets.
>
> It still took me some time to understand you.
>
> > RFC2863 page 35 ifOutUcastPkts (non-multi/broadcast)
>
> So, by your term, unicast == non-multicast/broadcast? If so, I'd suggest
> using the term "unicast" and nothing else: anything else just introduces
> confusion.
>

You are right: non-multicast/broadcast is equal to unicast. I agree with you
and will modify it as you say.

> And in this case, I guess you were trying to say:
>
> /*
>  * According to RFC 2863 section ifHCOutMulticastPkts, ... and ...,
>  * the counters "unicast", "multicast" and "broadcast" are also
>  * increased when packets are not transmitted successfully.
>  */
>

Yes.

> Well, you might still need to reword it.
>
> After taking a closer look at the code, I'd suggest doing the countings
> as follows:
>
> - introduce a helper function to increase the "broadcast" or "multicast"
>   counter, say "count_multicast_broadcast(struct vhost_stats *stats, mbuf)".
>
> - introduce a generic function to update the generic counters: this
>   function just counts those packets that have been successfully
>   transmitted or received.
>
>   It also invokes the above helper function for multicast counting.
>
>   It basically looks like the function vhost_update_packet_xstats,
>   except that it doesn't handle failed packets: those will be handled
>   in the following part.
>
> - since the case "counting multicast and broadcast with failed packets"
>   only happens on the Tx side, we could do those countings only in
>   eth_vhost_tx():
>
>       nb_tx = rte_vhost_enqueue_burst(r->vid,
>                       r->virtqueue_id, bufs, nb_bufs);
>
>       missed = nb_bufs - nb_tx;
>       /* put above comment here */
>       if (missed) {
>               for_each_mbuf(mbuf) {
>                       count_multicast_broadcast(&vq->stats, mbuf);
>               }
>       }
>
> - there is no need to update "stat[10]" (unicast), since it can be
>   calculated from other counters, meaning we could just get the right
>   value on query.
>
>   This could save some cycles.
>

Ok. Your comments are detailed and clear. I will consider removing the
unicast counter and modifying the update functions.

> Feel free to phone me if you have any doubts.
>
> 	--yliu