From: "Xu, Ting"
To: "Yang, Qiming", "dev@dpdk.org"
CC: "Zhang, Qi Z", "Wu, Jingjing", "Xing, Beilei", "Kovacevic, Marko", "Mcnamara, John", "Ye, Xiaolong"
Subject: Re: [dpdk-dev] [PATCH v3 10/12] net/ice: enable stats for DCF
Date: Fri, 19 Jun 2020 02:44:07 +0000
References: <20200616124112.108014-1-ting.xu@intel.com> <20200616124112.108014-11-ting.xu@intel.com>

Hi, Qiming,

> -----Original Message-----
> From: Yang, Qiming
> Sent: Thursday, June 18, 2020 2:32 PM
> To: Xu, Ting; dev@dpdk.org
> Cc: Zhang, Qi Z; Wu, Jingjing; Xing, Beilei; Kovacevic, Marko; Mcnamara, John; Ye, Xiaolong
> Subject: RE: [PATCH v3 10/12] net/ice: enable stats for DCF
>
> > -----Original Message-----
> > From: Xu, Ting
> > Sent: Tuesday, June 16, 2020 20:41
> > To: dev@dpdk.org
> > Cc: Zhang, Qi Z; Yang, Qiming; Wu, Jingjing; Xing, Beilei; Kovacevic, Marko; Mcnamara, John; Ye, Xiaolong
> > Subject: [PATCH v3 10/12] net/ice: enable stats for DCF
> >
> > From: Qi Zhang
> >
> > Add support to get and reset Rx/Tx stats in DCF. Query stats from PF.
> >
> > Signed-off-by: Qi Zhang
> > Signed-off-by: Ting Xu
> > ---
> >  drivers/net/ice/ice_dcf.c        |  27 ++++++++
> >  drivers/net/ice/ice_dcf.h        |   4 ++
> >  drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
> >  3 files changed, 120 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
> > index f18c0f16a..fbeb58ee1 100644
> > --- a/drivers/net/ice/ice_dcf.c
> > +++ b/drivers/net/ice/ice_dcf.c
> > @@ -993,3 +993,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
> >
> >  	return err;
> >  }
> > +
> > +int
> > +ice_dcf_query_stats(struct ice_dcf_hw *hw,
> > +		    struct virtchnl_eth_stats *pstats)
> > +{
> > +	struct virtchnl_queue_select q_stats;
> > +	struct dcf_virtchnl_cmd args;
> > +	int err;
> > +
> > +	memset(&q_stats, 0, sizeof(q_stats));
> > +	q_stats.vsi_id = hw->vsi_res->vsi_id;
> > +
> > +	args.v_op = VIRTCHNL_OP_GET_STATS;
> > +	args.req_msg = (uint8_t *)&q_stats;
> > +	args.req_msglen = sizeof(q_stats);
> > +	args.rsp_msglen = sizeof(*pstats);
> > +	args.rsp_msgbuf = (uint8_t *)pstats;
> > +	args.rsp_buflen = sizeof(*pstats);
> > +
> > +	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
>
> Why not use ice_dcf_send_cmd_req_no_irq()? All the other virtual channel
> interfaces in DCF are called through ice_dcf_send_cmd_req_no_irq(), like
> ice_dcf_get_vf_vsi_map.
>

ice_dcf_send_cmd_req_no_irq() is always used together with
ice_dcf_recv_cmd_rsp_no_irq(); combined, they do the same job as
ice_dcf_execute_virtchnl_cmd(). The difference is that
ice_dcf_send_cmd_req_no_irq() is meant for initialization, before the
interrupt is enabled, while ice_dcf_execute_virtchnl_cmd() should be used
after the interrupt is enabled, for safety.

Given that, I think it is better to use ice_dcf_execute_virtchnl_cmd() here,
since these functions will be called after initialization.
Best Regards,
Xu, Ting

> > +	if (err) {
> > +		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
> > +		return err;
> > +	}
> > +
> > +	return 0;
> > +}
> > diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
> > index 68e1661c0..e82bc7748 100644
> > --- a/drivers/net/ice/ice_dcf.h
> > +++ b/drivers/net/ice/ice_dcf.h
> > @@ -58,6 +58,7 @@ struct ice_dcf_hw {
> >  	uint16_t msix_base;
> >  	uint16_t nb_msix;
> >  	uint16_t rxq_map[16];
> > +	struct virtchnl_eth_stats eth_stats_offset;
> >  };
> >
> >  int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
> > @@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
> >  int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
> >  int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
> >  int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
> > +int ice_dcf_query_stats(struct ice_dcf_hw *hw,
> > +			struct virtchnl_eth_stats *pstats);
> > +
> >  #endif /* _ICE_DCF_H_ */
> > diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> > index 239426b09..1a675064a 100644
> > --- a/drivers/net/ice/ice_dcf_ethdev.c
> > +++ b/drivers/net/ice/ice_dcf_ethdev.c
> > @@ -695,19 +695,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
> >  	return 0;
> >  }
> >
> > -static int
> > -ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev,
> > -		  __rte_unused struct rte_eth_stats *igb_stats)
> > -{
> > -	return 0;
> > -}
> > -
> > -static int
> > -ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev)
> > -{
> > -	return 0;
> > -}
> > -
> >  static int
> >  ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
> >  {
> > @@ -760,6 +747,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev,
> >  	return ret;
> >  }
> >
> > +#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
> > +#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
> > +#define ICE_DCF_48_BIT_MASK  RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
> > +
> > +static void
> > +ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat)
> > +{
> > +	if (*stat >= *offset)
> > +		*stat = *stat - *offset;
> > +	else
> > +		*stat = (uint64_t)((*stat +
> > +			((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset);
> > +
> > +	*stat &= ICE_DCF_48_BIT_MASK;
> > +}
> > +
> > +static void
> > +ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat)
> > +{
> > +	if (*stat >= *offset)
> > +		*stat = (uint64_t)(*stat - *offset);
> > +	else
> > +		*stat = (uint64_t)((*stat +
> > +			((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset);
> > +}
> > +
> > +static void
> > +ice_dcf_update_stats(struct virtchnl_eth_stats *oes,
> > +		     struct virtchnl_eth_stats *nes)
> > +{
> > +	ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes);
> > +	ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast);
> > +	ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast);
> > +	ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast);
> > +	ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards);
> > +	ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes);
> > +	ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast);
> > +	ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast);
> > +	ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast);
> > +	ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors);
> > +	ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
> > +}
> > +
> > +static int
> > +ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> > +{
> > +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> > +	struct ice_dcf_hw *hw = &ad->real_hw;
> > +	struct virtchnl_eth_stats pstats;
> > +	int ret;
> > +
> > +	ret = ice_dcf_query_stats(hw, &pstats);
> > +	if (ret == 0) {
> > +		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
> > +		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
> > +				pstats.rx_broadcast - pstats.rx_discards;
> > +		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
> > +				pstats.tx_unicast;
> > +		stats->imissed = pstats.rx_discards;
> > +		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
> > +		stats->ibytes = pstats.rx_bytes;
> > +		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
> > +		stats->obytes = pstats.tx_bytes;
> > +	} else {
> > +		PMD_DRV_LOG(ERR, "Get statistics failed");
> > +	}
> > +	return ret;
> > +}
> > +
> > +static int
> > +ice_dcf_stats_reset(struct rte_eth_dev *dev)
> > +{
> > +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> > +	struct ice_dcf_hw *hw = &ad->real_hw;
> > +	struct virtchnl_eth_stats pstats;
> > +	int ret;
> > +
> > +	/* read stat values to clear hardware registers */
> > +	ret = ice_dcf_query_stats(hw, &pstats);
> > +	if (ret != 0)
> > +		return ret;
> > +
> > +	/* set stats offset base on current values */
> > +	hw->eth_stats_offset = pstats;
> > +
> > +	return 0;
> > +}
> > +
> >  static void
> >  ice_dcf_dev_close(struct rte_eth_dev *dev)
> >  {
> > --
> > 2.17.1
>