From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Yang, Qiming"
To: "Xu, Ting"; dev@dpdk.org
Cc: "Zhang, Qi Z"; "Mcnamara, John"; "Kovacevic, Marko"; "Ye, Xiaolong"
Date: Wed, 10 Jun 2020 05:03:30 +0000
In-Reply-To: <0ee801adc7374b2e940d65bd98ca5cca@intel.com>
References: <20200605201737.33766-1-ting.xu@intel.com>
 <20200605201737.33766-10-ting.xu@intel.com>
 <0ee801adc7374b2e940d65bd98ca5cca@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop
 for DCF
List-Id: DPDK patches and discussions
Precedence: list
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

> -----Original Message-----
> From: Xu, Ting
> Sent: Tuesday, June 9, 2020 15:35
> To: Yang, Qiming; dev@dpdk.org
> Cc: Zhang, Qi Z; Mcnamara, John; Kovacevic, Marko; Ye, Xiaolong
> Subject: RE: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
>
> Hi, Qiming,
>
> > -----Original Message-----
> > From: Yang, Qiming
> > Sent: Monday, June 8, 2020 3:36 PM
> > To: Xu, Ting; dev@dpdk.org
> > Cc: Zhang, Qi Z; Mcnamara, John; Kovacevic, Marko
> > Subject: RE: [PATCH v1 09/12] net/ice: add queue start and stop for
> > DCF
> >
> > > -----Original Message-----
> > > From: Xu, Ting
> > > Sent: Saturday, June 6, 2020 04:18
> > > To: dev@dpdk.org
> > > Cc: Zhang, Qi Z; Yang, Qiming; Mcnamara, John; Kovacevic, Marko
> > > Subject: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> > >
> > > From: Qi Zhang
> > >
> > > Add queue start and stop in DCF. Support queue enable and disable
> > > through virtual channel. Add support for Rx queue mbuf allocation
> > > and queue reset.
> > >
> > > Signed-off-by: Qi Zhang
> > > ---
> > >  drivers/net/ice/ice_dcf.c        |  57 ++++++
> > >  drivers/net/ice/ice_dcf.h        |   3 +-
> > >  drivers/net/ice/ice_dcf_ethdev.c | 309 +++++++++++++++++++++++++++++++
> > >  3 files changed, 368 insertions(+), 1 deletion(-)
> > >
> >
> > Snip...
> > > +}
> > > diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
> > > index 9470d1df7..68e1661c0 100644
> > > --- a/drivers/net/ice/ice_dcf.h
> > > +++ b/drivers/net/ice/ice_dcf.h
> > > @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
> > >  int ice_dcf_init_rss(struct ice_dcf_hw *hw);
> > >  int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
> > >  int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
> > > -
> > > +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
> > > +int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
> > >  #endif /* _ICE_DCF_H_ */
> > > diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
> > > index 9605fb8ed..59113fc4b 100644
> > > --- a/drivers/net/ice/ice_dcf_ethdev.c
> > > +++ b/drivers/net/ice/ice_dcf_ethdev.c
> > > @@ -226,6 +226,259 @@ static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
> > >  	return 0;
> > >  }
> > >
> > > +static int
> > > +ice_dcf_start_queues(struct rte_eth_dev *dev)
> > > +{
> > > +	struct ice_rx_queue *rxq;
> > > +	struct ice_tx_queue *txq;
> > > +	int i;
> > > +
> > > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > > +		txq = dev->data->tx_queues[i];
> > > +		if (txq->tx_deferred_start)
> > > +			continue;
> > > +		if (ice_dcf_tx_queue_start(dev, i) != 0) {
> > > +			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
> > > +			return -1;
> >
> > If a queue fails to start, the queues already started should be stopped.
> >
> This operation can only be seen in the ice and i40e PF drivers. In iavf, and
> even in the earlier i40evf, the already started queues are not stopped on
> failure. I am not sure whether this operation is suitable for DCF. Or should
> we not follow the current iavf, since iavf actually needs this modification
> to stop the started queues as well?
>
I think that's the correct behavior. We'd better fix the gap if iavf and
i40evf do not act like that.
> > > +		}
> > > +	}
> > > +
> > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > +		rxq = dev->data->rx_queues[i];
> > > +		if (rxq->rx_deferred_start)
> > > +			continue;
> > > +		if (ice_dcf_rx_queue_start(dev, i) != 0) {
> > > +			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
> > > +			return -1;
> > > +		}
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static int
> > >  ice_dcf_dev_start(struct rte_eth_dev *dev)
> > >  {
> > > @@ -266,20 +519,72 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
> > >  		return ret;
> > >  	}
> > >
> > > +	if (dev->data->dev_conf.intr_conf.rxq != 0) {
> > > +		rte_intr_disable(intr_handle);
> > > +		rte_intr_enable(intr_handle);
> > > +	}
> > > +
> > > +	ret = ice_dcf_start_queues(dev);
> > > +	if (ret) {
> > > +		PMD_DRV_LOG(ERR, "Failed to enable queues");
> > > +		return ret;
> > > +	}
> > > +
> > >  	dev->data->dev_link.link_status = ETH_LINK_UP;
> > >
> > >  	return 0;
> > >  }
> > >
> > > +static void
> > > +ice_dcf_stop_queues(struct rte_eth_dev *dev)
> > > +{
> > > +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> > > +	struct ice_dcf_hw *hw = &ad->real_hw;
> > > +	struct ice_rx_queue *rxq;
> > > +	struct ice_tx_queue *txq;
> > > +	int ret, i;
> > > +
> > > +	/* Stop all queues */
> > > +	ret = ice_dcf_disable_queues(hw);
> > > +	if (ret)
> > > +		PMD_DRV_LOG(WARNING, "Fail to stop queues");
> > > +
> > > +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> > > +		txq = dev->data->tx_queues[i];
> > > +		if (!txq)
> > > +			continue;
> > > +		txq->tx_rel_mbufs(txq);
> > > +		reset_tx_queue(txq);
> > > +		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
> > > +	}
> > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > +		rxq = dev->data->rx_queues[i];
> > > +		if (!rxq)
> > > +			continue;
> > > +		rxq->rx_rel_mbufs(rxq);
> > > +		reset_rx_queue(rxq);
> > > +		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
> > > +	}
> > > +}
> > > +
> > >  static void
> > >  ice_dcf_dev_stop(struct rte_eth_dev *dev)
> > >  {
> > >  	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
> > > +	struct rte_intr_handle *intr_handle = dev->intr_handle;
> > >  	struct ice_adapter *ad = &dcf_ad->parent;
> > >
> > >  	if (ad->pf.adapter_stopped == 1)
> > >  		return;
> > >
> > > +	ice_dcf_stop_queues(dev);
> > > +
> > > +	rte_intr_efd_disable(intr_handle);
> > > +	if (intr_handle->intr_vec) {
> > > +		rte_free(intr_handle->intr_vec);
> > > +		intr_handle->intr_vec = NULL;
> > > +	}
> > > +
> > >  	dev->data->dev_link.link_status = ETH_LINK_DOWN;
> > >  	ad->pf.adapter_stopped = 1;
> > >  }
> > > @@ -476,6 +781,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
> > >  	.tx_queue_setup = ice_tx_queue_setup,
> > >  	.rx_queue_release = ice_rx_queue_release,
> > >  	.tx_queue_release = ice_tx_queue_release,
> > > +	.rx_queue_start = ice_dcf_rx_queue_start,
> > > +	.tx_queue_start = ice_dcf_tx_queue_start,
> > > +	.rx_queue_stop = ice_dcf_rx_queue_stop,
> > > +	.tx_queue_stop = ice_dcf_tx_queue_stop,
> > >  	.link_update = ice_dcf_link_update,
> > >  	.stats_get = ice_dcf_stats_get,
> > >  	.stats_reset = ice_dcf_stats_reset,
> > > --
> > > 2.17.1