From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Yang, Qiming"
To: "Xu, Ting", "dev@dpdk.org"
CC: "Zhang, Qi Z", "Wu, Jingjing", "Xing, Beilei", "Kovacevic, Marko", "Mcnamara, John", "Ye, Xiaolong"
Thread-Topic: [PATCH v3 09/12] net/ice: add queue start and stop for DCF
Date: Thu, 18 Jun 2020 06:39:54 +0000
References: <20200616124112.108014-1-ting.xu@intel.com> <20200616124112.108014-10-ting.xu@intel.com>
In-Reply-To: <20200616124112.108014-10-ting.xu@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v3 09/12] net/ice: add queue start and stop for DCF
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

> -----Original Message-----
> From: Xu, Ting
> Sent: Tuesday, June 16, 2020 20:41
> To: dev@dpdk.org
> Cc: Zhang, Qi Z
> Yang, Qiming; Wu, Jingjing; Xing, Beilei; Kovacevic, Marko;
> Mcnamara, John; Ye, Xiaolong
> Subject: [PATCH v3 09/12] net/ice: add queue start and stop for DCF
>
> From: Qi Zhang
>
> Add queue start and stop in DCF. Support queue enable and disable through
> virtual channel. Add support for Rx queue mbufs allocation and queue reset.
>
> Signed-off-by: Qi Zhang
> Signed-off-by: Ting Xu
> ---
>  drivers/net/ice/ice_dcf.c        |  57 ++++++
>  drivers/net/ice/ice_dcf.h        |   3 +-
>  drivers/net/ice/ice_dcf_ethdev.c | 320 +++++++++++++++++++++++++++++++
>  3 files changed, 379 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
> index 8869e0d1c..f18c0f16a 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -936,3 +936,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
>  	rte_free(map_info);
>  	return err;
>  }
> +

[snip]

> +
> +static int
> +ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct iavf_hw *hw = &ad->real_hw.avf;
> +	struct ice_rx_queue *rxq;
> +	int err = 0;
> +
> +	if (rx_queue_id >= dev->data->nb_rx_queues)
> +		return -EINVAL;
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +
> +	err = alloc_rxq_mbufs(rxq);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
> +		return err;
> +	}
> +
> +	rte_wmb();
> +
> +	/* Init the RX tail register. */
> +	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
> +	IAVF_WRITE_FLUSH(hw);
> +
> +	/* Ready to switch the queue on */
> +	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
> +	if (err)
> +		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
> +			    rx_queue_id);
> +	else
> +		dev->data->rx_queue_state[rx_queue_id] =
> +			RTE_ETH_QUEUE_STATE_STARTED;
> +
> +	return err;

The 'else' branch is not needed in this function.
Following the clean-code rule, this part should be:

	if (err) {
		PMD_DRV_LOG(...);
		return err;
	}
	...
	return 0;

> +}
> +
> +static inline void
> +reset_rx_queue(struct ice_rx_queue *rxq)
> +{
> +	uint16_t len;
> +	uint32_t i;
> +
> +	if (!rxq)
> +		return;
> +
> +	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
> +
> +	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
> +		((volatile char *)rxq->rx_ring)[i] = 0;
> +
> +	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
> +
> +	for (i = 0; i < ICE_RX_MAX_BURST; i++)
> +		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
> +
> +	/* for rx bulk */
> +	rxq->rx_nb_avail = 0;
> +	rxq->rx_next_avail = 0;
> +	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
> +
> +	rxq->rx_tail = 0;
> +	rxq->nb_rx_hold = 0;
> +	rxq->pkt_first_seg = NULL;
> +	rxq->pkt_last_seg = NULL;
> +}
> +
> +static inline void
> +reset_tx_queue(struct ice_tx_queue *txq)
> +{
> +	struct ice_tx_entry *txe;
> +	uint32_t i, size;
> +	uint16_t prev;
> +
> +	if (!txq) {
> +		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
> +		return;
> +	}
> +
> +	txe = txq->sw_ring;
> +	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
> +	for (i = 0; i < size; i++)
> +		((volatile char *)txq->tx_ring)[i] = 0;
> +
> +	prev = (uint16_t)(txq->nb_tx_desc - 1);
> +	for (i = 0; i < txq->nb_tx_desc; i++) {
> +		txq->tx_ring[i].cmd_type_offset_bsz =
> +			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
> +		txe[i].mbuf = NULL;
> +		txe[i].last_id = i;
> +		txe[prev].next_id = i;
> +		prev = i;
> +	}
> +
> +	txq->tx_tail = 0;
> +	txq->nb_tx_used = 0;
> +
> +	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
> +	txq->nb_tx_free = txq->nb_tx_desc - 1;
> +
> +	txq->tx_next_dd = txq->tx_rs_thresh - 1;
> +	txq->tx_next_rs = txq->tx_rs_thresh - 1;
> +}
> +
> +static int
> +ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct ice_dcf_hw *hw =
> +		&ad->real_hw;
> +	struct ice_rx_queue *rxq;
> +	int err;
> +
> +	if (rx_queue_id >= dev->data->nb_rx_queues)
> +		return -EINVAL;
> +
> +	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
> +			    rx_queue_id);
> +		return err;
> +	}
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +	rxq->rx_rel_mbufs(rxq);
> +	reset_rx_queue(rxq);
> +	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> +
> +	return 0;
> +}
> +
> +static int
> +ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> +{
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct iavf_hw *hw = &ad->real_hw.avf;
> +	struct ice_tx_queue *txq;
> +	int err = 0;
> +
> +	if (tx_queue_id >= dev->data->nb_tx_queues)
> +		return -EINVAL;
> +
> +	txq = dev->data->tx_queues[tx_queue_id];
> +
> +	/* Init the TX tail register. */
> +	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
> +	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
> +	IAVF_WRITE_FLUSH(hw);
> +
> +	/* Ready to switch the queue on */
> +	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
> +
> +	if (err)
> +		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
> +			    tx_queue_id);
> +	else
> +		dev->data->tx_queue_state[tx_queue_id] =
> +			RTE_ETH_QUEUE_STATE_STARTED;
> +
> +	return err;
> +}
> +
> +static int
> +ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> +{
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct ice_dcf_hw *hw = &ad->real_hw;
> +	struct ice_tx_queue *txq;
> +	int err;
> +
> +	if (tx_queue_id >= dev->data->nb_tx_queues)
> +		return -EINVAL;
> +
> +	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
> +			    tx_queue_id);
> +		return err;
> +	}
> +
> +	txq = dev->data->tx_queues[tx_queue_id];
> +	txq->tx_rel_mbufs(txq);
> +	reset_tx_queue(txq);
> +	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> +
> +	return 0;
> +}
> +
> +static int
> +ice_dcf_start_queues(struct rte_eth_dev *dev)
> +{
> +	struct ice_rx_queue *rxq;
> +	struct ice_tx_queue *txq;
> +	int nb_rxq = 0;
> +	int nb_txq, i;
> +
> +	for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) {
> +		txq = dev->data->tx_queues[nb_txq];
> +		if (txq->tx_deferred_start)
> +			continue;
> +		if (ice_dcf_tx_queue_start(dev, nb_txq) != 0) {
> +			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_txq);
> +			goto tx_err;
> +		}
> +	}
> +
> +	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
> +		rxq = dev->data->rx_queues[nb_rxq];
> +		if (rxq->rx_deferred_start)
> +			continue;
> +		if (ice_dcf_rx_queue_start(dev, nb_rxq) != 0) {
> +			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_rxq);
> +			goto rx_err;
> +		}
> +	}
> +
> +	return 0;
> +
> +	/* stop the started queues if failed to start all queues */
> +rx_err:
> +	for (i = 0; i < nb_rxq; i++)
> +		ice_dcf_rx_queue_stop(dev, i);
> +tx_err:
> +	for (i = 0; i < nb_txq; i++)
> +		ice_dcf_tx_queue_stop(dev, i);
> +
> +	return -1;
> +}
> +
>  static int
>  ice_dcf_dev_start(struct rte_eth_dev *dev)
> @@ -267,20 +531,72 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
>  		return ret;
>  	}
>
> +	if (dev->data->dev_conf.intr_conf.rxq != 0) {
> +		rte_intr_disable(intr_handle);
> +		rte_intr_enable(intr_handle);
> +	}
> +
> +	ret = ice_dcf_start_queues(dev);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Failed to enable queues");
> +		return ret;
> +	}
> +
>  	dev->data->dev_link.link_status = ETH_LINK_UP;
>
>  	return 0;
>  }
>
> +static void
> +ice_dcf_stop_queues(struct rte_eth_dev *dev)
> +{
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct ice_dcf_hw *hw = &ad->real_hw;
> +	struct ice_rx_queue *rxq;
> +	struct ice_tx_queue *txq;
> +	int ret, i;
> +
> +	/* Stop all queues */
> +	ret = ice_dcf_disable_queues(hw);
> +	if (ret)
> +		PMD_DRV_LOG(WARNING, "Fail to stop
> +			    queues");
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +		if (!txq)
> +			continue;
> +		txq->tx_rel_mbufs(txq);
> +		reset_tx_queue(txq);
> +		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
> +	}
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		if (!rxq)
> +			continue;
> +		rxq->rx_rel_mbufs(rxq);
> +		reset_rx_queue(rxq);
> +		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
> +	}
> +}
> +
>  static void
>  ice_dcf_dev_stop(struct rte_eth_dev *dev)
>  {
>  	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
> +	struct rte_intr_handle *intr_handle = dev->intr_handle;
>  	struct ice_adapter *ad = &dcf_ad->parent;
>
>  	if (ad->pf.adapter_stopped == 1)
>  		return;
>
> +	ice_dcf_stop_queues(dev);
> +
> +	rte_intr_efd_disable(intr_handle);
> +	if (intr_handle->intr_vec) {
> +		rte_free(intr_handle->intr_vec);
> +		intr_handle->intr_vec = NULL;
> +	}
> +
>  	dev->data->dev_link.link_status = ETH_LINK_DOWN;
>  	ad->pf.adapter_stopped = 1;
>  }
> @@ -477,6 +793,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
>  	.tx_queue_setup          = ice_tx_queue_setup,
>  	.rx_queue_release        = ice_rx_queue_release,
>  	.tx_queue_release        = ice_tx_queue_release,
> +	.rx_queue_start          = ice_dcf_rx_queue_start,
> +	.tx_queue_start          = ice_dcf_tx_queue_start,
> +	.rx_queue_stop           = ice_dcf_rx_queue_stop,
> +	.tx_queue_stop           = ice_dcf_tx_queue_stop,
>  	.link_update             = ice_dcf_link_update,
>  	.stats_get               = ice_dcf_stats_get,
>  	.stats_reset             = ice_dcf_stats_reset,
> --
> 2.17.1
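For reference, the early-return shape suggested in the review comment on ice_dcf_rx_queue_start can be sketched as below. This is a minimal standalone sketch, not the actual DPDK driver code: `switch_queue_stub`, `queue_state`, and the simplified `PMD_DRV_LOG` macro are illustrative stand-ins for `ice_dcf_switch_queue`, `dev->data->rx_queue_state`, and the real logging macro.

```c
#include <stdio.h>

/* Illustrative stand-ins for the driver environment; not real DPDK code. */
#define ERR 0
#define PMD_DRV_LOG(level, fmt, ...) \
	((void)(level), fprintf(stderr, fmt "\n", ##__VA_ARGS__))
#define QUEUE_STATE_STARTED 1

static int queue_state[4];

/* Stub standing in for ice_dcf_switch_queue(); returns 0 on success. */
static int switch_queue_stub(int queue_id, int on)
{
	(void)on;
	return queue_id < 4 ? 0 : -1;
}

/* Early-return style suggested in the review: handle the error path
 * first and return immediately, so the success path ends in a plain
 * "return 0" with no else branch. */
static int queue_start_early_return(int queue_id)
{
	int err = switch_queue_stub(queue_id, 1);

	if (err) {
		PMD_DRV_LOG(ERR, "Failed to switch RX queue %d on", queue_id);
		return err;
	}

	queue_state[queue_id] = QUEUE_STATE_STARTED;
	return 0;
}
```

The same restructuring would apply to ice_dcf_tx_queue_start, which uses the identical if/else pattern.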