From mboxrd@z Thu Jan  1 00:00:00 1970
From: Xiao Zhang <xiao.zhang@intel.com>
To: dev@dpdk.org
Cc: wenzhuo.lu@intel.com, wei.zhao1@intel.com, xiaolong.ye@intel.com,
	Xiao Zhang <xiao.zhang@intel.com>, stable@dpdk.org
Date: Mon, 22 Jul 2019 23:11:52 +0800
Message-Id: <1563808312-64145-1-git-send-email-xiao.zhang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1563797960-58560-1-git-send-email-xiao.zhang@intel.com>
References: <1563797960-58560-1-git-send-email-xiao.zhang@intel.com>
Subject: [dpdk-dev] [v8] net/e1000: fix i219 hang on reset/close

A unit hang may occur if multiple descriptors are available in the
rings during reset or close. This state can be detected by checking
bit 8 of the PCI configuration status register. If the bit is set and
there are pending descriptors in one of the rings, the descriptors
must be flushed before reset or close.

Fixes: 80580344 ("e1000: support EM devices (also known as e1000/e1000e)")
Cc: stable@dpdk.org

Signed-off-by: Xiao Zhang <xiao.zhang@intel.com>
---
v8 Modify code to follow the DPDK coding style.
v7 Add Fixes line.
v6 Apply the fix to the em driver instead of the igb driver and update
   the register address according to the C-Spec.
v5 Change the subject.
v4 Correct the tail descriptor of the tx ring.
v3 Add loop to handle all tx and rx queues.
v2 Use the configuration register instead of NVM7 to get the hang state.
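
Reviewer note, not part of the patch: the detection described above
amounts to a single PCI config-space read. A minimal sketch, assuming
only rte_pci_read_config() from rte_bus_pci.h; the helper name is
hypothetical, and the 0x06 offset and 0x100 mask mirror the
PCI_CFG_STATUS_REG and FLUSH_DESC_REQUIRED constants introduced below:

#include <stdint.h>
#include <rte_bus_pci.h>

/* Sketch only: returns 1 when the i219 "flush required" condition
 * (bit 8 of the PCI status word at config offset 0x06) is latched. */
static int
i219_flush_required(struct rte_pci_device *pci_dev)
{
	uint16_t status = 0;

	/* rte_pci_read_config() returns the number of bytes read,
	 * or a negative value on error. */
	if (rte_pci_read_config(pci_dev, &status, sizeof(status), 0x06) < 0)
		return 0; /* config read failed; assume no flush needed */
	return !!(status & 0x100);
}
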
---
 drivers/net/e1000/e1000_ethdev.h |   4 ++
 drivers/net/e1000/em_ethdev.c    |   5 ++
 drivers/net/e1000/em_rxtx.c      | 111 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 120 insertions(+)

diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 67acb73..01ff943 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -35,6 +35,9 @@
 #define IGB_MAX_RX_QUEUE_NUM           8
 #define IGB_MAX_RX_QUEUE_NUM_82576     16
 
+#define E1000_I219_MAX_RX_QUEUE_NUM    2
+#define E1000_I219_MAX_TX_QUEUE_NUM    2
+
 #define E1000_SYN_FILTER_ENABLE        0x00000001 /* syn filter enable field */
 #define E1000_SYN_FILTER_QUEUE         0x0000000E /* syn filter queue field */
 #define E1000_SYN_FILTER_QUEUE_SHIFT   1          /* syn filter queue field */
@@ -522,5 +525,6 @@ int igb_action_rss_same(const struct rte_flow_action_rss *comp,
 int igb_config_rss_filter(struct rte_eth_dev *dev,
 		struct igb_rte_flow_rss_conf *conf, bool add);
 
+void em_flush_desc_rings(struct rte_eth_dev *dev);
 
 #endif /* _E1000_ETHDEV_H_ */
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index dc88661..62d3a95 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -738,6 +738,11 @@ eth_em_stop(struct rte_eth_dev *dev)
 	em_lsc_intr_disable(hw);
 
 	e1000_reset_hw(hw);
+
+	/* Flush desc rings for i219 */
+	if (hw->mac.type >= e1000_pch_spt)
+		em_flush_desc_rings(dev);
+
 	if (hw->mac.type >= e1000_82544)
 		E1000_WRITE_REG(hw, E1000_WUC, 0);
 
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index 708f832..55d8a67 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -18,6 +18,7 @@
 #include <rte_log.h>
 #include <rte_debug.h>
 #include <rte_pci.h>
+#include <rte_bus_pci.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_memzone.h>
@@ -59,6 +60,11 @@
 #define E1000_TX_OFFLOAD_NOTSUP_MASK \
 		(PKT_TX_OFFLOAD_MASK ^ E1000_TX_OFFLOAD_MASK)
 
+/* PCI offset for querying configuration status register */
+#define PCI_CFG_STATUS_REG 0x06
+#define FLUSH_DESC_REQUIRED 0x100
+
+
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
 */
@@ -2000,3 +2006,108 @@ em_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 	qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
 	qinfo->conf.offloads = txq->offloads;
 }
+
+static void
+e1000_flush_tx_ring(struct rte_eth_dev *dev)
+{
+	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	volatile struct e1000_data_desc *tx_desc;
+	volatile uint32_t *tdt_reg_addr;
+	uint32_t tdt, tctl, txd_lower = E1000_TXD_CMD_IFCS;
+	uint16_t size = 512;
+	struct em_tx_queue *txq;
+	int i;
+
+	if (dev->data->tx_queues == NULL)
+		return;
+	tctl = E1000_READ_REG(hw, E1000_TCTL);
+	E1000_WRITE_REG(hw, E1000_TCTL, tctl | E1000_TCTL_EN);
+	for (i = 0; i < dev->data->nb_tx_queues &&
+		i < E1000_I219_MAX_TX_QUEUE_NUM; i++) {
+		txq = dev->data->tx_queues[i];
+		tdt = E1000_READ_REG(hw, E1000_TDT(i));
+		if (tdt != txq->tx_tail)
+			return;
+		tx_desc = &txq->tx_ring[txq->tx_tail];
+		tx_desc->buffer_addr = rte_cpu_to_le_64(txq->tx_ring_phys_addr);
+		tx_desc->lower.data = rte_cpu_to_le_32(txd_lower | size);
+		tx_desc->upper.data = 0;
+
+		rte_wmb();
+		txq->tx_tail++;
+		if (txq->tx_tail == txq->nb_tx_desc)
+			txq->tx_tail = 0;
+		rte_io_wmb();
+		tdt_reg_addr = E1000_PCI_REG_ADDR(hw, E1000_TDT(i));
+		E1000_PCI_REG_WRITE_RELAXED(tdt_reg_addr, txq->tx_tail);
+		usec_delay(250);
+	}
+}
+
+static void
+e1000_flush_rx_ring(struct rte_eth_dev *dev)
+{
+	uint32_t rctl, rxdctl;
+	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int i;
+
+	rctl = E1000_READ_REG(hw, E1000_RCTL);
+	E1000_WRITE_REG(hw, E1000_RCTL, rctl & ~E1000_RCTL_EN);
+	E1000_WRITE_FLUSH(hw);
+	usec_delay(150);
+
+	for (i = 0; i < dev->data->nb_rx_queues &&
+		i < E1000_I219_MAX_RX_QUEUE_NUM; i++) {
+		rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(i));
+		/* zero the lower 14 bits (prefetch and host thresholds) */
+		rxdctl &= 0xffffc000;
+
+		/* update thresholds: prefetch threshold to 31,
+		 * host threshold to 1 and make sure the granularity
+		 * is "descriptors" and not "cache lines"
+		 */
+		rxdctl |= (0x1F | (1UL << 8) | E1000_RXDCTL_THRESH_UNIT_DESC);
+
+		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
+	}
+	/* momentarily enable the RX ring for the changes to take effect */
+	E1000_WRITE_REG(hw, E1000_RCTL, rctl | E1000_RCTL_EN);
+	E1000_WRITE_FLUSH(hw);
+	usec_delay(150);
+	E1000_WRITE_REG(hw, E1000_RCTL, rctl & ~E1000_RCTL_EN);
+}
+
+/**
+ * em_flush_desc_rings - remove all descriptors from the descriptor rings
+ *
+ * In i219, the descriptor rings must be emptied before resetting/closing the
+ * HW. Failure to do this will cause the HW to enter a unit hang state which
+ * can only be released by a PCI reset on the device.
+ *
+ */
+
+void
+em_flush_desc_rings(struct rte_eth_dev *dev)
+{
+	uint32_t fextnvm11, tdlen;
+	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint16_t pci_cfg_status = 0;
+
+	fextnvm11 = E1000_READ_REG(hw, E1000_FEXTNVM11);
+	E1000_WRITE_REG(hw, E1000_FEXTNVM11,
+			fextnvm11 | E1000_FEXTNVM11_DISABLE_MULR_FIX);
+	tdlen = E1000_READ_REG(hw, E1000_TDLEN(0));
+	rte_pci_read_config(pci_dev, &pci_cfg_status, sizeof(pci_cfg_status),
+			PCI_CFG_STATUS_REG);
+
+	/* do nothing if we're not in faulty state, or if the queue is empty */
+	if ((pci_cfg_status & FLUSH_DESC_REQUIRED) && tdlen) {
+		/* flush desc ring */
+		e1000_flush_tx_ring(dev);
+		rte_pci_read_config(pci_dev, &pci_cfg_status,
+				sizeof(pci_cfg_status), PCI_CFG_STATUS_REG);
+		if (pci_cfg_status & FLUSH_DESC_REQUIRED)
+			e1000_flush_rx_ring(dev);
+	}
+}
-- 
2.7.4
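
P.S. Illustrative, not part of the patch: the RXDCTL threshold update
in e1000_flush_rx_ring() can be sanity-checked host-side. A
self-contained sketch; the E1000_RXDCTL_THRESH_UNIT_DESC value of
0x01000000 is assumed from the e1000 base code, and the starting
register value is arbitrary:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed value of E1000_RXDCTL_THRESH_UNIT_DESC (bit 24). */
#define RXDCTL_THRESH_UNIT_DESC 0x01000000

/* Mirrors the flush-time RXDCTL computation: clear the lower 14 bits
 * (PTHRESH/HTHRESH), then set PTHRESH=31, HTHRESH=1 and descriptor
 * granularity. */
static uint32_t
flush_rxdctl(uint32_t rxdctl)
{
	rxdctl &= 0xffffc000;
	rxdctl |= 0x1F | (1UL << 8) | RXDCTL_THRESH_UNIT_DESC;
	return rxdctl;
}

int
main(void)
{
	uint32_t v = flush_rxdctl(0x00010208); /* arbitrary pre-flush value */

	assert((v & 0x3F) == 0x1F);        /* PTHRESH == 31 */
	assert(((v >> 8) & 0x3F) == 0x01); /* HTHRESH == 1 */
	printf("RXDCTL after flush setup: 0x%08x\n", v);
	return 0;
}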