From: Jijiang Liu <jijiang.liu@intel.com>
To: dev@dpdk.org
Date: Thu, 15 May 2014 10:18:12 +0800
Message-Id: <1400120294-15871-2-git-send-email-jijiang.liu@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <1400120294-15871-1-git-send-email-jijiang.liu@intel.com>
References: <1400120294-15871-1-git-send-email-jijiang.liu@intel.com>
Subject: [dpdk-dev] [PATCH 1/3] Upgrade shared code: upgrade NIC shared code in ixgbe & e1000 directories

Update the NIC shared (base driver) code in the e1000 and ixgbe directories to a newer release from LAD; the e1000 code moves to cid-shared-code.2014.04.21. Among other changes, the NAHUM6 I218 device-ID #ifdef blocks are removed, file-local helpers are converted from static to STATIC, and i210 PLL-workaround and M88E1512 PHY initialization support is added.

Signed-off-by: Jijiang Liu <jijiang.liu@intel.com>
---
 lib/librte_eal/common/include/rte_pci_dev_ids.h | 8 -
 lib/librte_pmd_e1000/e1000/README | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_80003es2lan.c | 44 +-
 lib/librte_pmd_e1000/e1000/e1000_80003es2lan.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_82540.c | 24 +-
 lib/librte_pmd_e1000/e1000/e1000_82541.c | 14 +-
 lib/librte_pmd_e1000/e1000/e1000_82541.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_82542.c | 4 +-
 lib/librte_pmd_e1000/e1000/e1000_82543.c | 38 +-
 lib/librte_pmd_e1000/e1000/e1000_82543.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_82571.c | 40 +-
 lib/librte_pmd_e1000/e1000/e1000_82571.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_82575.c | 134 +++++-
 lib/librte_pmd_e1000/e1000/e1000_82575.h | 4 +-
 lib/librte_pmd_e1000/e1000/e1000_api.c | 10 +-
 lib/librte_pmd_e1000/e1000/e1000_api.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_defines.h | 7 +-
 lib/librte_pmd_e1000/e1000/e1000_hw.h | 13 +-
 lib/librte_pmd_e1000/e1000/e1000_i210.c | 94 +++-
 lib/librte_pmd_e1000/e1000/e1000_i210.h | 15 +-
 lib/librte_pmd_e1000/e1000/e1000_ich8lan.c | 250 +++-------
 lib/librte_pmd_e1000/e1000/e1000_ich8lan.h | 11 +-
 lib/librte_pmd_e1000/e1000/e1000_mac.c | 5 +-
 lib/librte_pmd_e1000/e1000/e1000_mac.h | 5 +-
 lib/librte_pmd_e1000/e1000/e1000_manage.c | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_manage.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_mbx.c | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_mbx.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_nvm.c | 16 +-
 lib/librte_pmd_e1000/e1000/e1000_nvm.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_phy.c | 38 +-
 lib/librte_pmd_e1000/e1000/e1000_phy.h | 5 +-
 lib/librte_pmd_e1000/e1000/e1000_regs.h | 2 +-
 lib/librte_pmd_e1000/e1000/e1000_vf.c | 4 +-
 lib/librte_pmd_e1000/e1000/e1000_vf.h | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/README | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.c | 136 ++++--
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.h | 3 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.c | 506 +++++++++++-------
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.h | 12 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.c | 54 ++-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.h | 22 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.c | 652 +++++++++++++++--------
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.h | 22 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.c | 4 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.h | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.c | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.h | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.c | 11 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.h | 5 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.c | 26 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.h | 2 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_osdep.h | 7 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.c | 570 +++++++++++++-------
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.h | 23 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h | 230 +++++++--
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.c | 133 +++--
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.h | 10 +-
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.c | 131 +++--
 lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.h | 8 +-
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 2 +-
 61 files changed, 2174 insertions(+), 1212 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_pci_dev_ids.h b/lib/librte_eal/common/include/rte_pci_dev_ids.h
index a51c1ef..a09c942 100644
--- a/lib/librte_eal/common/include/rte_pci_dev_ids.h
+++ b/lib/librte_eal/common/include/rte_pci_dev_ids.h
@@ -232,14 +232,6 @@
 #define E1000_DEV_ID_PCH_LPT_I217_V 0x153B
 #define E1000_DEV_ID_PCH_LPTLP_I218_LM 0x155A
 #define E1000_DEV_ID_PCH_LPTLP_I218_V 0x1559
-#ifdef NAHUM6_LPTH_I218_HW
-#define E1000_DEV_ID_PCH_I218_LM2 0x15A0
-#define E1000_DEV_ID_PCH_I218_V2 0x15A1
-#endif /* NAHUM6_LPTH_I218_HW */
-#ifdef NAHUM6_WPT_HW
-#define E1000_DEV_ID_PCH_I218_LM3 0x15A2 /* Wildcat Point PCH */
-#define E1000_DEV_ID_PCH_I218_V3 0x15A3 /* Wildcat Point PCH */
-#endif /* NAHUM6_WPT_HW */

 /*
  * Tested (supported) on VM emulated HW.
diff --git a/lib/librte_pmd_e1000/e1000/README b/lib/librte_pmd_e1000/e1000/README
index 29981f5..851e54e 100644
--- a/lib/librte_pmd_e1000/e1000/README
+++ b/lib/librte_pmd_e1000/e1000/README
@@ -31,7 +31,7 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

 This directory contains source code of FreeBSD em & igb drivers of version
-cid-shared-code.2012.11.09 released by LAD. The sub-directory of lad/
+cid-shared-code.2014.04.21 released by LAD. The sub-directory of lad/
 contains the original source package.
 Few changes to the original FreeBSD sources were made to:
 - Adopt it for PMD usage mode:
diff --git a/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.c b/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.c
index 60d7c2a..72692d9 100644
--- a/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.c
+++ b/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.c
@@ -1,6 +1,6 @@
 /*******************************************************************************

-Copyright (c) 2001-2012, Intel Corporation
+Copyright (c) 2001-2014, Intel Corporation
 All rights reserved.
Redistribution and use in source and binary forms, with or without @@ -58,16 +58,16 @@ STATIC s32 e1000_reset_hw_80003es2lan(struct e1000_hw *hw); STATIC s32 e1000_init_hw_80003es2lan(struct e1000_hw *hw); STATIC s32 e1000_setup_copper_link_80003es2lan(struct e1000_hw *hw); STATIC void e1000_clear_hw_cntrs_80003es2lan(struct e1000_hw *hw); -static s32 e1000_acquire_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask); -static s32 e1000_cfg_kmrn_10_100_80003es2lan(struct e1000_hw *hw, u16 duplex); -static s32 e1000_cfg_kmrn_1000_80003es2lan(struct e1000_hw *hw); -static s32 e1000_cfg_on_link_up_80003es2lan(struct e1000_hw *hw); -static s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_acquire_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask); +STATIC s32 e1000_cfg_kmrn_10_100_80003es2lan(struct e1000_hw *hw, u16 duplex); +STATIC s32 e1000_cfg_kmrn_1000_80003es2lan(struct e1000_hw *hw); +STATIC s32 e1000_cfg_on_link_up_80003es2lan(struct e1000_hw *hw); +STATIC s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, u16 *data); -static s32 e1000_write_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_write_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, u16 data); -static void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw); -static void e1000_release_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask); +STATIC void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw); +STATIC void e1000_release_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask); STATIC s32 e1000_read_mac_addr_80003es2lan(struct e1000_hw *hw); STATIC void e1000_power_down_phy_copper_80003es2lan(struct e1000_hw *hw); @@ -75,7 +75,7 @@ STATIC void e1000_power_down_phy_copper_80003es2lan(struct e1000_hw *hw); * with a lower bound at "index" and the upper bound at * "index + 5". */ -static const u16 e1000_gg82563_cable_length_table[] = { +STATIC const u16 e1000_gg82563_cable_length_table[] = { 0, 60, 115, 150, 150, 60, 115, 150, 180, 180, 0xFF }; #define GG82563_CABLE_LENGTH_TABLE_SIZE \ (sizeof(e1000_gg82563_cable_length_table) / \ @@ -398,7 +398,7 @@ STATIC void e1000_release_nvm_80003es2lan(struct e1000_hw *hw) * Acquire the SW/FW semaphore to access the PHY or NVM. The mask * will also specify which port we're acquiring the lock for. **/ -static s32 e1000_acquire_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask) +STATIC s32 e1000_acquire_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask) { u32 swfw_sync; u32 swmask = mask; @@ -445,7 +445,7 @@ static s32 e1000_acquire_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask) * Release the SW/FW semaphore used to access the PHY or NVM. The mask * will also specify which port we're releasing the lock for. 
**/ -static void e1000_release_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask) +STATIC void e1000_release_swfw_sync_80003es2lan(struct e1000_hw *hw, u16 mask) { u32 swfw_sync; @@ -850,8 +850,10 @@ STATIC s32 e1000_reset_hw_80003es2lan(struct e1000_hw *hw) e1000_release_phy_80003es2lan(hw); /* Disable IBIST slave mode (far-end loopback) */ - e1000_read_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM, - &kum_reg_data); + ret_val = e1000_read_kmrn_reg_80003es2lan(hw, + E1000_KMRNCTRLSTA_INBAND_PARAM, &kum_reg_data); + if (ret_val) + return ret_val; kum_reg_data |= E1000_KMRNCTRLSTA_IBIST_DISABLE; e1000_write_kmrn_reg_80003es2lan(hw, E1000_KMRNCTRLSTA_INBAND_PARAM, kum_reg_data); @@ -977,7 +979,7 @@ STATIC s32 e1000_init_hw_80003es2lan(struct e1000_hw *hw) * * Initializes required hardware-dependent bits needed for normal operation. **/ -static void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw) +STATIC void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw) { u32 reg; @@ -1024,7 +1026,7 @@ static void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw) * * Setup some GG82563 PHY registers for obtaining link **/ -static s32 e1000_copper_link_setup_gg82563_80003es2lan(struct e1000_hw *hw) +STATIC s32 e1000_copper_link_setup_gg82563_80003es2lan(struct e1000_hw *hw) { struct e1000_phy_info *phy = &hw->phy; s32 ret_val; @@ -1231,7 +1233,7 @@ STATIC s32 e1000_setup_copper_link_80003es2lan(struct e1000_hw *hw) * Configure the KMRN interface by applying last minute quirks for * 10/100 operation. **/ -static s32 e1000_cfg_on_link_up_80003es2lan(struct e1000_hw *hw) +STATIC s32 e1000_cfg_on_link_up_80003es2lan(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; u16 speed; @@ -1262,7 +1264,7 @@ static s32 e1000_cfg_on_link_up_80003es2lan(struct e1000_hw *hw) * Configure the KMRN interface by applying last minute quirks for * 10/100 operation. **/ -static s32 e1000_cfg_kmrn_10_100_80003es2lan(struct e1000_hw *hw, u16 duplex) +STATIC s32 e1000_cfg_kmrn_10_100_80003es2lan(struct e1000_hw *hw, u16 duplex) { s32 ret_val; u32 tipg; @@ -1313,7 +1315,7 @@ static s32 e1000_cfg_kmrn_10_100_80003es2lan(struct e1000_hw *hw, u16 duplex) * Configure the KMRN interface by applying last minute quirks for * gigabit operation. **/ -static s32 e1000_cfg_kmrn_1000_80003es2lan(struct e1000_hw *hw) +STATIC s32 e1000_cfg_kmrn_1000_80003es2lan(struct e1000_hw *hw) { s32 ret_val; u16 reg_data, reg_data2; @@ -1364,7 +1366,7 @@ static s32 e1000_cfg_kmrn_1000_80003es2lan(struct e1000_hw *hw) * using the kumeran interface. The information retrieved is stored in data. * Release the semaphore before exiting. **/ -static s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, u16 *data) { u32 kmrnctrlsta; @@ -1401,7 +1403,7 @@ static s32 e1000_read_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, * at the offset using the kumeran interface. Release semaphore * before exiting. 
**/ -static s32 e1000_write_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_write_kmrn_reg_80003es2lan(struct e1000_hw *hw, u32 offset, u16 data) { u32 kmrnctrlsta; diff --git a/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.h b/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.h index b187e8f..f5fe967 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.h +++ b/lib/librte_pmd_e1000/e1000/e1000_80003es2lan.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_82540.c b/lib/librte_pmd_e1000/e1000/e1000_82540.c index 451928d..fc1fa94 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82540.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82540.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -47,12 +47,12 @@ POSSIBILITY OF SUCH DAMAGE. STATIC s32 e1000_init_phy_params_82540(struct e1000_hw *hw); STATIC s32 e1000_init_nvm_params_82540(struct e1000_hw *hw); STATIC s32 e1000_init_mac_params_82540(struct e1000_hw *hw); -static s32 e1000_adjust_serdes_amplitude_82540(struct e1000_hw *hw); +STATIC s32 e1000_adjust_serdes_amplitude_82540(struct e1000_hw *hw); STATIC void e1000_clear_hw_cntrs_82540(struct e1000_hw *hw); STATIC s32 e1000_init_hw_82540(struct e1000_hw *hw); STATIC s32 e1000_reset_hw_82540(struct e1000_hw *hw); -static s32 e1000_set_phy_mode_82540(struct e1000_hw *hw); -static s32 e1000_set_vco_speed_82540(struct e1000_hw *hw); +STATIC s32 e1000_set_phy_mode_82540(struct e1000_hw *hw); +STATIC s32 e1000_set_vco_speed_82540(struct e1000_hw *hw); STATIC s32 e1000_setup_copper_link_82540(struct e1000_hw *hw); STATIC s32 e1000_setup_fiber_serdes_link_82540(struct e1000_hw *hw); STATIC void e1000_power_down_phy_copper_82540(struct e1000_hw *hw); @@ -65,7 +65,7 @@ STATIC s32 e1000_read_mac_addr_82540(struct e1000_hw *hw); STATIC s32 e1000_init_phy_params_82540(struct e1000_hw *hw) { struct e1000_phy_info *phy = &hw->phy; - s32 ret_val = E1000_SUCCESS; + s32 ret_val; phy->addr = 1; phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT; @@ -328,7 +328,7 @@ STATIC s32 e1000_init_hw_82540(struct e1000_hw *hw) { struct e1000_mac_info *mac = &hw->mac; u32 txdctl, ctrl_ext; - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 i; DEBUGFUNC("e1000_init_hw_82540"); @@ -410,7 +410,7 @@ STATIC s32 e1000_init_hw_82540(struct e1000_hw *hw) STATIC s32 e1000_setup_copper_link_82540(struct e1000_hw *hw) { u32 ctrl; - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 data; DEBUGFUNC("e1000_setup_copper_link_82540"); @@ -495,9 +495,9 @@ out: * * Adjust the SERDES output amplitude based on the EEPROM settings. **/ -static s32 e1000_adjust_serdes_amplitude_82540(struct e1000_hw *hw) +STATIC s32 e1000_adjust_serdes_amplitude_82540(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 nvm_data; DEBUGFUNC("e1000_adjust_serdes_amplitude_82540"); @@ -525,9 +525,9 @@ out: * * Set the VCO speed to improve Bit Error Rate (BER) performance. 
**/ -static s32 e1000_set_vco_speed_82540(struct e1000_hw *hw) +STATIC s32 e1000_set_vco_speed_82540(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 default_page = 0; u16 phy_data; @@ -584,7 +584,7 @@ out: * 1. Do a PHY soft reset. * 2. Restart auto-negotiation or force link. **/ -static s32 e1000_set_phy_mode_82540(struct e1000_hw *hw) +STATIC s32 e1000_set_phy_mode_82540(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; u16 nvm_data; diff --git a/lib/librte_pmd_e1000/e1000/e1000_82541.c b/lib/librte_pmd_e1000/e1000/e1000_82541.c index 6f47096..952aea2 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82541.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82541.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -58,12 +58,12 @@ STATIC s32 e1000_set_d3_lplu_state_82541(struct e1000_hw *hw, STATIC s32 e1000_setup_led_82541(struct e1000_hw *hw); STATIC s32 e1000_cleanup_led_82541(struct e1000_hw *hw); STATIC void e1000_clear_hw_cntrs_82541(struct e1000_hw *hw); -static s32 e1000_config_dsp_after_link_change_82541(struct e1000_hw *hw, +STATIC s32 e1000_config_dsp_after_link_change_82541(struct e1000_hw *hw, bool link_up); -static s32 e1000_phy_init_script_82541(struct e1000_hw *hw); +STATIC s32 e1000_phy_init_script_82541(struct e1000_hw *hw); STATIC void e1000_power_down_phy_copper_82541(struct e1000_hw *hw); -static const u16 e1000_igp_cable_length_table[] = { +STATIC const u16 e1000_igp_cable_length_table[] = { 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 10, 10, 10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 25, 25, 25, 25, 25, 25, 25, 30, 30, 30, 30, 40, 40, 40, 40, 40, 40, 40, 40, 40, 50, 50, 50, 50, 50, 50, 50, 60, 60, @@ -83,7 +83,7 @@ static const u16 e1000_igp_cable_length_table[] = { STATIC s32 e1000_init_phy_params_82541(struct e1000_hw *hw) { struct e1000_phy_info *phy = &hw->phy; - s32 ret_val = E1000_SUCCESS; + s32 ret_val; DEBUGFUNC("e1000_init_phy_params_82541"); @@ -661,7 +661,7 @@ out: * 82541_rev_2 & 82547_rev_2 have the capability to configure the DSP when a * gigabit link is achieved to improve link quality. **/ -static s32 e1000_config_dsp_after_link_change_82541(struct e1000_hw *hw, +STATIC s32 e1000_config_dsp_after_link_change_82541(struct e1000_hw *hw, bool link_up) { struct e1000_phy_info *phy = &hw->phy; @@ -1084,7 +1084,7 @@ out: * * Initializes the IGP PHY. **/ -static s32 e1000_phy_init_script_82541(struct e1000_hw *hw) +STATIC s32 e1000_phy_init_script_82541(struct e1000_hw *hw) { struct e1000_dev_spec_82541 *dev_spec = &hw->dev_spec._82541; u32 ret_val; diff --git a/lib/librte_pmd_e1000/e1000/e1000_82541.h b/lib/librte_pmd_e1000/e1000/e1000_82541.h index c8b495b..0f50f55 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82541.h +++ b/lib/librte_pmd_e1000/e1000/e1000_82541.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_82542.c b/lib/librte_pmd_e1000/e1000/e1000_82542.c index 0d5b536..afea469 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82542.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82542.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -316,7 +316,7 @@ STATIC s32 e1000_init_hw_82542(struct e1000_hw *hw) STATIC s32 e1000_setup_link_82542(struct e1000_hw *hw) { struct e1000_mac_info *mac = &hw->mac; - s32 ret_val = E1000_SUCCESS; + s32 ret_val; DEBUGFUNC("e1000_setup_link_82542"); diff --git a/lib/librte_pmd_e1000/e1000/e1000_82543.c b/lib/librte_pmd_e1000/e1000/e1000_82543.c index da76965..36335ba 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82543.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82543.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -63,16 +63,16 @@ STATIC s32 e1000_led_off_82543(struct e1000_hw *hw); STATIC void e1000_write_vfta_82543(struct e1000_hw *hw, u32 offset, u32 value); STATIC void e1000_clear_hw_cntrs_82543(struct e1000_hw *hw); -static s32 e1000_config_mac_to_phy_82543(struct e1000_hw *hw); -static bool e1000_init_phy_disabled_82543(struct e1000_hw *hw); -static void e1000_lower_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl); -static s32 e1000_polarity_reversal_workaround_82543(struct e1000_hw *hw); -static void e1000_raise_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl); -static u16 e1000_shift_in_mdi_bits_82543(struct e1000_hw *hw); -static void e1000_shift_out_mdi_bits_82543(struct e1000_hw *hw, u32 data, +STATIC s32 e1000_config_mac_to_phy_82543(struct e1000_hw *hw); +STATIC bool e1000_init_phy_disabled_82543(struct e1000_hw *hw); +STATIC void e1000_lower_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl); +STATIC s32 e1000_polarity_reversal_workaround_82543(struct e1000_hw *hw); +STATIC void e1000_raise_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl); +STATIC u16 e1000_shift_in_mdi_bits_82543(struct e1000_hw *hw); +STATIC void e1000_shift_out_mdi_bits_82543(struct e1000_hw *hw, u32 data, u16 count); -static bool e1000_tbi_compatibility_enabled_82543(struct e1000_hw *hw); -static void e1000_set_tbi_sbp_82543(struct e1000_hw *hw, bool state); +STATIC bool e1000_tbi_compatibility_enabled_82543(struct e1000_hw *hw); +STATIC void e1000_set_tbi_sbp_82543(struct e1000_hw *hw, bool state); /** * e1000_init_phy_params_82543 - Init PHY func ptrs. @@ -277,7 +277,7 @@ void e1000_init_function_pointers_82543(struct e1000_hw *hw) * Returns the current status of 10-bit Interface (TBI) compatibility * (enabled/disabled). **/ -static bool e1000_tbi_compatibility_enabled_82543(struct e1000_hw *hw) +STATIC bool e1000_tbi_compatibility_enabled_82543(struct e1000_hw *hw) { struct e1000_dev_spec_82543 *dev_spec = &hw->dev_spec._82543; bool state = false; @@ -354,7 +354,7 @@ out: * * Enables or disabled 10-bit Interface (TBI) store bad packet (SBP). 
**/ -static void e1000_set_tbi_sbp_82543(struct e1000_hw *hw, bool state) +STATIC void e1000_set_tbi_sbp_82543(struct e1000_hw *hw, bool state) { struct e1000_dev_spec_82543 *dev_spec = &hw->dev_spec._82543; @@ -375,7 +375,7 @@ static void e1000_set_tbi_sbp_82543(struct e1000_hw *hw, bool state) * Returns the current status of whether PHY initialization is disabled. * True if PHY initialization is disabled else false. **/ -static bool e1000_init_phy_disabled_82543(struct e1000_hw *hw) +STATIC bool e1000_init_phy_disabled_82543(struct e1000_hw *hw) { struct e1000_dev_spec_82543 *dev_spec = &hw->dev_spec._82543; bool ret_val; @@ -582,7 +582,7 @@ out: * Raise the management data input clock by setting the MDC bit in the control * register. **/ -static void e1000_raise_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) +STATIC void e1000_raise_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) { /* * Raise the clock input to the Management Data Clock (by setting the @@ -601,7 +601,7 @@ static void e1000_raise_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) * Lower the management data input clock by clearing the MDC bit in the * control register. **/ -static void e1000_lower_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) +STATIC void e1000_lower_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) { /* * Lower the clock input to the Management Data Clock (by clearing the @@ -622,7 +622,7 @@ static void e1000_lower_mdi_clk_82543(struct e1000_hw *hw, u32 *ctrl) * "data" parameter will be shifted out to the PHY one bit at a time. * In order to do this, "data" must be broken down into bits. **/ -static void e1000_shift_out_mdi_bits_82543(struct e1000_hw *hw, u32 data, +STATIC void e1000_shift_out_mdi_bits_82543(struct e1000_hw *hw, u32 data, u16 count) { u32 ctrl, mask; @@ -674,7 +674,7 @@ static void e1000_shift_out_mdi_bits_82543(struct e1000_hw *hw, u32 data, * the PHY (setting the MDC bit), and then reading the value of the data out * MDIO bit. **/ -static u16 e1000_shift_in_mdi_bits_82543(struct e1000_hw *hw) +STATIC u16 e1000_shift_in_mdi_bits_82543(struct e1000_hw *hw) { u32 ctrl; u16 data = 0; @@ -759,7 +759,7 @@ out: * inadvertently. To workaround the issue, we disable the transmitter on * the PHY until we have established the link partner's link parameters. **/ -static s32 e1000_polarity_reversal_workaround_82543(struct e1000_hw *hw) +STATIC s32 e1000_polarity_reversal_workaround_82543(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; u16 mii_status_reg; @@ -1395,7 +1395,7 @@ out: * For the 82543 silicon, we need to set the MAC to match the settings * of the PHY, even if the PHY is auto-negotiating. **/ -static s32 e1000_config_mac_to_phy_82543(struct e1000_hw *hw) +STATIC s32 e1000_config_mac_to_phy_82543(struct e1000_hw *hw) { u32 ctrl; s32 ret_val = E1000_SUCCESS; diff --git a/lib/librte_pmd_e1000/e1000/e1000_82543.h b/lib/librte_pmd_e1000/e1000/e1000_82543.h index 7c2494c..51056db 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82543.h +++ b/lib/librte_pmd_e1000/e1000/e1000_82543.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_82571.c b/lib/librte_pmd_e1000/e1000/e1000_82571.c index 9c6fb15..bb5d429 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82571.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82571.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -69,19 +69,19 @@ STATIC s32 e1000_check_for_serdes_link_82571(struct e1000_hw *hw); STATIC s32 e1000_setup_fiber_serdes_link_82571(struct e1000_hw *hw); STATIC s32 e1000_valid_led_default_82571(struct e1000_hw *hw, u16 *data); STATIC void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw); -static s32 e1000_get_hw_semaphore_82571(struct e1000_hw *hw); -static s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw); -static s32 e1000_get_phy_id_82571(struct e1000_hw *hw); -static void e1000_put_hw_semaphore_82571(struct e1000_hw *hw); -static void e1000_put_hw_semaphore_82573(struct e1000_hw *hw); -static s32 e1000_get_hw_semaphore_82574(struct e1000_hw *hw); -static void e1000_put_hw_semaphore_82574(struct e1000_hw *hw); +STATIC s32 e1000_get_hw_semaphore_82571(struct e1000_hw *hw); +STATIC s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw); +STATIC s32 e1000_get_phy_id_82571(struct e1000_hw *hw); +STATIC void e1000_put_hw_semaphore_82571(struct e1000_hw *hw); +STATIC void e1000_put_hw_semaphore_82573(struct e1000_hw *hw); +STATIC s32 e1000_get_hw_semaphore_82574(struct e1000_hw *hw); +STATIC void e1000_put_hw_semaphore_82574(struct e1000_hw *hw); STATIC s32 e1000_set_d0_lplu_state_82574(struct e1000_hw *hw, bool active); STATIC s32 e1000_set_d3_lplu_state_82574(struct e1000_hw *hw, bool active); -static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw); -static s32 e1000_write_nvm_eewr_82571(struct e1000_hw *hw, u16 offset, +STATIC void e1000_initialize_hw_bits_82571(struct e1000_hw *hw); +STATIC s32 e1000_write_nvm_eewr_82571(struct e1000_hw *hw, u16 offset, u16 words, u16 *data); STATIC s32 e1000_read_mac_addr_82571(struct e1000_hw *hw); STATIC void e1000_power_down_phy_copper_82571(struct e1000_hw *hw); @@ -460,7 +460,7 @@ void e1000_init_function_pointers_82571(struct e1000_hw *hw) * Reads the PHY registers and stores the PHY ID and possibly the PHY * revision in the hardware structure. **/ -static s32 e1000_get_phy_id_82571(struct e1000_hw *hw) +STATIC s32 e1000_get_phy_id_82571(struct e1000_hw *hw) { struct e1000_phy_info *phy = &hw->phy; s32 ret_val; @@ -510,7 +510,7 @@ static s32 e1000_get_phy_id_82571(struct e1000_hw *hw) * * Acquire the HW semaphore to access the PHY or NVM **/ -static s32 e1000_get_hw_semaphore_82571(struct e1000_hw *hw) +STATIC s32 e1000_get_hw_semaphore_82571(struct e1000_hw *hw) { u32 swsm; s32 sw_timeout = hw->nvm.word_size + 1; @@ -571,7 +571,7 @@ static s32 e1000_get_hw_semaphore_82571(struct e1000_hw *hw) * * Release hardware semaphore used to access the PHY or NVM **/ -static void e1000_put_hw_semaphore_82571(struct e1000_hw *hw) +STATIC void e1000_put_hw_semaphore_82571(struct e1000_hw *hw) { u32 swsm; @@ -591,7 +591,7 @@ static void e1000_put_hw_semaphore_82571(struct e1000_hw *hw) * Acquire the HW semaphore during reset. 
* **/ -static s32 e1000_get_hw_semaphore_82573(struct e1000_hw *hw) +STATIC s32 e1000_get_hw_semaphore_82573(struct e1000_hw *hw) { u32 extcnf_ctrl; s32 i = 0; @@ -628,7 +628,7 @@ static s32 e1000_get_hw_semaphore_82573(struct e1000_hw *hw) * Release hardware semaphore used during reset. * **/ -static void e1000_put_hw_semaphore_82573(struct e1000_hw *hw) +STATIC void e1000_put_hw_semaphore_82573(struct e1000_hw *hw) { u32 extcnf_ctrl; @@ -646,7 +646,7 @@ static void e1000_put_hw_semaphore_82573(struct e1000_hw *hw) * Acquire the HW semaphore to access the PHY or NVM. * **/ -static s32 e1000_get_hw_semaphore_82574(struct e1000_hw *hw) +STATIC s32 e1000_get_hw_semaphore_82574(struct e1000_hw *hw) { s32 ret_val; @@ -666,7 +666,7 @@ static s32 e1000_get_hw_semaphore_82574(struct e1000_hw *hw) * Release hardware semaphore used to access the PHY or NVM * **/ -static void e1000_put_hw_semaphore_82574(struct e1000_hw *hw) +STATIC void e1000_put_hw_semaphore_82574(struct e1000_hw *hw) { DEBUGFUNC("e1000_put_hw_semaphore_82574"); @@ -907,7 +907,7 @@ STATIC s32 e1000_validate_nvm_checksum_82571(struct e1000_hw *hw) * If e1000_update_nvm_checksum is not called after this function, the * EEPROM will most likely contain an invalid checksum. **/ -static s32 e1000_write_nvm_eewr_82571(struct e1000_hw *hw, u16 offset, +STATIC s32 e1000_write_nvm_eewr_82571(struct e1000_hw *hw, u16 offset, u16 words, u16 *data) { struct e1000_nvm_info *nvm = &hw->nvm; @@ -1266,7 +1266,7 @@ STATIC s32 e1000_init_hw_82571(struct e1000_hw *hw) * * Initializes required hardware-dependent bits needed for normal operation. **/ -static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw) +STATIC void e1000_initialize_hw_bits_82571(struct e1000_hw *hw) { u32 reg; @@ -1887,7 +1887,7 @@ void e1000_set_laa_state_82571(struct e1000_hw *hw, bool state) * the checksum. Otherwise, if bit 15 is set and the checksum is incorrect, * we need to return bad checksum. **/ -static s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw) +STATIC s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw) { struct e1000_nvm_info *nvm = &hw->nvm; s32 ret_val; diff --git a/lib/librte_pmd_e1000/e1000/e1000_82571.h b/lib/librte_pmd_e1000/e1000/e1000_82571.h index 6aa7a21..bdf6446 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82571.h +++ b/lib/librte_pmd_e1000/e1000/e1000_82571.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_82575.c b/lib/librte_pmd_e1000/e1000/e1000_82575.c index fd15b7b..25fa672 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82575.c +++ b/lib/librte_pmd_e1000/e1000/e1000_82575.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without @@ -55,7 +55,6 @@ STATIC s32 e1000_check_for_link_media_swap(struct e1000_hw *hw); STATIC s32 e1000_get_cfg_done_82575(struct e1000_hw *hw); STATIC s32 e1000_get_link_up_info_82575(struct e1000_hw *hw, u16 *speed, u16 *duplex); -STATIC s32 e1000_init_hw_82575(struct e1000_hw *hw); STATIC s32 e1000_phy_hw_reset_sgmii_82575(struct e1000_hw *hw); STATIC s32 e1000_read_phy_reg_sgmii_82575(struct e1000_hw *hw, u32 offset, u16 *data); @@ -80,11 +79,11 @@ STATIC s32 e1000_write_phy_reg_sgmii_82575(struct e1000_hw *hw, u32 offset, u16 data); STATIC void e1000_clear_hw_cntrs_82575(struct e1000_hw *hw); STATIC s32 e1000_acquire_swfw_sync_82575(struct e1000_hw *hw, u16 mask); -static s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw, +STATIC s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw, u16 *speed, u16 *duplex); -static s32 e1000_get_phy_id_82575(struct e1000_hw *hw); +STATIC s32 e1000_get_phy_id_82575(struct e1000_hw *hw); STATIC void e1000_release_swfw_sync_82575(struct e1000_hw *hw, u16 mask); -static bool e1000_sgmii_active_82575(struct e1000_hw *hw); +STATIC bool e1000_sgmii_active_82575(struct e1000_hw *hw); STATIC s32 e1000_reset_init_script_82575(struct e1000_hw *hw); STATIC s32 e1000_read_mac_addr_82575(struct e1000_hw *hw); STATIC void e1000_config_collision_dist_82575(struct e1000_hw *hw); @@ -116,10 +115,11 @@ STATIC void e1000_lower_i2c_clk(struct e1000_hw *hw, u32 *i2cctl); STATIC s32 e1000_set_i2c_data(struct e1000_hw *hw, u32 *i2cctl, bool data); STATIC bool e1000_get_i2c_data(u32 *i2cctl); -static const u16 e1000_82580_rxpbs_table[] = { +STATIC const u16 e1000_82580_rxpbs_table[] = { 36, 72, 144, 1, 2, 4, 8, 16, 35, 70, 140 }; #define E1000_82580_RXPBS_TABLE_SIZE \ - (sizeof(e1000_82580_rxpbs_table)/sizeof(u16)) + (sizeof(e1000_82580_rxpbs_table) / \ + sizeof(e1000_82580_rxpbs_table[0])) /** @@ -272,6 +272,11 @@ STATIC s32 e1000_init_phy_params_82575(struct e1000_hw *hw) hw->mac.ops.check_for_link = e1000_check_for_link_media_swap; } + if (phy->id == M88E1512_E_PHY_ID) { + ret_val = e1000_initialize_M88E1512_phy(hw); + if (ret_val) + goto out; + } break; case IGP03E1000_E_PHY_ID: case IGP04E1000_E_PHY_ID: @@ -386,6 +391,7 @@ s32 e1000_init_nvm_params_82575(struct e1000_hw *hw) nvm->ops.update = e1000_update_nvm_checksum_82580; break; case e1000_i350: + case e1000_i354: nvm->ops.validate = e1000_validate_nvm_checksum_i350; nvm->ops.update = e1000_update_nvm_checksum_i350; break; @@ -448,6 +454,9 @@ STATIC s32 e1000_init_mac_params_82575(struct e1000_hw *hw) else mac->ops.reset_hw = e1000_reset_hw_82575; /* hw initialization */ + if ((mac->type == e1000_i210) || (mac->type == e1000_i211)) + mac->ops.init_hw = e1000_init_hw_i210; + else mac->ops.init_hw = e1000_init_hw_82575; /* link setup */ mac->ops.setup_link = e1000_setup_link_generic; @@ -748,6 +757,7 @@ out: STATIC s32 e1000_phy_hw_reset_sgmii_82575(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; + struct e1000_phy_info *phy = &hw->phy; DEBUGFUNC("e1000_phy_hw_reset_sgmii_82575"); @@ -770,7 +780,11 @@ STATIC s32 e1000_phy_hw_reset_sgmii_82575(struct e1000_hw *hw) goto out; ret_val = hw->phy.ops.commit(hw); + if (ret_val) + goto out; + if (phy->id == M88E1512_E_PHY_ID) + ret_val = e1000_initialize_M88E1512_phy(hw); out: return ret_val; } @@ -1244,6 +1258,11 @@ STATIC s32 e1000_check_for_link_media_swap(struct e1000_hw *hw) if (ret_val) return ret_val; + /* reset page to 0 */ + ret_val = phy->ops.write_reg(hw, E1000_M88E1112_PAGE_ADDR, 
0); + if (ret_val) + return ret_val; + if (data & E1000_M88E1112_STATUS_LINK) port = E1000_MEDIA_PORT_OTHER; @@ -1296,7 +1315,7 @@ STATIC void e1000_power_up_serdes_link_82575(struct e1000_hw *hw) * Using the physical coding sub-layer (PCS), retrieve the current speed and * duplex, then store the values in the pointers provided. **/ -static s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw, +STATIC s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw, u16 *speed, u16 *duplex) { struct e1000_mac_info *mac = &hw->mac; @@ -1459,7 +1478,7 @@ STATIC s32 e1000_reset_hw_82575(struct e1000_hw *hw) * * This inits the hardware readying it for operation. **/ -STATIC s32 e1000_init_hw_82575(struct e1000_hw *hw) +s32 e1000_init_hw_82575(struct e1000_hw *hw) { struct e1000_mac_info *mac = &hw->mac; s32 ret_val; @@ -1928,7 +1947,7 @@ out: * which can be enabled for use in the embedded applications. Simply * return the current state of the sgmii interface. **/ -static bool e1000_sgmii_active_82575(struct e1000_hw *hw) +STATIC bool e1000_sgmii_active_82575(struct e1000_hw *hw) { struct e1000_dev_spec_82575 *dev_spec = &hw->dev_spec._82575; return dev_spec->sgmii_active; @@ -1978,7 +1997,7 @@ STATIC s32 e1000_reset_init_script_82575(struct e1000_hw *hw) **/ STATIC s32 e1000_read_mac_addr_82575(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; DEBUGFUNC("e1000_read_mac_addr_82575"); @@ -2610,7 +2629,7 @@ out: **/ STATIC s32 e1000_validate_nvm_checksum_82580(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 eeprom_regions_count = 1; u16 j, nvm_data; u16 nvm_offset; @@ -2750,7 +2769,7 @@ out: STATIC s32 __e1000_access_emi_reg(struct e1000_hw *hw, u16 address, u16 *data, bool read) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; DEBUGFUNC("__e1000_access_emi_reg"); @@ -2780,6 +2799,95 @@ s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data) } /** + * e1000_initialize_M88E1512_phy - Initialize M88E1512 PHY + * @hw: pointer to the HW structure + * + * Initialize Marverl 1512 to work correctly with Avoton. + **/ +s32 e1000_initialize_M88E1512_phy(struct e1000_hw *hw) +{ + struct e1000_phy_info *phy = &hw->phy; + s32 ret_val = E1000_SUCCESS; + + DEBUGFUNC("e1000_initialize_M88E1512_phy"); + + /* Check if this is correct PHY. */ + if (phy->id != M88E1512_E_PHY_ID) + goto out; + + /* Switch to PHY page 0xFF. */ + ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FF); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x214B); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2144); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0x0C28); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2146); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xB233); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x214D); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_2, 0xCC0C); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_1, 0x2159); + if (ret_val) + goto out; + + /* Switch to PHY page 0xFB. */ + ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x00FB); + if (ret_val) + goto out; + + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_CFG_REG_3, 0x000D); + if (ret_val) + goto out; + + /* Switch to PHY page 0x12. 
*/ + ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0x12); + if (ret_val) + goto out; + + /* Change mode to SGMII-to-Copper */ + ret_val = phy->ops.write_reg(hw, E1000_M88E1512_MODE, 0x8001); + if (ret_val) + goto out; + + /* Return the PHY to page 0. */ + ret_val = phy->ops.write_reg(hw, E1000_M88E1543_PAGE_ADDR, 0); + if (ret_val) + goto out; + + ret_val = phy->ops.commit(hw); + if (ret_val) { + DEBUGOUT("Error committing the PHY changes\n"); + return ret_val; + } + + msec_delay(1000); +out: + return ret_val; +} + +/** * e1000_set_eee_i350 - Enable/disable EEE support * @hw: pointer to the HW structure * diff --git a/lib/librte_pmd_e1000/e1000/e1000_82575.h b/lib/librte_pmd_e1000/e1000/e1000_82575.h index 2450e6a..09b7bf2 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_82575.h +++ b/lib/librte_pmd_e1000/e1000/e1000_82575.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -479,6 +479,7 @@ void e1000_vmdq_set_loopback_pf(struct e1000_hw *hw, bool enable); void e1000_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf); void e1000_vmdq_set_replication_pf(struct e1000_hw *hw, bool enable); s32 e1000_init_nvm_params_82575(struct e1000_hw *hw); +s32 e1000_init_hw_82575(struct e1000_hw *hw); enum e1000_promisc_type { e1000_promisc_disabled = 0, /* all promisc modes disabled */ @@ -496,6 +497,7 @@ s32 e1000_read_emi_reg(struct e1000_hw *hw, u16 addr, u16 *data); s32 e1000_set_eee_i350(struct e1000_hw *); s32 e1000_set_eee_i354(struct e1000_hw *); s32 e1000_get_eee_status_i354(struct e1000_hw *, bool *); +s32 e1000_initialize_M88E1512_phy(struct e1000_hw *hw); /* I2C SDA and SCL timing parameters for standard mode */ #define E1000_I2C_T_HD_STA 4 diff --git a/lib/librte_pmd_e1000/e1000/e1000_api.c b/lib/librte_pmd_e1000/e1000/e1000_api.c index efffab5..a064565 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_api.c +++ b/lib/librte_pmd_e1000/e1000/e1000_api.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -292,14 +292,6 @@ s32 e1000_set_mac_type(struct e1000_hw *hw) case E1000_DEV_ID_PCH_LPT_I217_V: case E1000_DEV_ID_PCH_LPTLP_I218_LM: case E1000_DEV_ID_PCH_LPTLP_I218_V: -#ifdef NAHUM6_LPTH_I218_HW - case E1000_DEV_ID_PCH_I218_LM2: - case E1000_DEV_ID_PCH_I218_V2: -#endif /* NAHUM6_LPTH_I218_HW */ -#ifdef NAHUM6_WPT_HW - case E1000_DEV_ID_PCH_I218_LM3: - case E1000_DEV_ID_PCH_I218_V3: -#endif /* NAHUM6_WPT_HW */ mac->type = e1000_pch_lpt; break; case E1000_DEV_ID_82575EB_COPPER: diff --git a/lib/librte_pmd_e1000/e1000/e1000_api.h b/lib/librte_pmd_e1000/e1000/e1000_api.h index 619a852..02b16da 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_api.h +++ b/lib/librte_pmd_e1000/e1000/e1000_api.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_defines.h b/lib/librte_pmd_e1000/e1000/e1000_defines.h index febf304..278c507 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_defines.h +++ b/lib/librte_pmd_e1000/e1000/e1000_defines.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -76,6 +76,7 @@ POSSIBILITY OF SUCH DAMAGE. #define E1000_CTRL_EXT_EE_RST 0x00002000 /* Reinitialize from EEPROM */ /* Physical Func Reset Done Indication */ #define E1000_CTRL_EXT_PFRSTD 0x00004000 +#define E1000_CTRL_EXT_SDLPE 0X00040000 /* SerDes Low Power Enable */ #define E1000_CTRL_EXT_SPD_BYPS 0x00008000 /* Speed Select Bypass */ #define E1000_CTRL_EXT_RO_DIS 0x00020000 /* Relaxed Ordering disable */ #define E1000_CTRL_EXT_DMA_DYN_CLK_EN 0x00080000 /* DMA Dynamic Clk Gating */ @@ -852,6 +853,10 @@ POSSIBILITY OF SUCH DAMAGE. #define E1000_PCS_STATUS_ADDR_I354 1 #define E1000_PCS_STATUS_RX_LPI_RCVD 0x0400 #define E1000_PCS_STATUS_TX_LPI_RCVD 0x0800 +#define E1000_M88E1512_CFG_REG_1 0x0010 +#define E1000_M88E1512_CFG_REG_2 0x0011 +#define E1000_M88E1512_CFG_REG_3 0x0007 +#define E1000_M88E1512_MODE 0x0014 #define E1000_EEE_SU_LPI_CLK_STP 0x00800000 /* EEE LPI Clock Stop */ #define E1000_EEE_LP_ADV_DEV_I210 7 /* EEE LP Adv Device */ #define E1000_EEE_LP_ADV_ADDR_I210 61 /* EEE LP Adv Register */ diff --git a/lib/librte_pmd_e1000/e1000/e1000_hw.h b/lib/librte_pmd_e1000/e1000/e1000_hw.h index bc741c8..4dd92a3 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_hw.h +++ b/lib/librte_pmd_e1000/e1000/e1000_hw.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -132,14 +132,6 @@ struct e1000_hw; #define E1000_DEV_ID_PCH_LPT_I217_V 0x153B #define E1000_DEV_ID_PCH_LPTLP_I218_LM 0x155A #define E1000_DEV_ID_PCH_LPTLP_I218_V 0x1559 -#ifdef NAHUM6_LPTH_I218_HW -#define E1000_DEV_ID_PCH_I218_LM2 0x15A0 -#define E1000_DEV_ID_PCH_I218_V2 0x15A1 -#endif /* NAHUM6_LPTH_I218_HW */ -#ifdef NAHUM6_WPT_HW -#define E1000_DEV_ID_PCH_I218_LM3 0x15A2 /* Wildcat Point PCH */ -#define E1000_DEV_ID_PCH_I218_V3 0x15A3 /* Wildcat Point PCH */ -#endif /* NAHUM6_WPT_HW */ #define E1000_DEV_ID_82576 0x10C9 #define E1000_DEV_ID_82576_FIBER 0x10E6 #define E1000_DEV_ID_82576_SERDES 0x10E7 @@ -959,10 +951,9 @@ struct e1000_dev_spec_ich8lan { #if defined(NAHUM6LP_HW) && defined(ULP_SUPPORT) enum e1000_ulp_state ulp_state; #endif /* NAHUM6LP_HW && ULP_SUPPORT */ -#ifdef C10_SUPPORT u16 lat_enc; u16 max_ltr_enc; -#endif + bool smbus_disable; }; struct e1000_dev_spec_82575 { diff --git a/lib/librte_pmd_e1000/e1000/e1000_i210.c b/lib/librte_pmd_e1000/e1000/e1000_i210.c index 722877a..1f5600d 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_i210.c +++ b/lib/librte_pmd_e1000/e1000/e1000_i210.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without @@ -635,7 +635,7 @@ s32 e1000_validate_nvm_checksum_i210(struct e1000_hw *hw) **/ s32 e1000_update_nvm_checksum_i210(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u16 checksum = 0; u16 i, nvm_data; @@ -714,7 +714,7 @@ bool e1000_get_flash_presence_i210(struct e1000_hw *hw) **/ s32 e1000_update_flash_i210(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; u32 flup; DEBUGFUNC("e1000_update_flash_i210"); @@ -770,7 +770,7 @@ s32 e1000_pool_flash_update_done_i210(struct e1000_hw *hw) **/ STATIC s32 e1000_init_nvm_params_i210(struct e1000_hw *hw) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; struct e1000_nvm_info *nvm = &hw->nvm; DEBUGFUNC("e1000_init_nvm_params_i210"); @@ -855,7 +855,7 @@ out: STATIC s32 __e1000_access_xmdio_reg(struct e1000_hw *hw, u16 address, u8 dev_addr, u16 *data, bool read) { - s32 ret_val = E1000_SUCCESS; + s32 ret_val; DEBUGFUNC("__e1000_access_xmdio_reg"); @@ -914,3 +914,87 @@ s32 e1000_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data) return __e1000_access_xmdio_reg(hw, addr, dev_addr, &data, false); } + +/** + * e1000_pll_workaround_i210 + * @hw: pointer to the HW structure + * + * Works around an errata in the PLL circuit where it occasionally + * provides the wrong clock frequency after power up. + **/ +STATIC s32 e1000_pll_workaround_i210(struct e1000_hw *hw) +{ + s32 ret_val; + u32 wuc, mdicnfg, ctrl_ext, reg_val; + u16 nvm_word, phy_word, pci_word, tmp_nvm; + int i; + + /* Get and set needed register values */ + wuc = E1000_READ_REG(hw, E1000_WUC); + mdicnfg = E1000_READ_REG(hw, E1000_MDICNFG); + reg_val = mdicnfg & ~E1000_MDICNFG_EXT_MDIO; + E1000_WRITE_REG(hw, E1000_MDICNFG, reg_val); + + /* Get data from NVM, or set default */ + ret_val = e1000_read_invm_word_i210(hw, E1000_INVM_AUTOLOAD, + &nvm_word); + if (ret_val != E1000_SUCCESS) + nvm_word = E1000_INVM_DEFAULT_AL; + tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; + for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { + /* check current state */ + hw->phy.ops.read_reg(hw, (E1000_PHY_PLL_FREQ_PAGE | + E1000_PHY_PLL_FREQ_REG), &phy_word); + if ((phy_word & E1000_PHY_PLL_UNCONF) + != E1000_PHY_PLL_UNCONF) { + ret_val = E1000_SUCCESS; + break; + } else { + ret_val = -E1000_ERR_PHY; + } + hw->phy.ops.reset(hw); + ctrl_ext = E1000_READ_REG(hw, E1000_CTRL_EXT); + ctrl_ext |= (E1000_CTRL_EXT_PHYPDEN | E1000_CTRL_EXT_SDLPE); + E1000_WRITE_REG(hw, E1000_CTRL_EXT, ctrl_ext); + + E1000_WRITE_REG(hw, E1000_WUC, 0); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (tmp_nvm << 16); + E1000_WRITE_REG(hw, E1000_EEARBC, reg_val); + + e1000_read_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + pci_word |= E1000_PCI_PMCSR_D3; + e1000_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + msec_delay(1); + pci_word &= ~E1000_PCI_PMCSR_D3; + e1000_write_pci_cfg(hw, E1000_PCI_PMCSR, &pci_word); + reg_val = (E1000_INVM_AUTOLOAD << 4) | (nvm_word << 16); + E1000_WRITE_REG(hw, E1000_EEARBC, reg_val); + + /* restore WUC register */ + E1000_WRITE_REG(hw, E1000_WUC, wuc); + } + /* restore MDICNFG setting */ + E1000_WRITE_REG(hw, E1000_MDICNFG, mdicnfg); + return ret_val; +} + +/** + * e1000_init_hw_i210 - Init hw for I210/I211 + * @hw: pointer to the HW structure + * + * Called to initialize hw for i210 hw family. 
+ **/ +s32 e1000_init_hw_i210(struct e1000_hw *hw) +{ + s32 ret_val; + + DEBUGFUNC("e1000_init_hw_i210"); + if ((hw->mac.type >= e1000_i210) && + !(e1000_get_flash_presence_i210(hw))) { + ret_val = e1000_pll_workaround_i210(hw); + if (ret_val != E1000_SUCCESS) + return ret_val; + } + ret_val = e1000_init_hw_82575(hw); + return ret_val; +} diff --git a/lib/librte_pmd_e1000/e1000/e1000_i210.h b/lib/librte_pmd_e1000/e1000/e1000_i210.h index 44de54b..f2bd43b 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_i210.h +++ b/lib/librte_pmd_e1000/e1000/e1000_i210.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -50,6 +50,7 @@ s32 e1000_read_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 *data); s32 e1000_write_xmdio_reg(struct e1000_hw *hw, u16 addr, u8 dev_addr, u16 data); +s32 e1000_init_hw_i210(struct e1000_hw *hw); #define E1000_STM_OPCODE 0xDB00 #define E1000_EEPROM_FLASH_SIZE_WORD 0x11 @@ -94,4 +95,16 @@ enum E1000_INVM_STRUCTURE_TYPE { #define NVM_INIT_CTRL_4_DEFAULT_I211 0x00C1 #define NVM_LED_1_CFG_DEFAULT_I211 0x0184 #define NVM_LED_0_2_CFG_DEFAULT_I211 0x200C + +/* PLL Defines */ +#define E1000_PCI_PMCSR 0x44 +#define E1000_PCI_PMCSR_D3 0x03 +#define E1000_MAX_PLL_TRIES 5 +#define E1000_PHY_PLL_UNCONF 0xFF +#define E1000_PHY_PLL_FREQ_PAGE 0xFC0000 +#define E1000_PHY_PLL_FREQ_REG 0x000E +#define E1000_INVM_DEFAULT_AL 0x202F +#define E1000_INVM_AUTOLOAD 0x0A +#define E1000_INVM_PLL_WO_VAL 0x0010 + #endif diff --git a/lib/librte_pmd_e1000/e1000/e1000_ich8lan.c b/lib/librte_pmd_e1000/e1000/e1000_ich8lan.c index d382f77..3b1627b 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_ich8lan.c +++ b/lib/librte_pmd_e1000/e1000/e1000_ich8lan.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -62,21 +62,11 @@ POSSIBILITY OF SUCH DAMAGE. 
* Ethernet Connection I217-V * Ethernet Connection I218-V * Ethernet Connection I218-LM -#ifdef NAHUM6_LPTH_I218_HW - * Ethernet Connection (2) I218-LM - * Ethernet Connection (2) I218-V -#endif -#ifdef NAHUM6_WPT_HW - * Ethernet Connection (3) I218-LM - * Ethernet Connection (3) I218-V -#endif */ #include "e1000_api.h" -#if defined(NAHUM6LP_HW) && defined(ULP_IN_D0_SUPPORT) -static s32 e1000_oem_bits_config_ich8lan(struct e1000_hw *hw, bool d0_state); -#endif /* NAHUM6LP_HW && ULP_SUPPORT */ +STATIC s32 e1000_oem_bits_config_ich8lan(struct e1000_hw *hw, bool d0_state); STATIC s32 e1000_acquire_swflag_ich8lan(struct e1000_hw *hw); STATIC void e1000_release_swflag_ich8lan(struct e1000_hw *hw); STATIC s32 e1000_acquire_nvm_ich8lan(struct e1000_hw *hw); @@ -125,19 +115,19 @@ STATIC s32 e1000_led_on_pchlan(struct e1000_hw *hw); STATIC s32 e1000_led_off_pchlan(struct e1000_hw *hw); STATIC void e1000_clear_hw_cntrs_ich8lan(struct e1000_hw *hw); STATIC s32 e1000_erase_flash_bank_ich8lan(struct e1000_hw *hw, u32 bank); -static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw); -static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw); +STATIC void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw); +STATIC s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw); STATIC s32 e1000_read_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, u8 *data); -static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, u8 size, u16 *data); STATIC s32 e1000_read_flash_word_ich8lan(struct e1000_hw *hw, u32 offset, u16 *data); -static s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw, +STATIC s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, u8 byte); STATIC s32 e1000_get_cfg_done_ich8lan(struct e1000_hw *hw); STATIC void e1000_power_down_phy_copper_ich8lan(struct e1000_hw *hw); -static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw); +STATIC s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw); STATIC s32 e1000_set_mdio_slow_mode_hv(struct e1000_hw *hw); STATIC s32 e1000_k1_workaround_lv(struct e1000_hw *hw); STATIC void e1000_gate_hw_phy_config_ich8lan(struct e1000_hw *hw, bool gate); @@ -193,7 +183,7 @@ union ich8_hws_flash_regacc { * * Assumes the sw/fw/hw semaphore is already acquired. **/ -static bool e1000_phy_is_accessible_pchlan(struct e1000_hw *hw) +STATIC bool e1000_phy_is_accessible_pchlan(struct e1000_hw *hw) { u16 phy_reg = 0; u32 phy_id = 0; @@ -261,7 +251,7 @@ out: * Toggling the LANPHYPC pin value fully power-cycles the PHY and is * used to reset the PHY to a quiescent state when necessary. **/ -void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw) +STATIC void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw) { u32 mac_reg; @@ -305,7 +295,7 @@ void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw) * Workarounds/flow necessary for PHY initialization during driver load * and resume paths. **/ -static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw) +STATIC s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw) { u32 mac_reg, fwsm = E1000_READ_REG(hw, E1000_FWSM); s32 ret_val; @@ -629,13 +619,12 @@ STATIC s32 e1000_init_nvm_params_ich8lan(struct e1000_hw *hw) DEBUGFUNC("e1000_init_nvm_params_ich8lan"); /* Can't read flash registers if the register set isn't mapped. 
*/ + nvm->type = e1000_nvm_flash_sw; if (!hw->flash_address) { DEBUGOUT("ERROR: Flash registers not mapped\n"); return -E1000_ERR_CONFIG; } - nvm->type = e1000_nvm_flash_sw; - gfpreg = E1000_READ_FLASH_REG(hw, ICH_FLASH_GFPREG); /* sector_X_addr is a "sector"-aligned address (4096 bytes) @@ -965,7 +954,7 @@ release: * Also, set appropriate Tx re-transmission timeouts for 10 and 100Half link * speeds in order to avoid Tx hangs. **/ -static s32 e1000_k1_workaround_lpt_lp(struct e1000_hw *hw, bool link) +STATIC s32 e1000_k1_workaround_lpt_lp(struct e1000_hw *hw, bool link) { u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6); u32 status = E1000_READ_REG(hw, E1000_STATUS); @@ -1043,40 +1032,6 @@ update_fextnvm6: return ret_val; } -#ifdef C10_SUPPORT -/** - * e1000_demote_ltr - Demote/Promote the LTR value - * @hw: pointer to the HW structure - * @demote: boolean value to control whether we are demoting or promoting - * the LTR value (promoting allows deeper C-States). - * @link: boolean value stating whether we currently have link - * - * Configure the LTRV register with the proper LTR value - **/ -void e1000_demote_ltr(struct e1000_hw *hw, bool demote, bool link) -{ - u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) | - link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND; - - if ((hw->device_id != E1000_DEV_ID_PCH_I218_LM3) && - (hw->device_id != E1000_DEV_ID_PCH_I218_V3)) - return; - - if (demote) { - reg |= hw->dev_spec.ich8lan.lat_enc | - (hw->dev_spec.ich8lan.lat_enc << - E1000_LTRV_NOSNOOP_SHIFT); - } else { - reg |= hw->dev_spec.ich8lan.max_ltr_enc | - (hw->dev_spec.ich8lan.max_ltr_enc << - E1000_LTRV_NOSNOOP_SHIFT); - } - - E1000_WRITE_REG(hw, E1000_LTRV, reg); - return; -} - -#endif /* C10_SUPPORT */ #if defined(NAHUM6LP_HW) && defined(ULP_SUPPORT) /** * e1000_enable_ulp_lpt_lp - configure Ultra Low Power mode for LynxPoint-LP @@ -1097,14 +1052,9 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) if ((hw->mac.type < e1000_pch_lpt) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_LM) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_V) || -#ifdef NAHUM6_LPTH_I218_HW - (hw->device_id == E1000_DEV_ID_PCH_I218_LM2) || - (hw->device_id == E1000_DEV_ID_PCH_I218_V2) || -#endif (hw->dev_spec.ich8lan.ulp_state == e1000_ulp_state_on)) return 0; -#ifdef ULP_IN_D0_SUPPORT if (!to_sx) { int i = 0; @@ -1126,7 +1076,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) i * 50); } -#endif /* ULP_IN_D0_SUPPORT */ if (E1000_READ_REG(hw, E1000_FWSM) & E1000_ICH_FWSM_FW_VALID) { /* Request ME configure ULP mode in the PHY */ mac_reg = E1000_READ_REG(hw, E1000_H2ME); @@ -1136,33 +1085,14 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) goto out; } -#ifndef ULP_IN_D0_SUPPORT - if (!to_sx) { - int i = 0; - - /* Poll up to 5 seconds for Cable Disconnected indication */ - while (!(E1000_READ_REG(hw, E1000_FEXT) & - E1000_FEXT_PHY_CABLE_DISCONNECTED)) { - /* Bail if link is re-acquired */ - if (E1000_READ_REG(hw, E1000_STATUS) & E1000_STATUS_LU) - return -E1000_ERR_PHY; - - if (i++ == 100) - break; - - msec_delay(50); - } - DEBUGOUT("CABLE_DISCONNECTED %s set after %dmsec\n", - (E1000_READ_REG(hw, E1000_FEXT) & - E1000_FEXT_PHY_CABLE_DISCONNECTED) ? 
"" : "not", - i * 50); - } - -#endif /* !ULP_IN_D0_SUPPORT */ ret_val = hw->phy.ops.acquire(hw); if (ret_val) goto out; + /* During S0 Idle keep the phy in PCI-E mode */ + if (hw->dev_spec.ich8lan.smbus_disable) + goto skip_smbus; + /* Force SMBus mode in PHY */ ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg); if (ret_val) @@ -1175,7 +1105,7 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) mac_reg |= E1000_CTRL_EXT_FORCE_SMBUS; E1000_WRITE_REG(hw, E1000_CTRL_EXT, mac_reg); -#ifdef ULP_IN_D0_SUPPORT +skip_smbus: if (!to_sx) { /* Change the 'Link Status Change' interrupt to trigger * on 'Cable Status Change' @@ -1190,7 +1120,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) phy_reg); } -#endif /* ULP_IN_D0_SUPPORT */ /* Set Inband ULP Exit, Reset to SMBus mode and * Disable SMBus Release on PERST# in PHY */ @@ -1217,7 +1146,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) /* Commit ULP changes in PHY by starting auto ULP configuration */ phy_reg |= I218_ULP_CONFIG1_START; e1000_write_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, phy_reg); -#ifdef ULP_IN_D0_SUPPORT if (!to_sx) { /* Disable Tx so that the MAC doesn't send any (buffered) @@ -1227,7 +1155,6 @@ s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx) mac_reg &= ~E1000_TCTL_EN; E1000_WRITE_REG(hw, E1000_TCTL, mac_reg); } -#endif release: hw->phy.ops.release(hw); out: @@ -1253,14 +1180,12 @@ out: * to disable ULP mode (force=false); otherwise, for example when unloading * the driver or during Sx->S0 transitions, this is called with force=true * to forcibly disable ULP. -#ifdef ULP_IN_D0_SUPPORT * When the cable is plugged in while the device is in D0, a Cable Status * Change interrupt is generated which causes this function to be called * to partially disable ULP mode and restart autonegotiation. This function * is then called again due to the resulting Link Status Change interrupt * to finish cleaning up after the ULP flow. 
-#endif */ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) { @@ -1272,10 +1197,6 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) if ((hw->mac.type < e1000_pch_lpt) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_LM) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_V) || -#ifdef NAHUM6_LPTH_I218_HW - (hw->device_id == E1000_DEV_ID_PCH_I218_LM2) || - (hw->device_id == E1000_DEV_ID_PCH_I218_V2) || -#endif (hw->dev_spec.ich8lan.ulp_state == e1000_ulp_state_off)) return 0; @@ -1309,7 +1230,6 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) mac_reg = E1000_READ_REG(hw, E1000_H2ME); mac_reg &= ~E1000_H2ME_ULP; E1000_WRITE_REG(hw, E1000_H2ME, mac_reg); -#ifdef ULP_IN_D0_SUPPORT /* Restore link speed advertisements and restart * Auto-negotiation @@ -1319,21 +1239,15 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) goto out; ret_val = e1000_oem_bits_config_ich8lan(hw, true); -#endif } goto out; } - if (force) - /* Toggle LANPHYPC Value bit */ - e1000_toggle_lanphypc_pch_lpt(hw); - ret_val = hw->phy.ops.acquire(hw); if (ret_val) goto out; -#ifdef ULP_IN_D0_SUPPORT /* Revert the change to the 'Link Status Change' * interrupt to trigger on 'Cable Status Change' */ @@ -1344,7 +1258,10 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) phy_reg &= ~E1000_KMRNCTRLSTA_OP_MODES_LSC2CSC; e1000_write_kmrn_reg_locked(hw, E1000_KMRNCTRLSTA_OP_MODES, phy_reg); -#endif + if (force) + /* Toggle LANPHYPC Value bit */ + e1000_toggle_lanphypc_pch_lpt(hw); + /* Unforce SMBus mode in PHY */ ret_val = e1000_read_phy_reg_hv_locked(hw, CV_SMB_CTRL, &phy_reg); if (ret_val) { @@ -1383,10 +1300,8 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) ret_val = e1000_read_phy_reg_hv_locked(hw, I218_ULP_CONFIG1, &phy_reg); if (ret_val) goto release; -#ifdef ULP_IN_D0_SUPPORT /* CSC interrupt received due to ULP Indication */ if ((phy_reg & I218_ULP_CONFIG1_IND) || force) { -#endif phy_reg &= ~(I218_ULP_CONFIG1_IND | I218_ULP_CONFIG1_STICKY_ULP | I218_ULP_CONFIG1_RESET_TO_SMBUS | @@ -1404,7 +1319,6 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) mac_reg &= ~E1000_FEXTNVM7_DISABLE_SMB_PERST; E1000_WRITE_REG(hw, E1000_FEXTNVM7, mac_reg); -#ifdef ULP_IN_D0_SUPPORT if (!force) { hw->phy.ops.release(hw); @@ -1431,7 +1345,6 @@ s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force) mac_reg |= E1000_TCTL_EN; E1000_WRITE_REG(hw, E1000_TCTL, mac_reg); -#endif release: hw->phy.ops.release(hw); if (force) { @@ -1456,15 +1369,11 @@ out: * change in link status has been detected, then we read the PHY registers * to get the current speed/duplex if link exists. **/ -static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) +STATIC s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) { struct e1000_mac_info *mac = &hw->mac; s32 ret_val; -#if defined(NAHUM6LP_HW) && defined(ULP_IN_D0_SUPPORT) bool link = false; -#else - bool link; -#endif u16 phy_reg; DEBUGFUNC("e1000_check_for_copper_link_ich8lan"); @@ -1477,11 +1386,9 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) if (!mac->get_link_status) return E1000_SUCCESS; -#if defined(NAHUM6LP_HW) && defined(ULP_IN_D0_SUPPORT) if ((hw->mac.type < e1000_pch_lpt) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_LM) || (hw->device_id == E1000_DEV_ID_PCH_LPT_I217_V)) { -#endif /* NAHUM6LP_HW && ULP_IN_D0_SUPPORT */ /* First we want to see if the MII Status Register reports * link. If so, then we want to get the current speed/duplex * of the PHY. 
@@ -1489,7 +1396,6 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) ret_val = e1000_phy_has_link_generic(hw, 1, 0, &link); if (ret_val) return ret_val; -#if defined(NAHUM6LP_HW) && defined(ULP_IN_D0_SUPPORT) } else { /* Check the MAC's STATUS register to determine link state * since the PHY could be inaccessible while in ULP mode. @@ -1503,7 +1409,6 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) if (ret_val) return ret_val; } -#endif /* NAHUM6LP_HW && ULP_IN_D0_SUPPORT */ if (hw->mac.type == e1000_pchlan) { ret_val = e1000_k1_gig_workaround_hv(hw, link); @@ -1511,14 +1416,17 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) return ret_val; } - /* When connected at 10Mbps half-duplex, 82579 parts are excessively + /* When connected at 10Mbps half-duplex, some parts are excessively * aggressive resulting in many collisions. To avoid this, increase * the IPG and reduce Rx latency in the PHY. */ - if ((hw->mac.type == e1000_pch2lan) && link) { + if (((hw->mac.type == e1000_pch2lan) || + (hw->mac.type == e1000_pch_lpt)) && link) { u32 reg; reg = E1000_READ_REG(hw, E1000_STATUS); if (!(reg & (E1000_STATUS_FD | E1000_STATUS_SPEED_MASK))) { + u16 emi_addr; + reg = E1000_READ_REG(hw, E1000_TIPG); reg &= ~E1000_TIPG_IPGT_MASK; reg |= 0xFF; @@ -1529,7 +1437,11 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) if (ret_val) return ret_val; - ret_val = e1000_write_emi_reg_locked(hw, I82579_RX_CONFIG, 0); + if (hw->mac.type == e1000_pch2lan) + emi_addr = I82579_RX_CONFIG; + else + emi_addr = I217_RX_CONFIG; + ret_val = e1000_write_emi_reg_locked(hw, emi_addr, 0); hw->phy.ops.release(hw); @@ -1538,17 +1450,6 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) } } -#if defined(NAHUM6LP_HW) && defined(NAHUM6_WPT_HW) - /* Work-around I218 hang issue */ - if ((hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || - (hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) || - (hw->device_id == E1000_DEV_ID_PCH_I218_LM3) || - (hw->device_id == E1000_DEV_ID_PCH_I218_V3)) { - ret_val = e1000_k1_workaround_lpt_lp(hw, link); - if (ret_val) - return ret_val; - } -#else /* Work-around I218 hang issue */ if ((hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || (hw->device_id == E1000_DEV_ID_PCH_LPTLP_I218_V)) { @@ -1557,7 +1458,6 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) return ret_val; } -#endif /* defined(NAHUM6LP_HW) && defined(NAHUM6_WPT_HW) */ /* Clear link partner's EEE ability */ hw->dev_spec.ich8lan.eee_lp_ability = 0; @@ -1795,9 +1695,9 @@ STATIC bool e1000_check_mng_mode_ich8lan(struct e1000_hw *hw) fwsm = E1000_READ_REG(hw, E1000_FWSM); - return ((fwsm & E1000_ICH_FWSM_FW_VALID) && - ((fwsm & E1000_FWSM_MODE_MASK) == - (E1000_ICH_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT))); + return (fwsm & E1000_ICH_FWSM_FW_VALID) && + ((fwsm & E1000_FWSM_MODE_MASK) == + (E1000_ICH_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT)); } /** @@ -1861,7 +1761,7 @@ STATIC void e1000_rar_set_pch2lan(struct e1000_hw *hw, u8 *addr, u32 index) /* RAR[1-6] are owned by manageability. Skip those and program the * next address into the SHRA register array. */ - if (index < (u32) (hw->mac.rar_entry_count - 6)) { + if (index < (u32) (hw->mac.rar_entry_count)) { s32 ret_val; ret_val = e1000_acquire_swflag_ich8lan(hw); @@ -2348,7 +2248,7 @@ s32 e1000_configure_k1_ich8lan(struct e1000_hw *hw, bool k1_enable) * collectively called OEM bits. 
The OEM Write Enable bit and SW Config bit * in NVM determines whether HW should configure LPLU and Gbe Disable. **/ -static s32 e1000_oem_bits_config_ich8lan(struct e1000_hw *hw, bool d0_state) +STATIC s32 e1000_oem_bits_config_ich8lan(struct e1000_hw *hw, bool d0_state) { s32 ret_val = 0; u32 mac_reg; @@ -2560,7 +2460,7 @@ release: } #ifndef CRC32_OS_SUPPORT -static u32 e1000_calc_rx_da_crc(u8 mac[]) +STATIC u32 e1000_calc_rx_da_crc(u8 mac[]) { u32 poly = 0xEDB88320; /* Polynomial for 802.3 CRC calculation */ u32 i, j, mask, crc; @@ -2796,55 +2696,47 @@ release: * e1000_k1_gig_workaround_lv - K1 Si workaround * @hw: pointer to the HW structure * - * Workaround to set the K1 beacon duration for 82579 parts + * Workaround to set the K1 beacon duration for 82579 parts in 10Mbps + * Disable K1 for 1000 and 100 speeds **/ STATIC s32 e1000_k1_workaround_lv(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; u16 status_reg = 0; - u32 mac_reg; - u16 phy_reg; DEBUGFUNC("e1000_k1_workaround_lv"); if (hw->mac.type != e1000_pch2lan) return E1000_SUCCESS; - /* Set K1 beacon duration based on 1Gbps speed or otherwise */ + /* Set K1 beacon duration based on 10Mbs speed */ ret_val = hw->phy.ops.read_reg(hw, HV_M_STATUS, &status_reg); if (ret_val) return ret_val; if ((status_reg & (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE)) == (HV_M_STATUS_LINK_UP | HV_M_STATUS_AUTONEG_COMPLETE)) { - mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM4); - mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK; - - ret_val = hw->phy.ops.read_reg(hw, I82579_LPI_CTRL, &phy_reg); - if (ret_val) - return ret_val; - - if (status_reg & HV_M_STATUS_SPEED_1000) { + if (status_reg & + (HV_M_STATUS_SPEED_1000 | HV_M_STATUS_SPEED_100)) { u16 pm_phy_reg; - mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_8USEC; - phy_reg &= ~I82579_LPI_CTRL_FORCE_PLL_LOCK_COUNT; - /* LV 1G Packet drop issue wa */ + /* LV 1G/100 Packet drop issue wa */ ret_val = hw->phy.ops.read_reg(hw, HV_PM_CTRL, &pm_phy_reg); if (ret_val) return ret_val; - pm_phy_reg &= ~HV_PM_CTRL_PLL_STOP_IN_K1_GIGA; + pm_phy_reg &= ~HV_PM_CTRL_K1_ENABLE; ret_val = hw->phy.ops.write_reg(hw, HV_PM_CTRL, pm_phy_reg); if (ret_val) return ret_val; } else { + u32 mac_reg; + mac_reg = E1000_READ_REG(hw, E1000_FEXTNVM4); + mac_reg &= ~E1000_FEXTNVM4_BEACON_DURATION_MASK; mac_reg |= E1000_FEXTNVM4_BEACON_DURATION_16USEC; - phy_reg |= I82579_LPI_CTRL_FORCE_PLL_LOCK_COUNT; + E1000_WRITE_REG(hw, E1000_FEXTNVM4, mac_reg); } - E1000_WRITE_REG(hw, E1000_FEXTNVM4, mac_reg); - ret_val = hw->phy.ops.write_reg(hw, I82579_LPI_CTRL, phy_reg); } return ret_val; @@ -3363,7 +3255,7 @@ out: * This function does initial flash setup so that a new read/write/erase cycle * can be started. **/ -static s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw) +STATIC s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw) { union ich8_hws_flash_status hsfsts; s32 ret_val = -E1000_ERR_NVM; @@ -3381,7 +3273,6 @@ static s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw) /* Clear FCERR and DAEL in hw status by writing 1 */ hsfsts.hsf_status.flcerr = 1; hsfsts.hsf_status.dael = 1; - E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFSTS, hsfsts.regval); /* Either we should have a hardware SPI cycle in progress @@ -3437,7 +3328,7 @@ static s32 e1000_flash_cycle_init_ich8lan(struct e1000_hw *hw) * * This function starts a flash cycle and waits for its completion. 
**/ -static s32 e1000_flash_cycle_ich8lan(struct e1000_hw *hw, u32 timeout) +STATIC s32 e1000_flash_cycle_ich8lan(struct e1000_hw *hw, u32 timeout) { union ich8_hws_flash_ctrl hsflctl; union ich8_hws_flash_status hsfsts; @@ -3448,6 +3339,7 @@ static s32 e1000_flash_cycle_ich8lan(struct e1000_hw *hw, u32 timeout) /* Start a cycle by writing 1 in Flash Cycle Go in Hw Flash Control */ hsflctl.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL); hsflctl.hsf_ctrl.flcgo = 1; + E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL, hsflctl.regval); /* wait till FDONE bit is set to 1 */ @@ -3502,6 +3394,7 @@ STATIC s32 e1000_read_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, u16 word = 0; ret_val = e1000_read_flash_data_ich8lan(hw, offset, 1, &word); + if (ret_val) return ret_val; @@ -3519,7 +3412,7 @@ STATIC s32 e1000_read_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, * * Reads a byte or word from the NVM using the flash access registers. **/ -static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, u8 size, u16 *data) { union ich8_hws_flash_status hsfsts; @@ -3533,7 +3426,6 @@ static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, if (size < 1 || size > 2 || offset > ICH_FLASH_LINEAR_ADDR_MASK) return -E1000_ERR_NVM; - flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) + hw->nvm.flash_base_addr); @@ -3543,8 +3435,8 @@ static s32 e1000_read_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, ret_val = e1000_flash_cycle_init_ich8lan(hw); if (ret_val != E1000_SUCCESS) break; - hsflctl.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL); + /* 0b/1b corresponds to 1 or 2 byte size, respectively. */ hsflctl.hsf_ctrl.fldbcount = size - 1; hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_READ; @@ -3841,7 +3733,7 @@ STATIC s32 e1000_validate_nvm_checksum_ich8lan(struct e1000_hw *hw) * * Writes one/two bytes to the NVM using the flash access registers. **/ -static s32 e1000_write_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, +STATIC s32 e1000_write_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, u8 size, u16 data) { union ich8_hws_flash_status hsfsts; @@ -3853,8 +3745,7 @@ static s32 e1000_write_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, DEBUGFUNC("e1000_write_ich8_data"); - if (size < 1 || size > 2 || data > size * 0xff || - offset > ICH_FLASH_LINEAR_ADDR_MASK) + if (size < 1 || size > 2 || offset > ICH_FLASH_LINEAR_ADDR_MASK) return -E1000_ERR_NVM; flash_linear_addr = ((ICH_FLASH_LINEAR_ADDR_MASK & offset) + @@ -3866,8 +3757,8 @@ static s32 e1000_write_flash_data_ich8lan(struct e1000_hw *hw, u32 offset, ret_val = e1000_flash_cycle_init_ich8lan(hw); if (ret_val != E1000_SUCCESS) break; - hsflctl.regval = E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL); + /* 0b/1b corresponds to 1 or 2 byte size, respectively. */ hsflctl.hsf_ctrl.fldbcount = size - 1; hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_WRITE; @@ -3936,7 +3827,7 @@ STATIC s32 e1000_write_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, * Writes a single byte to the NVM using the flash access registers. * Goes through a retry algorithm before giving up. 
**/ -static s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw, +STATIC s32 e1000_retry_write_flash_byte_ich8lan(struct e1000_hw *hw, u32 offset, u8 byte) { s32 ret_val; @@ -4035,8 +3926,9 @@ STATIC s32 e1000_erase_flash_bank_ich8lan(struct e1000_hw *hw, u32 bank) /* Write a value 11 (block Erase) in Flash * Cycle field in hw flash control */ - hsflctl.regval = E1000_READ_FLASH_REG16(hw, - ICH_FLASH_HSFCTL); + hsflctl.regval = + E1000_READ_FLASH_REG16(hw, ICH_FLASH_HSFCTL); + hsflctl.hsf_ctrl.flcycle = ICH_CYCLE_ERASE; E1000_WRITE_FLASH_REG16(hw, ICH_FLASH_HSFCTL, hsflctl.regval); @@ -4411,7 +4303,7 @@ STATIC s32 e1000_init_hw_ich8lan(struct e1000_hw *hw) * Sets/Clears required hardware bits necessary for correctly setting up the * hardware for transmit and receive. **/ -static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw) +STATIC void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw) { u32 reg; @@ -4704,7 +4596,7 @@ STATIC s32 e1000_get_link_up_info_ich8lan(struct e1000_hw *hw, u16 *speed, * 5) repeat up to 10 times * Note: this is only called for IGP3 copper when speed is 1gb. **/ -static s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw) +STATIC s32 e1000_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw) { struct e1000_dev_spec_ich8lan *dev_spec = &hw->dev_spec.ich8lan; u32 phy_ctrl; @@ -4901,17 +4793,6 @@ void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw) if (hw->phy.type == e1000_phy_i217) { u16 phy_reg, device_id = hw->device_id; -#ifdef NAHUM6_WPT_HW - if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || - (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V) || - (device_id == E1000_DEV_ID_PCH_I218_LM3) || - (device_id == E1000_DEV_ID_PCH_I218_V3)) { - u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6); - - E1000_WRITE_REG(hw, E1000_FEXTNVM6, - fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); - } -#else if ((device_id == E1000_DEV_ID_PCH_LPTLP_I218_LM) || (device_id == E1000_DEV_ID_PCH_LPTLP_I218_V)) { u32 fextnvm6 = E1000_READ_REG(hw, E1000_FEXTNVM6); @@ -4919,7 +4800,6 @@ void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw) E1000_WRITE_REG(hw, E1000_FEXTNVM6, fextnvm6 & ~E1000_FEXTNVM6_REQ_PLL_CLK); } -#endif ret_val = hw->phy.ops.acquire(hw); if (ret_val) @@ -4966,7 +4846,7 @@ void e1000_suspend_workarounds_ich8lan(struct e1000_hw *hw) * The SMBus release must also be disabled on LCD reset. */ if (!(E1000_READ_REG(hw, E1000_FWSM) & - E1000_ICH_FWSM_FW_VALID)) { + E1000_ICH_FWSM_FW_VALID)) { /* Enable proxy to reset only on power good. */ hw->phy.ops.read_reg_locked(hw, I217_PROXY_CTRL, &phy_reg); diff --git a/lib/librte_pmd_e1000/e1000/e1000_ich8lan.h b/lib/librte_pmd_e1000/e1000/e1000_ich8lan.h index eb0580f..8c5e9c3 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_ich8lan.h +++ b/lib/librte_pmd_e1000/e1000/e1000_ich8lan.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -122,7 +122,7 @@ POSSIBILITY OF SUCH DAMAGE. #define PCIE_ICH8_SNOOP_ALL PCIE_NO_SNOOP_ALL #define E1000_ICH_RAR_ENTRIES 7 -#define E1000_PCH2_RAR_ENTRIES 11 /* RAR[0-6], SHRA[0-3] */ +#define E1000_PCH2_RAR_ENTRIES 5 /* RAR[0], SHRA[0-3] */ #define E1000_PCH_LPT_RAR_ENTRIES 12 /* RAR[0], SHRA[0-10] */ #define PHY_PAGE_SHIFT 5 @@ -231,9 +231,7 @@ POSSIBILITY OF SUCH DAMAGE. 
/* PHY Power Management Control */ #define HV_PM_CTRL PHY_REG(770, 17) #define HV_PM_CTRL_PLL_STOP_IN_K1_GIGA 0x100 -#if !defined(EXTERNAL_RELEASE) || (defined(NAHUM6LP_HW) && defined(ULP_SUPPORT)) #define HV_PM_CTRL_K1_ENABLE 0x4000 -#endif /* !EXTERNAL_RELEASE || (NAHUM6LP_HW && ULP_SUPPORT) */ #define SW_FLAG_TIMEOUT 1000 /* SW Semaphore flag timeout in ms */ @@ -251,7 +249,6 @@ POSSIBILITY OF SUCH DAMAGE. #define I82579_LPI_CTRL_100_ENABLE 0x2000 #define I82579_LPI_CTRL_1000_ENABLE 0x4000 #define I82579_LPI_CTRL_ENABLE_MASK 0x6000 -#define I82579_LPI_CTRL_FORCE_PLL_LOCK_COUNT 0x80 /* 82579 DFT Control */ #define I82579_DFT_CTRL PHY_REG(769, 20) @@ -275,6 +272,7 @@ POSSIBILITY OF SUCH DAMAGE. #define I217_EEE_CAPABILITY 0x8000 /* IEEE MMD Register 3.20 */ #define I217_EEE_ADVERTISEMENT 0x8001 /* IEEE MMD Register 7.60 */ #define I217_EEE_LP_ABILITY 0x8002 /* IEEE MMD Register 7.61 */ +#define I217_RX_CONFIG 0xB20C /* Receive configuration */ #define E1000_EEE_RX_LPI_RCVD 0x0400 /* Tx LP idle received */ #define E1000_EEE_TX_LPI_RCVD 0x0800 /* Rx LP idle received */ @@ -310,9 +308,6 @@ s32 e1000_set_eee_pchlan(struct e1000_hw *hw); #if defined(NAHUM6LP_HW) && defined(ULP_SUPPORT) s32 e1000_enable_ulp_lpt_lp(struct e1000_hw *hw, bool to_sx); s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force); -void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw); #endif /* NAHUM6LP_HW && ULP_SUPPORT */ #endif /* _E1000_ICH8LAN_H_ */ -#ifdef C10_SUPPORT void e1000_demote_ltr(struct e1000_hw *hw, bool demote, bool link); -#endif /* C10_SUPPORT */ diff --git a/lib/librte_pmd_e1000/e1000/e1000_mac.c b/lib/librte_pmd_e1000/e1000/e1000_mac.c index 1028fd2..c8ec049 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_mac.c +++ b/lib/librte_pmd_e1000/e1000/e1000_mac.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -2091,7 +2091,8 @@ s32 e1000_disable_pcie_master_generic(struct e1000_hw *hw) while (timeout) { if (!(E1000_READ_REG(hw, E1000_STATUS) & - E1000_STATUS_GIO_MASTER_ENABLE)) + E1000_STATUS_GIO_MASTER_ENABLE) || + E1000_REMOVED(hw->hw_addr)) break; usec_delay(100); timeout--; diff --git a/lib/librte_pmd_e1000/e1000/e1000_mac.h b/lib/librte_pmd_e1000/e1000/e1000_mac.h index c4c6b04..5a7ce4a 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_mac.h +++ b/lib/librte_pmd_e1000/e1000/e1000_mac.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,6 +35,9 @@ POSSIBILITY OF SUCH DAMAGE. 
#define _E1000_MAC_H_ void e1000_init_mac_ops_generic(struct e1000_hw *hw); +#ifndef E1000_REMOVED +#define E1000_REMOVED(a) (0) +#endif /* E1000_REMOVED */ void e1000_null_mac_generic(struct e1000_hw *hw); s32 e1000_null_ops_generic(struct e1000_hw *hw); s32 e1000_null_link_info(struct e1000_hw *hw, u16 *s, u16 *d); diff --git a/lib/librte_pmd_e1000/e1000/e1000_manage.c b/lib/librte_pmd_e1000/e1000/e1000_manage.c index b2ca174..30db892 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_manage.c +++ b/lib/librte_pmd_e1000/e1000/e1000_manage.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_manage.h b/lib/librte_pmd_e1000/e1000/e1000_manage.h index 38a97b2..e6f92c0 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_manage.h +++ b/lib/librte_pmd_e1000/e1000/e1000_manage.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_mbx.c b/lib/librte_pmd_e1000/e1000/e1000_mbx.c index 5c45431..fd85301 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_mbx.c +++ b/lib/librte_pmd_e1000/e1000/e1000_mbx.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_mbx.h b/lib/librte_pmd_e1000/e1000/e1000_mbx.h index 3bb4482..e9524fc 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_mbx.h +++ b/lib/librte_pmd_e1000/e1000/e1000_mbx.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_nvm.c b/lib/librte_pmd_e1000/e1000/e1000_nvm.c index 37295e9..8be437a 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_nvm.c +++ b/lib/librte_pmd_e1000/e1000/e1000_nvm.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -114,7 +114,7 @@ s32 e1000_null_write_nvm(struct e1000_hw E1000_UNUSEDARG *hw, * * Enable/Raise the EEPROM clock bit. **/ -static void e1000_raise_eec_clk(struct e1000_hw *hw, u32 *eecd) +STATIC void e1000_raise_eec_clk(struct e1000_hw *hw, u32 *eecd) { *eecd = *eecd | E1000_EECD_SK; E1000_WRITE_REG(hw, E1000_EECD, *eecd); @@ -129,7 +129,7 @@ static void e1000_raise_eec_clk(struct e1000_hw *hw, u32 *eecd) * * Clear/Lower the EEPROM clock bit. 
**/ -static void e1000_lower_eec_clk(struct e1000_hw *hw, u32 *eecd) +STATIC void e1000_lower_eec_clk(struct e1000_hw *hw, u32 *eecd) { *eecd = *eecd & ~E1000_EECD_SK; E1000_WRITE_REG(hw, E1000_EECD, *eecd); @@ -147,7 +147,7 @@ static void e1000_lower_eec_clk(struct e1000_hw *hw, u32 *eecd) * "data" parameter will be shifted out to the EEPROM one bit at a time. * In order to do this, "data" must be broken down into bits. **/ -static void e1000_shift_out_eec_bits(struct e1000_hw *hw, u16 data, u16 count) +STATIC void e1000_shift_out_eec_bits(struct e1000_hw *hw, u16 data, u16 count) { struct e1000_nvm_info *nvm = &hw->nvm; u32 eecd = E1000_READ_REG(hw, E1000_EECD); @@ -194,7 +194,7 @@ static void e1000_shift_out_eec_bits(struct e1000_hw *hw, u16 data, u16 count) * "DO" bit. During this "shifting in" process the data in "DI" bit should * always be clear. **/ -static u16 e1000_shift_in_eec_bits(struct e1000_hw *hw, u16 count) +STATIC u16 e1000_shift_in_eec_bits(struct e1000_hw *hw, u16 count) { u32 eecd; u32 i; @@ -295,7 +295,7 @@ s32 e1000_acquire_nvm_generic(struct e1000_hw *hw) * * Return the EEPROM to a standby state. **/ -static void e1000_standby_nvm(struct e1000_hw *hw) +STATIC void e1000_standby_nvm(struct e1000_hw *hw) { struct e1000_nvm_info *nvm = &hw->nvm; u32 eecd = E1000_READ_REG(hw, E1000_EECD); @@ -381,7 +381,7 @@ void e1000_release_nvm_generic(struct e1000_hw *hw) * * Setups the EEPROM for reading and writing. **/ -static s32 e1000_ready_nvm_eeprom(struct e1000_hw *hw) +STATIC s32 e1000_ready_nvm_eeprom(struct e1000_hw *hw) { struct e1000_nvm_info *nvm = &hw->nvm; u32 eecd = E1000_READ_REG(hw, E1000_EECD); @@ -1023,7 +1023,7 @@ s32 e1000_read_pba_raw(struct e1000_hw *hw, u16 *eeprom_buf, return ret_val; } else { if (eeprom_buf_size > (u32)(pba->word[1] + - pba->pba_block[0])) { + pba_block_size)) { memcpy(pba->pba_block, &eeprom_buf[pba->word[1]], pba_block_size * sizeof(u16)); diff --git a/lib/librte_pmd_e1000/e1000/e1000_nvm.h b/lib/librte_pmd_e1000/e1000/e1000_nvm.h index 10f0ae8..dee1f62 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_nvm.h +++ b/lib/librte_pmd_e1000/e1000/e1000_nvm.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_phy.c b/lib/librte_pmd_e1000/e1000/e1000_phy.c index cc454d9..e214f17 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_phy.c +++ b/lib/librte_pmd_e1000/e1000/e1000_phy.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -33,7 +33,7 @@ POSSIBILITY OF SUCH DAMAGE. 
#include "e1000_api.h" -static s32 e1000_wait_autoneg(struct e1000_hw *hw); +STATIC s32 e1000_wait_autoneg(struct e1000_hw *hw); STATIC s32 e1000_access_phy_wakeup_reg_bm(struct e1000_hw *hw, u32 offset, u16 *data, bool read, bool page_set); STATIC u32 e1000_get_phy_addr_for_hv_page(u32 page); @@ -41,13 +41,13 @@ STATIC s32 e1000_access_phy_debug_regs_hv(struct e1000_hw *hw, u32 offset, u16 *data, bool read); /* Cable length tables */ -static const u16 e1000_m88_cable_length_table[] = { +STATIC const u16 e1000_m88_cable_length_table[] = { 0, 50, 80, 110, 140, 140, E1000_CABLE_LENGTH_UNDEFINED }; #define M88E1000_CABLE_LENGTH_TABLE_SIZE \ (sizeof(e1000_m88_cable_length_table) / \ sizeof(e1000_m88_cable_length_table[0])) -static const u16 e1000_igp_2_cable_length_table[] = { +STATIC const u16 e1000_igp_2_cable_length_table[] = { 0, 0, 0, 0, 0, 0, 0, 0, 3, 5, 8, 11, 13, 16, 18, 21, 0, 0, 0, 3, 6, 10, 13, 16, 19, 23, 26, 29, 32, 35, 38, 41, 6, 10, 14, 18, 22, 26, 30, 33, 37, 41, 44, 48, 51, 54, 58, 61, 21, 26, 31, 35, 40, @@ -733,7 +733,7 @@ s32 e1000_set_page_igp(struct e1000_hw *hw, u16 page) * and stores the retrieved information in data. Release any acquired * semaphores before exiting. **/ -static s32 __e1000_read_phy_reg_igp(struct e1000_hw *hw, u32 offset, u16 *data, +STATIC s32 __e1000_read_phy_reg_igp(struct e1000_hw *hw, u32 offset, u16 *data, bool locked) { s32 ret_val = E1000_SUCCESS; @@ -802,7 +802,7 @@ s32 e1000_read_phy_reg_igp_locked(struct e1000_hw *hw, u32 offset, u16 *data) * Acquires semaphore, if necessary, then writes the data to PHY register * at the offset. Release any acquired semaphores before exiting. **/ -static s32 __e1000_write_phy_reg_igp(struct e1000_hw *hw, u32 offset, u16 data, +STATIC s32 __e1000_write_phy_reg_igp(struct e1000_hw *hw, u32 offset, u16 data, bool locked) { s32 ret_val = E1000_SUCCESS; @@ -871,7 +871,7 @@ s32 e1000_write_phy_reg_igp_locked(struct e1000_hw *hw, u32 offset, u16 data) * using the kumeran interface. The information retrieved is stored in data. * Release any acquired semaphores before exiting. **/ -static s32 __e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data, +STATIC s32 __e1000_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data, bool locked) { u32 kmrnctrlsta; @@ -946,7 +946,7 @@ s32 e1000_read_kmrn_reg_locked(struct e1000_hw *hw, u32 offset, u16 *data) * at the offset using the kumeran interface. Release any acquired semaphores * before exiting. **/ -static s32 __e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data, +STATIC s32 __e1000_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data, bool locked) { u32 kmrnctrlsta; @@ -2313,7 +2313,7 @@ s32 e1000_check_polarity_ife(struct e1000_hw *hw) * Waits for auto-negotiation to complete or for the auto-negotiation time * limit to expire, which ever happens first. **/ -static s32 e1000_wait_autoneg(struct e1000_hw *hw) +STATIC s32 e1000_wait_autoneg(struct e1000_hw *hw) { s32 ret_val = E1000_SUCCESS; u16 i, phy_status; @@ -2368,19 +2368,23 @@ s32 e1000_phy_has_link_generic(struct e1000_hw *hw, u32 iterations, * it across the board. */ ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status); - if (ret_val) + if (ret_val) { /* If the first read fails, another entity may have * ownership of the resources, wait and try again to * see if they have relinquished the resources yet. 
*/ - usec_delay(usec_interval); + if (usec_interval >= 1000) + msec_delay(usec_interval/1000); + else + usec_delay(usec_interval); + } ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status); if (ret_val) break; if (phy_status & MII_SR_LINK_STATUS) break; if (usec_interval >= 1000) - msec_delay_irq(usec_interval/1000); + msec_delay(usec_interval/1000); else usec_delay(usec_interval); } @@ -3092,7 +3096,7 @@ s32 e1000_determine_phy_address(struct e1000_hw *hw) * * Returns the phy address for the page requested. **/ -static u32 e1000_get_phy_addr_for_bm_page(u32 page, u32 reg) +STATIC u32 e1000_get_phy_addr_for_bm_page(u32 page, u32 reg) { u32 phy_addr = 2; @@ -3544,7 +3548,7 @@ void e1000_power_down_phy_copper(struct e1000_hw *hw) * and stores the retrieved information in data. Release any acquired * semaphore before exiting. **/ -static s32 __e1000_read_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 *data, +STATIC s32 __e1000_read_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 *data, bool locked, bool page_set) { s32 ret_val; @@ -3654,7 +3658,7 @@ s32 e1000_read_phy_reg_page_hv(struct e1000_hw *hw, u32 offset, u16 *data) * Acquires semaphore, if necessary, then writes the data to PHY register * at the offset. Release any acquired semaphores before exiting. **/ -static s32 __e1000_write_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 data, +STATIC s32 __e1000_write_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 data, bool locked, bool page_set) { s32 ret_val; @@ -4126,7 +4130,7 @@ s32 e1000_read_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 *data) { u32 mphy_ctrl = 0; bool locked = false; - bool ready = false; + bool ready; DEBUGFUNC("e1000_read_phy_reg_mphy"); @@ -4188,7 +4192,7 @@ s32 e1000_write_phy_reg_mphy(struct e1000_hw *hw, u32 address, u32 data, { u32 mphy_ctrl = 0; bool locked = false; - bool ready = false; + bool ready; DEBUGFUNC("e1000_write_phy_reg_mphy"); diff --git a/lib/librte_pmd_e1000/e1000/e1000_phy.h b/lib/librte_pmd_e1000/e1000/e1000_phy.h index 8295dc0..73a9b1f 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_phy.h +++ b/lib/librte_pmd_e1000/e1000/e1000_phy.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without @@ -225,6 +225,7 @@ bool e1000_is_mphy_ready(struct e1000_hw *hw); #define HV_M_STATUS_AUTONEG_COMPLETE 0x1000 #define HV_M_STATUS_SPEED_MASK 0x0300 #define HV_M_STATUS_SPEED_1000 0x0200 +#define HV_M_STATUS_SPEED_100 0x0100 #define HV_M_STATUS_LINK_UP 0x0040 #define IGP01E1000_PHY_PCS_INIT_REG 0x00B4 @@ -274,10 +275,8 @@ bool e1000_is_mphy_ready(struct e1000_hw *hw); #define E1000_KMRNCTRLSTA_K1_CONFIG 0x7 #define E1000_KMRNCTRLSTA_K1_ENABLE 0x0002 /* enable K1 */ #define E1000_KMRNCTRLSTA_HD_CTRL 0x10 /* Kumeran HD Control */ -#if !defined(EXTERNAL_RELEASE) || (defined(NAHUM6LP_HW) && defined(ULP_IN_D0_SUPPORT)) #define E1000_KMRNCTRLSTA_OP_MODES 0x1F /* Kumeran Modes of Operation */ #define E1000_KMRNCTRLSTA_OP_MODES_LSC2CSC 0x0002 /* change LSC to CSC */ -#endif /* !EXTERNAL_RELEASE || (NAHUM6LP_HW && ULP_IN_D0_SUPPORT) */ #define IFE_PHY_EXTENDED_STATUS_CONTROL 0x10 #define IFE_PHY_SPECIAL_CONTROL 0x11 /* 100BaseTx PHY Special Ctrl */ diff --git a/lib/librte_pmd_e1000/e1000/e1000_regs.h b/lib/librte_pmd_e1000/e1000/e1000_regs.h index 81b34e9..bde2a08 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_regs.h +++ b/lib/librte_pmd_e1000/e1000/e1000_regs.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_e1000/e1000/e1000_vf.c b/lib/librte_pmd_e1000/e1000/e1000_vf.c index a4a96fd..778561e 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_vf.c +++ b/lib/librte_pmd_e1000/e1000/e1000_vf.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -378,7 +378,7 @@ STATIC u32 e1000_hash_mc_addr_vf(struct e1000_hw *hw, u8 *mc_addr) return hash_value; } -static void e1000_write_msg_read_ack(struct e1000_hw *hw, +STATIC void e1000_write_msg_read_ack(struct e1000_hw *hw, u32 *msg, u16 size) { struct e1000_mbx_info *mbx = &hw->mbx; diff --git a/lib/librte_pmd_e1000/e1000/e1000_vf.h b/lib/librte_pmd_e1000/e1000/e1000_vf.h index 56cf7b3..6d5bd99 100644 --- a/lib/librte_pmd_e1000/e1000/e1000_vf.h +++ b/lib/librte_pmd_e1000/e1000/e1000_vf.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_ixgbe/ixgbe/README b/lib/librte_pmd_ixgbe/ixgbe/README index 6cc441c..fc71e85 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/README +++ b/lib/librte_pmd_ixgbe/ixgbe/README @@ -34,7 +34,7 @@ Intel® IXGBE driver =================== This directory contains source code of FreeBSD ixgbe driver of version -cid-10g-shared-code.2012.11.09 released by LAD. The sub-directory of lad/ +cid-10g-shared-code.2014.03.13 released by LAD. The sub-directory of lad/ contains the original source package. 
This driver is valid for the product(s) listed below diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.c index a9d1b9d..ee2217d 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -36,7 +36,14 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_api.h" #include "ixgbe_common.h" #include "ixgbe_phy.h" -#ident "$Id: ixgbe_82598.c,v 1.194 2012/03/28 00:54:08 jtkirshe Exp $" +#ident "$Id: ixgbe_82598.c,v 1.199 2013/05/22 23:26:31 jtkirshe Exp $" + +#define IXGBE_82598_MAX_TX_QUEUES 32 +#define IXGBE_82598_MAX_RX_QUEUES 64 +#define IXGBE_82598_RAR_ENTRIES 16 +#define IXGBE_82598_MC_TBL_SIZE 128 +#define IXGBE_82598_VFT_TBL_SIZE 128 +#define IXGBE_82598_RX_PB_SIZE 512 STATIC s32 ixgbe_get_link_capabilities_82598(struct ixgbe_hw *hw, ixgbe_link_speed *speed, @@ -49,18 +56,17 @@ STATIC s32 ixgbe_check_mac_link_82598(struct ixgbe_hw *hw, bool link_up_wait_to_complete); STATIC s32 ixgbe_setup_mac_link_82598(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); STATIC s32 ixgbe_setup_copper_link_82598(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); STATIC s32 ixgbe_reset_hw_82598(struct ixgbe_hw *hw); STATIC s32 ixgbe_clear_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq); STATIC s32 ixgbe_clear_vfta_82598(struct ixgbe_hw *hw); -static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, +STATIC void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, u32 headroom, int strategy); - +STATIC s32 ixgbe_read_i2c_sff8472_82598(struct ixgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data); /** * ixgbe_set_pcie_completion_timeout - set pci-e completion timeout * @hw: pointer to the HW structure @@ -134,6 +140,7 @@ s32 ixgbe_init_ops_82598(struct ixgbe_hw *hw) mac->ops.read_analog_reg8 = &ixgbe_read_analog_reg8_82598; mac->ops.write_analog_reg8 = &ixgbe_write_analog_reg8_82598; mac->ops.set_lan_id = &ixgbe_set_lan_id_multi_port_pcie_82598; + mac->ops.enable_rx_dma = &ixgbe_enable_rx_dma_82598; /* RAR, Multicast, VLAN */ mac->ops.set_vmdq = &ixgbe_set_vmdq_82598; @@ -145,16 +152,17 @@ s32 ixgbe_init_ops_82598(struct ixgbe_hw *hw) /* Flow Control */ mac->ops.fc_enable = &ixgbe_fc_enable_82598; - mac->mcft_size = 128; - mac->vft_size = 128; - mac->num_rar_entries = 16; - mac->rx_pb_size = 512; - mac->max_tx_queues = 32; - mac->max_rx_queues = 64; + mac->mcft_size = IXGBE_82598_MC_TBL_SIZE; + mac->vft_size = IXGBE_82598_VFT_TBL_SIZE; + mac->num_rar_entries = IXGBE_82598_RAR_ENTRIES; + mac->rx_pb_size = IXGBE_82598_RX_PB_SIZE; + mac->max_rx_queues = IXGBE_82598_MAX_RX_QUEUES; + mac->max_tx_queues = IXGBE_82598_MAX_TX_QUEUES; mac->max_msix_vectors = ixgbe_get_pcie_msix_count_generic(hw); /* SFP+ Module */ phy->ops.read_i2c_eeprom = &ixgbe_read_i2c_eeprom_82598; + phy->ops.read_i2c_sff8472 = &ixgbe_read_i2c_sff8472_82598; /* Link */ mac->ops.check_link = &ixgbe_check_mac_link_82598; @@ -166,6 +174,8 @@ s32 ixgbe_init_ops_82598(struct ixgbe_hw *hw) /* Manageability interface */ mac->ops.set_fw_drv_ver = NULL; + mac->ops.get_rtrup2tc = NULL; + return ret_val; } @@ -712,15 +722,15 @@ out: * ixgbe_setup_mac_link_82598 - Set MAC link speed * @hw: pointer to hardware 
structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true when waiting for completion is needed * * Set the link speed in the AUTOC register and restarts link. **/ STATIC s32 ixgbe_setup_mac_link_82598(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete) { + bool autoneg = false; s32 status = IXGBE_SUCCESS; ixgbe_link_speed link_capabilities = IXGBE_LINK_SPEED_UNKNOWN; u32 curr_autoc = IXGBE_READ_REG(hw, IXGBE_AUTOC); @@ -766,14 +776,12 @@ STATIC s32 ixgbe_setup_mac_link_82598(struct ixgbe_hw *hw, * ixgbe_setup_copper_link_82598 - Set the PHY autoneg advertised field * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true if waiting is needed to complete * * Sets the link speed in the AUTOC register in the MAC and restarts link. **/ STATIC s32 ixgbe_setup_copper_link_82598(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete) { s32 status; @@ -781,7 +789,7 @@ STATIC s32 ixgbe_setup_copper_link_82598(struct ixgbe_hw *hw, DEBUGFUNC("ixgbe_setup_copper_link_82598"); /* Setup the PHY according to input speed */ - status = hw->phy.ops.setup_link_speed(hw, speed, autoneg, + status = hw->phy.ops.setup_link_speed(hw, speed, autoneg_wait_to_complete); /* Set up MAC */ ixgbe_start_mac_link_82598(hw, autoneg_wait_to_complete); @@ -964,6 +972,7 @@ STATIC s32 ixgbe_clear_vmdq_82598(struct ixgbe_hw *hw, u32 rar, u32 vmdq) u32 rar_high; u32 rar_entries = hw->mac.num_rar_entries; + UNREFERENCED_1PARAMETER(vmdq); /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { @@ -1101,23 +1110,33 @@ s32 ixgbe_write_analog_reg8_82598(struct ixgbe_hw *hw, u32 reg, u8 val) } /** - * ixgbe_read_i2c_eeprom_82598 - Reads 8 bit word over I2C interface. + * ixgbe_read_i2c_phy_82598 - Reads 8 bit word over I2C interface. * @hw: pointer to hardware structure - * @byte_offset: EEPROM byte offset to read + * @dev_addr: address to read from + * @byte_offset: byte offset to read from dev_addr * @eeprom_data: value read * * Performs 8 byte read operation to SFP module's EEPROM over I2C interface. **/ -s32 ixgbe_read_i2c_eeprom_82598(struct ixgbe_hw *hw, u8 byte_offset, - u8 *eeprom_data) +STATIC s32 ixgbe_read_i2c_phy_82598(struct ixgbe_hw *hw, u8 dev_addr, + u8 byte_offset, u8 *eeprom_data) { s32 status = IXGBE_SUCCESS; u16 sfp_addr = 0; u16 sfp_data = 0; u16 sfp_stat = 0; + u16 gssr; u32 i; - DEBUGFUNC("ixgbe_read_i2c_eeprom_82598"); + DEBUGFUNC("ixgbe_read_i2c_phy_82598"); + + if (IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_LAN_ID_1) + gssr = IXGBE_GSSR_PHY1_SM; + else + gssr = IXGBE_GSSR_PHY0_SM; + + if (hw->mac.ops.acquire_swfw_sync(hw, gssr) != IXGBE_SUCCESS) + return IXGBE_ERR_SWFW_SYNC; if (hw->phy.type == ixgbe_phy_nl) { /* @@ -1125,19 +1144,19 @@ s32 ixgbe_read_i2c_eeprom_82598(struct ixgbe_hw *hw, u8 byte_offset, * 0xC30D. These registers are used to talk to the SFP+ * module's EEPROM through the SDA/SCL (I2C) interface. 
*/ - sfp_addr = (IXGBE_I2C_EEPROM_DEV_ADDR << 8) + byte_offset; + sfp_addr = (dev_addr << 8) + byte_offset; sfp_addr = (sfp_addr | IXGBE_I2C_EEPROM_READ_MASK); - hw->phy.ops.write_reg(hw, - IXGBE_MDIO_PMA_PMD_SDA_SCL_ADDR, - IXGBE_MDIO_PMA_PMD_DEV_TYPE, - sfp_addr); + hw->phy.ops.write_reg_mdi(hw, + IXGBE_MDIO_PMA_PMD_SDA_SCL_ADDR, + IXGBE_MDIO_PMA_PMD_DEV_TYPE, + sfp_addr); /* Poll status */ for (i = 0; i < 100; i++) { - hw->phy.ops.read_reg(hw, - IXGBE_MDIO_PMA_PMD_SDA_SCL_STAT, - IXGBE_MDIO_PMA_PMD_DEV_TYPE, - &sfp_stat); + hw->phy.ops.read_reg_mdi(hw, + IXGBE_MDIO_PMA_PMD_SDA_SCL_STAT, + IXGBE_MDIO_PMA_PMD_DEV_TYPE, + &sfp_stat); sfp_stat = sfp_stat & IXGBE_I2C_EEPROM_STATUS_MASK; if (sfp_stat != IXGBE_I2C_EEPROM_STATUS_IN_PROGRESS) break; @@ -1151,20 +1170,50 @@ s32 ixgbe_read_i2c_eeprom_82598(struct ixgbe_hw *hw, u8 byte_offset, } /* Read data */ - hw->phy.ops.read_reg(hw, IXGBE_MDIO_PMA_PMD_SDA_SCL_DATA, - IXGBE_MDIO_PMA_PMD_DEV_TYPE, &sfp_data); + hw->phy.ops.read_reg_mdi(hw, IXGBE_MDIO_PMA_PMD_SDA_SCL_DATA, + IXGBE_MDIO_PMA_PMD_DEV_TYPE, &sfp_data); *eeprom_data = (u8)(sfp_data >> 8); } else { status = IXGBE_ERR_PHY; - goto out; } out: + hw->mac.ops.release_swfw_sync(hw, gssr); return status; } /** + * ixgbe_read_i2c_eeprom_82598 - Reads 8 bit word over I2C interface. + * @hw: pointer to hardware structure + * @byte_offset: EEPROM byte offset to read + * @eeprom_data: value read + * + * Performs 8 byte read operation to SFP module's EEPROM over I2C interface. + **/ +s32 ixgbe_read_i2c_eeprom_82598(struct ixgbe_hw *hw, u8 byte_offset, + u8 *eeprom_data) +{ + return ixgbe_read_i2c_phy_82598(hw, IXGBE_I2C_EEPROM_DEV_ADDR, + byte_offset, eeprom_data); +} + +/** + * ixgbe_read_i2c_sff8472_82598 - Reads 8 bit word over I2C interface. + * @hw: pointer to hardware structure + * @byte_offset: byte offset at address 0xA2 + * @eeprom_data: value read + * + * Performs 8 byte read operation to SFP module's SFF-8472 data over I2C + **/ +STATIC s32 ixgbe_read_i2c_sff8472_82598(struct ixgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data) +{ + return ixgbe_read_i2c_phy_82598(hw, IXGBE_I2C_EEPROM_DEV_ADDR2, + byte_offset, sff8472_data); +} + +/** * ixgbe_get_supported_physical_layer_82598 - Returns physical layer type * @hw: pointer to hardware structure * @@ -1337,11 +1386,12 @@ void ixgbe_enable_relaxed_ordering_82598(struct ixgbe_hw *hw) * @headroom: reserve n KB of headroom * @strategy: packet buffer allocation strategy **/ -static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, +STATIC void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, u32 headroom, int strategy) { u32 rxpktsize = IXGBE_RXPBSIZE_64KB; u8 i = 0; + UNREFERENCED_1PARAMETER(headroom); if (!num_pb) return; @@ -1370,3 +1420,19 @@ static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb, return; } + +/** + * ixgbe_enable_rx_dma_82598 - Enable the Rx DMA unit + * @hw: pointer to hardware structure + * @regval: register value to write to RXCTRL + * + * Enables the Rx DMA unit + **/ +s32 ixgbe_enable_rx_dma_82598(struct ixgbe_hw *hw, u32 regval) +{ + DEBUGFUNC("ixgbe_enable_rx_dma_82598"); + + IXGBE_WRITE_REG(hw, IXGBE_RXCTRL, regval); + + return IXGBE_SUCCESS; +} diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.h index 0d9cbed..58ce4d1 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82598.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel 
Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -49,4 +49,5 @@ u32 ixgbe_get_supported_physical_layer_82598(struct ixgbe_hw *hw); s32 ixgbe_init_phy_ops_82598(struct ixgbe_hw *hw); void ixgbe_set_lan_id_multi_port_pcie_82598(struct ixgbe_hw *hw); void ixgbe_set_pcie_completion_timeout(struct ixgbe_hw *hw); +s32 ixgbe_enable_rx_dma_82598(struct ixgbe_hw *hw, u32 regval); #endif /* _IXGBE_82598_H_ */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.c index db07789..ed97ad9 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -36,11 +36,17 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_api.h" #include "ixgbe_common.h" #include "ixgbe_phy.h" -#ident "$Id: ixgbe_82599.c,v 1.301 2012/11/08 11:33:27 jtkirshe Exp $" +#ident "$Id: ixgbe_82599.c,v 1.334 2013/12/04 22:34:00 jtkirshe Exp $" + +#define IXGBE_82599_MAX_TX_QUEUES 128 +#define IXGBE_82599_MAX_RX_QUEUES 128 +#define IXGBE_82599_RAR_ENTRIES 128 +#define IXGBE_82599_MC_TBL_SIZE 128 +#define IXGBE_82599_VFT_TBL_SIZE 128 +#define IXGBE_82599_RX_PB_SIZE 512 STATIC s32 ixgbe_setup_copper_link_82599(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); STATIC s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw); STATIC s32 ixgbe_read_eeprom_82599(struct ixgbe_hw *hw, @@ -48,14 +54,37 @@ STATIC s32 ixgbe_read_eeprom_82599(struct ixgbe_hw *hw, STATIC s32 ixgbe_read_eeprom_buffer_82599(struct ixgbe_hw *hw, u16 offset, u16 words, u16 *data); +bool ixgbe_mng_enabled(struct ixgbe_hw *hw) +{ + u32 fwsm, manc, factps; + + fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM); + if ((fwsm & IXGBE_FWSM_MODE_MASK) != IXGBE_FWSM_FW_MODE_PT) + return false; + + manc = IXGBE_READ_REG(hw, IXGBE_MANC); + if (!(manc & IXGBE_MANC_RCV_TCO_EN)) + return false; + + factps = IXGBE_READ_REG(hw, IXGBE_FACTPS); + if (factps & IXGBE_FACTPS_MNGCG) + return false; + + return true; +} + void ixgbe_init_mac_link_ops_82599(struct ixgbe_hw *hw) { struct ixgbe_mac_info *mac = &hw->mac; DEBUGFUNC("ixgbe_init_mac_link_ops_82599"); - /* enable the laser control functions for SFP+ fiber */ - if (mac->ops.get_media_type(hw) == ixgbe_media_type_fiber) { + /* + * enable the laser control functions for SFP+ fiber + * and MNG not enabled + */ + if ((mac->ops.get_media_type(hw) == ixgbe_media_type_fiber) && + !ixgbe_mng_enabled(hw)) { mac->ops.disable_tx_laser = &ixgbe_disable_tx_laser_multispeed_fiber; mac->ops.enable_tx_laser = @@ -136,7 +165,6 @@ s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw) { s32 ret_val = IXGBE_SUCCESS; u16 list_offset, data_offset, data_value; - bool got_lock = false; DEBUGFUNC("ixgbe_setup_sfp_modules_82599"); @@ -158,44 +186,26 @@ s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw) goto setup_sfp_out; } - hw->eeprom.ops.read(hw, ++data_offset, &data_value); + if (hw->eeprom.ops.read(hw, ++data_offset, &data_value)) + goto setup_sfp_err; while (data_value != 0xffff) { IXGBE_WRITE_REG(hw, IXGBE_CORECTL, data_value); IXGBE_WRITE_FLUSH(hw); - hw->eeprom.ops.read(hw, ++data_offset, &data_value); + if (hw->eeprom.ops.read(hw, ++data_offset, &data_value)) + goto 
setup_sfp_err; } /* Release the semaphore */ hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_MAC_CSR_SM); - /* Delay obtaining semaphore again to allow FW access */ - msec_delay(hw->eeprom.semaphore_delay); - - /* Need SW/FW semaphore around AUTOC writes if LESM on, - * likewise reset_pipeline requires lock as it also writes - * AUTOC. + /* Delay obtaining semaphore again to allow FW access + * prot_autoc_write uses the semaphore too. */ - if (ixgbe_verify_lesm_fw_enabled_82599(hw)) { - ret_val = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (ret_val != IXGBE_SUCCESS) { - ret_val = IXGBE_ERR_SWFW_SYNC; - goto setup_sfp_out; - } - - got_lock = true; - } + msec_delay(hw->eeprom.semaphore_delay); /* Restart DSP and set SFI mode */ - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, (IXGBE_READ_REG(hw, - IXGBE_AUTOC) | IXGBE_AUTOC_LMS_10G_SERIAL)); - - ret_val = ixgbe_reset_pipeline_82599(hw); - - if (got_lock) { - hw->mac.ops.release_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - got_lock = false; - } + ret_val = hw->mac.ops.prot_autoc_write(hw, + hw->mac.orig_autoc | IXGBE_AUTOC_LMS_10G_SERIAL, + false); if (ret_val) { DEBUGOUT("sfp module setup not complete\n"); @@ -207,6 +217,88 @@ s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw) setup_sfp_out: return ret_val; + +setup_sfp_err: + /* Release the semaphore */ + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_MAC_CSR_SM); + /* Delay obtaining semaphore again to allow FW access */ + msec_delay(hw->eeprom.semaphore_delay); + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", data_offset); + return IXGBE_ERR_PHY; +} + +/** + * prot_autoc_read_82599 - Hides MAC differences needed for AUTOC read + * @hw: pointer to hardware structure + * @locked: Return the if we locked for this read. + * @reg_val: Value we read from AUTOC + * + * For this part (82599) we need to wrap read-modify-writes with a possible + * FW/SW lock. It is assumed this lock will be freed with the next + * prot_autoc_write_82599(). + */ +s32 prot_autoc_read_82599(struct ixgbe_hw *hw, bool *locked, u32 *reg_val) +{ + s32 ret_val; + + *locked = false; + /* If LESM is on then we need to hold the SW/FW semaphore. */ + if (ixgbe_verify_lesm_fw_enabled_82599(hw)) { + ret_val = hw->mac.ops.acquire_swfw_sync(hw, + IXGBE_GSSR_MAC_CSR_SM); + if (ret_val != IXGBE_SUCCESS) + return IXGBE_ERR_SWFW_SYNC; + + *locked = true; + } + + *reg_val = IXGBE_READ_REG(hw, IXGBE_AUTOC); + return IXGBE_SUCCESS; +} + +/** + * prot_autoc_write_82599 - Hides MAC differences needed for AUTOC write + * @hw: pointer to hardware structure + * @reg_val: value to write to AUTOC + * @locked: bool to indicate whether the SW/FW lock was already taken by + * previous proc_autoc_read_82599. + * + * This part (82599) may need to hold a the SW/FW lock around all writes to + * AUTOC. Likewise after a write we need to do a pipeline reset. + */ +s32 prot_autoc_write_82599(struct ixgbe_hw *hw, u32 autoc, bool locked) +{ + s32 ret_val = IXGBE_SUCCESS; + + /* Blocked by MNG FW so bail */ + if (ixgbe_check_reset_blocked(hw)) + goto out; + + /* We only need to get the lock if: + * - We didn't do it already (in the read part of a read-modify-write) + * - LESM is enabled. 
+ */ + if (!locked && ixgbe_verify_lesm_fw_enabled_82599(hw)) { + ret_val = hw->mac.ops.acquire_swfw_sync(hw, + IXGBE_GSSR_MAC_CSR_SM); + if (ret_val != IXGBE_SUCCESS) + return IXGBE_ERR_SWFW_SYNC; + + locked = true; + } + + IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc); + ret_val = ixgbe_reset_pipeline_82599(hw); + +out: + /* Free the SW/FW semaphore as we either grabbed it here or + * already had it when this function was called. + */ + if (locked) + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_MAC_CSR_SM); + + return ret_val; } /** @@ -226,7 +318,7 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw) DEBUGFUNC("ixgbe_init_ops_82599"); - ret_val = ixgbe_init_phy_ops_generic(hw); + ixgbe_init_phy_ops_generic(hw); ret_val = ixgbe_init_ops_generic(hw); /* PHY */ @@ -250,6 +342,8 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw) mac->ops.get_device_caps = &ixgbe_get_device_caps_generic; mac->ops.get_wwn_prefix = &ixgbe_get_wwn_prefix_generic; mac->ops.get_fcoe_boot_status = &ixgbe_get_fcoe_boot_status_generic; + mac->ops.prot_autoc_read = &prot_autoc_read_82599; + mac->ops.prot_autoc_write = &prot_autoc_write_82599; /* RAR, Multicast, VLAN */ mac->ops.set_vmdq = &ixgbe_set_vmdq_generic; @@ -271,12 +365,12 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw) mac->ops.setup_rxpba = &ixgbe_set_rxpba_generic; ixgbe_init_mac_link_ops_82599(hw); - mac->mcft_size = 128; - mac->vft_size = 128; - mac->num_rar_entries = 128; - mac->rx_pb_size = 512; - mac->max_tx_queues = 128; - mac->max_rx_queues = 128; + mac->mcft_size = IXGBE_82599_MC_TBL_SIZE; + mac->vft_size = IXGBE_82599_VFT_TBL_SIZE; + mac->num_rar_entries = IXGBE_82599_RAR_ENTRIES; + mac->rx_pb_size = IXGBE_82599_RX_PB_SIZE; + mac->max_rx_queues = IXGBE_82599_MAX_RX_QUEUES; + mac->max_tx_queues = IXGBE_82599_MAX_TX_QUEUES; mac->max_msix_vectors = ixgbe_get_pcie_msix_count_generic(hw); mac->arc_subsystem_valid = (IXGBE_READ_REG(hw, IXGBE_FWSM) & @@ -292,6 +386,8 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw) mac->ops.set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic; + mac->ops.get_rtrup2tc = &ixgbe_dcb_get_rtrup2tc_generic; + return ret_val; } @@ -299,13 +395,13 @@ s32 ixgbe_init_ops_82599(struct ixgbe_hw *hw) * ixgbe_get_link_capabilities_82599 - Determines link capabilities * @hw: pointer to hardware structure * @speed: pointer to link speed - * @negotiation: true when autoneg or autotry is enabled + * @autoneg: true when autoneg or autotry is enabled * * Determines the link capabilities by reading the AUTOC register. **/ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, ixgbe_link_speed *speed, - bool *negotiation) + bool *autoneg) { s32 status = IXGBE_SUCCESS; u32 autoc = 0; @@ -316,10 +412,14 @@ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, /* Check if 1G SFP module. 
*/ if (hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0 || hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1 || +#ifdef SUPPORT_1000BASE_LX + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1 || +#endif hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) { *speed = IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = true; + *autoneg = true; goto out; } @@ -336,22 +436,22 @@ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, switch (autoc & IXGBE_AUTOC_LMS_MASK) { case IXGBE_AUTOC_LMS_1G_LINK_NO_AN: *speed = IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = false; + *autoneg = false; break; case IXGBE_AUTOC_LMS_10G_LINK_NO_AN: *speed = IXGBE_LINK_SPEED_10GB_FULL; - *negotiation = false; + *autoneg = false; break; case IXGBE_AUTOC_LMS_1G_AN: *speed = IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = true; + *autoneg = true; break; case IXGBE_AUTOC_LMS_10G_SERIAL: *speed = IXGBE_LINK_SPEED_10GB_FULL; - *negotiation = false; + *autoneg = false; break; case IXGBE_AUTOC_LMS_KX4_KX_KR: @@ -363,7 +463,7 @@ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, *speed |= IXGBE_LINK_SPEED_10GB_FULL; if (autoc & IXGBE_AUTOC_KX_SUPP) *speed |= IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = true; + *autoneg = true; break; case IXGBE_AUTOC_LMS_KX4_KX_KR_SGMII: @@ -374,12 +474,12 @@ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, *speed |= IXGBE_LINK_SPEED_10GB_FULL; if (autoc & IXGBE_AUTOC_KX_SUPP) *speed |= IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = true; + *autoneg = true; break; case IXGBE_AUTOC_LMS_SGMII_1G_100M: *speed = IXGBE_LINK_SPEED_1GB_FULL | IXGBE_LINK_SPEED_100_FULL; - *negotiation = false; + *autoneg = false; break; default: @@ -391,7 +491,8 @@ s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, if (hw->phy.multispeed_fiber) { *speed |= IXGBE_LINK_SPEED_10GB_FULL | IXGBE_LINK_SPEED_1GB_FULL; - *negotiation = true; + + *autoneg = true; } out: @@ -453,6 +554,33 @@ out: } /** + * ixgbe_stop_mac_link_on_d3_82599 - Disables link on D3 + * @hw: pointer to hardware structure + * + * Disables link during D3 power down sequence. 
+ * + **/ +void ixgbe_stop_mac_link_on_d3_82599(struct ixgbe_hw *hw) +{ + u32 autoc2_reg, fwsm; + u16 ee_ctrl_2 = 0; + + DEBUGFUNC("ixgbe_stop_mac_link_on_d3_82599"); + ixgbe_read_eeprom(hw, IXGBE_EEPROM_CTRL_2, &ee_ctrl_2); + + /* Check to see if MNG FW could be enabled */ + fwsm = IXGBE_READ_REG(hw, IXGBE_FWSM); + + if (((fwsm & IXGBE_FWSM_MODE_MASK) != IXGBE_FWSM_FW_MODE_PT) && + !hw->wol_enabled && + ee_ctrl_2 & IXGBE_EEPROM_CCD_BIT) { + autoc2_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC2); + autoc2_reg |= IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK; + IXGBE_WRITE_REG(hw, IXGBE_AUTOC2, autoc2_reg); + } +} + +/** * ixgbe_start_mac_link_82599 - Setup MAC link settings * @hw: pointer to hardware structure * @autoneg_wait_to_complete: true when waiting for completion is needed @@ -532,6 +660,10 @@ void ixgbe_disable_tx_laser_multispeed_fiber(struct ixgbe_hw *hw) { u32 esdp_reg = IXGBE_READ_REG(hw, IXGBE_ESDP); + /* Blocked by MNG FW so bail */ + if (ixgbe_check_reset_blocked(hw)) + return; + /* Disable tx laser; allow 100us to go dark per spec */ esdp_reg |= IXGBE_ESDP_SDP3; IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp_reg); @@ -574,6 +706,10 @@ void ixgbe_flap_tx_laser_multispeed_fiber(struct ixgbe_hw *hw) { DEBUGFUNC("ixgbe_flap_tx_laser_multispeed_fiber"); + /* Blocked by MNG FW so bail */ + if (ixgbe_check_reset_blocked(hw)) + return; + if (hw->mac.autotry_restart) { ixgbe_disable_tx_laser_multispeed_fiber(hw); ixgbe_enable_tx_laser_multispeed_fiber(hw); @@ -581,17 +717,17 @@ void ixgbe_flap_tx_laser_multispeed_fiber(struct ixgbe_hw *hw) } } + /** * ixgbe_setup_mac_link_multispeed_fiber - Set MAC link speed * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true when waiting for completion is needed * * Set the link speed in the AUTOC register and restarts link. 
**/ s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete) { s32 status = IXGBE_SUCCESS; @@ -600,13 +736,12 @@ s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, u32 speedcnt = 0; u32 esdp_reg = IXGBE_READ_REG(hw, IXGBE_ESDP); u32 i = 0; - bool link_up = false; - bool negotiation; + bool autoneg, link_up = false; DEBUGFUNC("ixgbe_setup_mac_link_multispeed_fiber"); /* Mask off requested but non-supported speeds */ - status = ixgbe_get_link_capabilities(hw, &link_speed, &negotiation); + status = ixgbe_get_link_capabilities(hw, &link_speed, &autoneg); if (status != IXGBE_SUCCESS) return status; @@ -629,16 +764,22 @@ s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, goto out; /* Set the module link speed */ - esdp_reg |= (IXGBE_ESDP_SDP5_DIR | IXGBE_ESDP_SDP5); - IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp_reg); - IXGBE_WRITE_FLUSH(hw); + switch (hw->phy.media_type) { + case ixgbe_media_type_fiber: + esdp_reg |= (IXGBE_ESDP_SDP5_DIR | IXGBE_ESDP_SDP5); + IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp_reg); + IXGBE_WRITE_FLUSH(hw); + break; + default: + DEBUGOUT("Unexpected media type.\n"); + break; + } /* Allow module to change analog characteristics (1G->10G) */ msec_delay(40); status = ixgbe_setup_mac_link_82599(hw, IXGBE_LINK_SPEED_10GB_FULL, - autoneg, autoneg_wait_to_complete); if (status != IXGBE_SUCCESS) return status; @@ -680,17 +821,23 @@ s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, goto out; /* Set the module link speed */ - esdp_reg &= ~IXGBE_ESDP_SDP5; - esdp_reg |= IXGBE_ESDP_SDP5_DIR; - IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp_reg); - IXGBE_WRITE_FLUSH(hw); + switch (hw->phy.media_type) { + case ixgbe_media_type_fiber: + esdp_reg &= ~IXGBE_ESDP_SDP5; + esdp_reg |= IXGBE_ESDP_SDP5_DIR; + IXGBE_WRITE_REG(hw, IXGBE_ESDP, esdp_reg); + IXGBE_WRITE_FLUSH(hw); + break; + default: + DEBUGOUT("Unexpected media type.\n"); + break; + } /* Allow module to change analog characteristics (10G->1G) */ msec_delay(40); status = ixgbe_setup_mac_link_82599(hw, IXGBE_LINK_SPEED_1GB_FULL, - autoneg, autoneg_wait_to_complete); if (status != IXGBE_SUCCESS) return status; @@ -717,7 +864,7 @@ s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, */ if (speedcnt > 1) status = ixgbe_setup_mac_link_multispeed_fiber(hw, - highest_link_speed, autoneg, autoneg_wait_to_complete); + highest_link_speed, autoneg_wait_to_complete); out: /* Set autoneg_advertised value based on input link speed */ @@ -736,13 +883,12 @@ out: * ixgbe_setup_mac_link_smartspeed - Set MAC link speed using SmartSpeed * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true when waiting for completion is needed * * Implements the Intel SmartSpeed algorithm. 
**/ s32 ixgbe_setup_mac_link_smartspeed(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete) { s32 status = IXGBE_SUCCESS; @@ -775,7 +921,7 @@ s32 ixgbe_setup_mac_link_smartspeed(struct ixgbe_hw *hw, /* First, try to get link with full advertisement */ hw->phy.smart_speed_active = false; for (j = 0; j < IXGBE_SMARTSPEED_MAX_RETRIES; j++) { - status = ixgbe_setup_mac_link_82599(hw, speed, autoneg, + status = ixgbe_setup_mac_link_82599(hw, speed, autoneg_wait_to_complete); if (status != IXGBE_SUCCESS) goto out; @@ -810,7 +956,7 @@ s32 ixgbe_setup_mac_link_smartspeed(struct ixgbe_hw *hw, /* Turn SmartSpeed on to disable KR support */ hw->phy.smart_speed_active = true; - status = ixgbe_setup_mac_link_82599(hw, speed, autoneg, + status = ixgbe_setup_mac_link_82599(hw, speed, autoneg_wait_to_complete); if (status != IXGBE_SUCCESS) goto out; @@ -835,7 +981,7 @@ s32 ixgbe_setup_mac_link_smartspeed(struct ixgbe_hw *hw, /* We didn't get link. Turn SmartSpeed back off. */ hw->phy.smart_speed_active = false; - status = ixgbe_setup_mac_link_82599(hw, speed, autoneg, + status = ixgbe_setup_mac_link_82599(hw, speed, autoneg_wait_to_complete); out: @@ -849,27 +995,25 @@ out: * ixgbe_setup_mac_link_82599 - Set MAC link speed * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true when waiting for completion is needed * * Set the link speed in the AUTOC register and restarts link. **/ s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete) { + bool autoneg = false; s32 status = IXGBE_SUCCESS; - u32 autoc = IXGBE_READ_REG(hw, IXGBE_AUTOC); + u32 pma_pmd_1g, link_mode; + u32 current_autoc = IXGBE_READ_REG(hw, IXGBE_AUTOC); /* holds the value of AUTOC register at this current point in time */ + u32 orig_autoc = 0; /* holds the cached value of AUTOC register */ + u32 autoc = current_autoc; /* Temporary variable used for comparison purposes */ u32 autoc2 = IXGBE_READ_REG(hw, IXGBE_AUTOC2); - u32 start_autoc = autoc; - u32 orig_autoc = 0; - u32 link_mode = autoc & IXGBE_AUTOC_LMS_MASK; - u32 pma_pmd_1g = autoc & IXGBE_AUTOC_1G_PMA_PMD_MASK; u32 pma_pmd_10g_serial = autoc2 & IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_MASK; u32 links_reg; u32 i; ixgbe_link_speed link_capabilities = IXGBE_LINK_SPEED_UNKNOWN; - bool got_lock = false; DEBUGFUNC("ixgbe_setup_mac_link_82599"); @@ -891,6 +1035,9 @@ s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw, else orig_autoc = autoc; + link_mode = autoc & IXGBE_AUTOC_LMS_MASK; + pma_pmd_1g = autoc & IXGBE_AUTOC_1G_PMA_PMD_MASK; + if (link_mode == IXGBE_AUTOC_LMS_KX4_KX_KR || link_mode == IXGBE_AUTOC_LMS_KX4_KX_KR_1G_AN || link_mode == IXGBE_AUTOC_LMS_KX4_KX_KR_SGMII) { @@ -927,31 +1074,11 @@ s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw, } } - if (autoc != start_autoc) { - /* Need SW/FW semaphore around AUTOC writes if LESM is on, - * likewise reset_pipeline requires us to hold this lock as - * it also writes to AUTOC. 
- */ - if (ixgbe_verify_lesm_fw_enabled_82599(hw)) { - status = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (status != IXGBE_SUCCESS) { - status = IXGBE_ERR_SWFW_SYNC; - goto out; - } - - got_lock = true; - } - + if (autoc != current_autoc) { /* Restart link */ - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc); - ixgbe_reset_pipeline_82599(hw); - - if (got_lock) { - hw->mac.ops.release_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - got_lock = false; - } + status = hw->mac.ops.prot_autoc_write(hw, autoc, false); + if (status != IXGBE_SUCCESS) + goto out; /* Only poll for autoneg to complete if specified to do so */ if (autoneg_wait_to_complete) { @@ -986,14 +1113,12 @@ out: * ixgbe_setup_copper_link_82599 - Set the PHY autoneg advertised field * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true if waiting is needed to complete * * Restarts link on PHY and MAC based on settings passed in. **/ STATIC s32 ixgbe_setup_copper_link_82599(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete) { s32 status; @@ -1001,7 +1126,7 @@ STATIC s32 ixgbe_setup_copper_link_82599(struct ixgbe_hw *hw, DEBUGFUNC("ixgbe_setup_copper_link_82599"); /* Setup the PHY according to input speed */ - status = hw->phy.ops.setup_link_speed(hw, speed, autoneg, + status = hw->phy.ops.setup_link_speed(hw, speed, autoneg_wait_to_complete); /* Set up MAC */ ixgbe_start_mac_link_82599(hw, autoneg_wait_to_complete); @@ -1021,7 +1146,9 @@ s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw) { ixgbe_link_speed link_speed; s32 status; - u32 ctrl, i, autoc, autoc2; + u32 ctrl = 0; + u32 i, autoc, autoc2; + u32 curr_lms; bool link_up = false; DEBUGFUNC("ixgbe_reset_hw_82599"); @@ -1055,6 +1182,9 @@ s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw) if (hw->phy.reset_disable == false && hw->phy.ops.reset != NULL) hw->phy.ops.reset(hw); + /* remember AUTOC from before we reset */ + curr_lms = IXGBE_READ_REG(hw, IXGBE_AUTOC) & IXGBE_AUTOC_LMS_MASK; + mac_reset_top: /* * Issue global reset to the MAC. Needs to be SW reset if link is up. @@ -1073,7 +1203,7 @@ mac_reset_top: IXGBE_WRITE_REG(hw, IXGBE_CTRL, ctrl); IXGBE_WRITE_FLUSH(hw); - /* Poll for reset bit to self-clear indicating reset is complete */ + /* Poll for reset bit to self-clear meaning reset is complete */ for (i = 0; i < 10; i++) { usec_delay(1); ctrl = IXGBE_READ_REG(hw, IXGBE_CTRL); @@ -1090,8 +1220,8 @@ mac_reset_top: /* * Double resets are required for recovery from certain error - * conditions. Between resets, it is necessary to stall to allow time - * for any pending HW events to complete. + * conditions. Between resets, it is necessary to stall to + * allow time for any pending HW events to complete. */ if (hw->mac.flags & IXGBE_FLAGS_DOUBLE_RESET_REQUIRED) { hw->mac.flags &= ~IXGBE_FLAGS_DOUBLE_RESET_REQUIRED; @@ -1118,29 +1248,25 @@ mac_reset_top: hw->mac.orig_autoc2 = autoc2; hw->mac.orig_link_settings_stored = true; } else { - if (autoc != hw->mac.orig_autoc) { - /* Need SW/FW semaphore around AUTOC writes if LESM is - * on, likewise reset_pipeline requires us to hold - * this lock as it also writes to AUTOC. 
- */ - bool got_lock = false; - if (ixgbe_verify_lesm_fw_enabled_82599(hw)) { - status = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (status != IXGBE_SUCCESS) { - status = IXGBE_ERR_SWFW_SYNC; - goto reset_hw_out; - } - got_lock = true; - } - - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, hw->mac.orig_autoc); - ixgbe_reset_pipeline_82599(hw); + /* If MNG FW is running on a multi-speed device that + * doesn't autoneg with out driver support we need to + * leave LMS in the state it was before we MAC reset. + * Likewise if we support WoL we don't want change the + * LMS state. + */ + if ((hw->phy.multispeed_fiber && ixgbe_mng_enabled(hw)) || + hw->wol_enabled) + hw->mac.orig_autoc = + (hw->mac.orig_autoc & ~IXGBE_AUTOC_LMS_MASK) | + curr_lms; - if (got_lock) - hw->mac.ops.release_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); + if (autoc != hw->mac.orig_autoc) { + status = hw->mac.ops.prot_autoc_write(hw, + hw->mac.orig_autoc, + false); + if (status != IXGBE_SUCCESS) + goto reset_hw_out; } if ((autoc2 & IXGBE_AUTOC2_UPPER_MASK) != @@ -1335,8 +1461,10 @@ s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 fdirctrl) * @hw: pointer to hardware structure * @fdirctrl: value to write to flow director control register, initially * contains just the value of the Rx packet buffer allocation + * @cloud_mode: true - cloude mode, false - other mode **/ -s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl) +s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl, + bool cloud_mode) { DEBUGFUNC("ixgbe_init_fdir_perfect_82599"); @@ -1356,6 +1484,7 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl) (0xA << IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT) | (4 << IXGBE_FDIRCTRL_FULL_THRESH_SHIFT); + /* write hashes and fdirctrl register, poll for completion */ ixgbe_fdir_enable_82599(hw, fdirctrl); @@ -1472,6 +1601,7 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw, /* * Get the flow_type in order to program FDIRCMD properly * lowest 2 bits are FDIRCMD.L4TYPE, third lowest bit is FDIRCMD.IPV6 + * fifth is FDIRCMD.TUNNEL_FILTER */ switch (input.formatted.flow_type) { case IXGBE_ATR_FLOW_TYPE_TCPV4: @@ -1531,34 +1661,20 @@ void ixgbe_atr_compute_perfect_hash_82599(union ixgbe_atr_input *input, u32 hi_hash_dword, lo_hash_dword, flow_vm_vlan; u32 bucket_hash = 0; + u32 hi_dword = 0; + u32 i = 0; /* Apply masks to input data */ - input->dword_stream[0] &= input_mask->dword_stream[0]; - input->dword_stream[1] &= input_mask->dword_stream[1]; - input->dword_stream[2] &= input_mask->dword_stream[2]; - input->dword_stream[3] &= input_mask->dword_stream[3]; - input->dword_stream[4] &= input_mask->dword_stream[4]; - input->dword_stream[5] &= input_mask->dword_stream[5]; - input->dword_stream[6] &= input_mask->dword_stream[6]; - input->dword_stream[7] &= input_mask->dword_stream[7]; - input->dword_stream[8] &= input_mask->dword_stream[8]; - input->dword_stream[9] &= input_mask->dword_stream[9]; - input->dword_stream[10] &= input_mask->dword_stream[10]; + for (i = 0; i < 14; i++) + input->dword_stream[i] &= input_mask->dword_stream[i]; /* record the flow_vm_vlan bits as they are a key part to the hash */ flow_vm_vlan = IXGBE_NTOHL(input->dword_stream[0]); /* generate common hash dword */ - hi_hash_dword = IXGBE_NTOHL(input->dword_stream[1] ^ - input->dword_stream[2] ^ - input->dword_stream[3] ^ - input->dword_stream[4] ^ - input->dword_stream[5] ^ - input->dword_stream[6] ^ - input->dword_stream[7] ^ - input->dword_stream[8] ^ - input->dword_stream[9] ^ - 
input->dword_stream[10]); + for (i = 1; i <= 13; i++) + hi_dword ^= input->dword_stream[i]; + hi_hash_dword = IXGBE_NTOHL(hi_dword); /* low dword is word swapped version of common */ lo_hash_dword = (hi_hash_dword >> 16) | (hi_hash_dword << 16); @@ -1577,21 +1693,8 @@ void ixgbe_atr_compute_perfect_hash_82599(union ixgbe_atr_input *input, lo_hash_dword ^= flow_vm_vlan ^ (flow_vm_vlan << 16); /* Process remaining 30 bit of the key */ - IXGBE_COMPUTE_BKT_HASH_ITERATION(1); - IXGBE_COMPUTE_BKT_HASH_ITERATION(2); - IXGBE_COMPUTE_BKT_HASH_ITERATION(3); - IXGBE_COMPUTE_BKT_HASH_ITERATION(4); - IXGBE_COMPUTE_BKT_HASH_ITERATION(5); - IXGBE_COMPUTE_BKT_HASH_ITERATION(6); - IXGBE_COMPUTE_BKT_HASH_ITERATION(7); - IXGBE_COMPUTE_BKT_HASH_ITERATION(8); - IXGBE_COMPUTE_BKT_HASH_ITERATION(9); - IXGBE_COMPUTE_BKT_HASH_ITERATION(10); - IXGBE_COMPUTE_BKT_HASH_ITERATION(11); - IXGBE_COMPUTE_BKT_HASH_ITERATION(12); - IXGBE_COMPUTE_BKT_HASH_ITERATION(13); - IXGBE_COMPUTE_BKT_HASH_ITERATION(14); - IXGBE_COMPUTE_BKT_HASH_ITERATION(15); + for (i = 1; i <= 15; i++) + IXGBE_COMPUTE_BKT_HASH_ITERATION(i); /* * Limit hash to 13 bits since max bucket count is 8K. @@ -1638,12 +1741,11 @@ STATIC u32 ixgbe_get_fdirtcpm_82599(union ixgbe_atr_input *input_mask) IXGBE_NTOHS(((u16)(_value) >> 8) | ((u16)(_value) << 8)) s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw, - union ixgbe_atr_input *input_mask) + union ixgbe_atr_input *input_mask, bool cloud_mode) { /* mask IPv6 since it is currently not supported */ u32 fdirm = IXGBE_FDIRM_DIPv6; u32 fdirtcpm; - DEBUGFUNC("ixgbe_fdir_set_atr_input_mask_82599"); /* @@ -1716,6 +1818,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw, return IXGBE_ERR_CONFIG; } + /* Now mask VM pool and destination IPv6 - bits 5 and 2 */ IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm); @@ -1737,7 +1840,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw, s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw, union ixgbe_atr_input *input, - u16 soft_id, u8 queue) + u16 soft_id, u8 queue, bool cloud_mode) { u32 fdirport, fdirvlan, fdirhash, fdircmd; @@ -1769,6 +1872,7 @@ s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw, fdirvlan |= IXGBE_NTOHS(input->formatted.vlan_id); IXGBE_WRITE_REG(hw, IXGBE_FDIRVLAN, fdirvlan); + /* configure FDIRHASH register */ fdirhash = input->formatted.bkt_hash; fdirhash |= soft_id << IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT; @@ -1785,6 +1889,8 @@ s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw, IXGBE_FDIRCMD_LAST | IXGBE_FDIRCMD_QUEUE_EN; if (queue == IXGBE_FDIR_DROP_QUEUE) fdircmd |= IXGBE_FDIRCMD_DROP; + if (input->formatted.flow_type & IXGBE_ATR_L4TYPE_TUNNEL_MASK) + fdircmd |= IXGBE_FDIRCMD_TUNNEL_FILTER; fdircmd |= input->formatted.flow_type << IXGBE_FDIRCMD_FLOW_TYPE_SHIFT; fdircmd |= (u32)queue << IXGBE_FDIRCMD_RX_QUEUE_SHIFT; fdircmd |= (u32)input->formatted.vm_pool << IXGBE_FDIRCMD_VT_POOL_SHIFT; @@ -1851,7 +1957,7 @@ s32 ixgbe_fdir_erase_perfect_filter_82599(struct ixgbe_hw *hw, s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw, union ixgbe_atr_input *input, union ixgbe_atr_input *input_mask, - u16 soft_id, u8 queue) + u16 soft_id, u8 queue, bool cloud_mode) { s32 err = IXGBE_ERR_CONFIG; @@ -1863,6 +1969,7 @@ s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw, */ switch (input->formatted.flow_type) { case IXGBE_ATR_FLOW_TYPE_IPV4: + case IXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4: input_mask->formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK; if (input->formatted.dst_port || input->formatted.src_port) { 
DEBUGOUT(" Error on src/dst port\n"); @@ -1870,12 +1977,15 @@ s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw, } break; case IXGBE_ATR_FLOW_TYPE_SCTPV4: + case IXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4: if (input->formatted.dst_port || input->formatted.src_port) { DEBUGOUT(" Error on src/dst port\n"); return IXGBE_ERR_CONFIG; } case IXGBE_ATR_FLOW_TYPE_TCPV4: + case IXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4: case IXGBE_ATR_FLOW_TYPE_UDPV4: + case IXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4: input_mask->formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK | IXGBE_ATR_L4TYPE_MASK; break; @@ -1885,7 +1995,7 @@ s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw, } /* program input mask into the HW */ - err = ixgbe_fdir_set_input_mask_82599(hw, input_mask); + err = ixgbe_fdir_set_input_mask_82599(hw, input_mask, cloud_mode); if (err) return err; @@ -1894,7 +2004,7 @@ s32 ixgbe_fdir_add_perfect_filter_82599(struct ixgbe_hw *hw, /* program filters to filter memory */ return ixgbe_fdir_write_perfect_filter_82599(hw, input, - soft_id, queue); + soft_id, queue, cloud_mode); } /** @@ -1984,7 +2094,7 @@ out: **/ s32 ixgbe_identify_phy_82599(struct ixgbe_hw *hw) { - s32 status = IXGBE_ERR_PHY_ADDR_INVALID; + s32 status; DEBUGFUNC("ixgbe_identify_phy_82599"); @@ -2155,7 +2265,10 @@ s32 ixgbe_enable_rx_dma_82599(struct ixgbe_hw *hw, u32 regval) hw->mac.ops.disable_sec_rx_path(hw); - IXGBE_WRITE_REG(hw, IXGBE_RXCTRL, regval); + if (regval & IXGBE_RXCTRL_RXEN) + ixgbe_enable_rx(hw); + else + ixgbe_disable_rx(hw); hw->mac.ops.enable_sec_rx_path(hw); @@ -2172,11 +2285,11 @@ s32 ixgbe_enable_rx_dma_82599(struct ixgbe_hw *hw, u32 regval) * Returns IXGBE_ERR_EEPROM_VERSION if the FW is not present or * if the FW version is not supported. **/ -s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw) +STATIC s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw) { s32 status = IXGBE_ERR_EEPROM_VERSION; u16 fw_offset, fw_ptp_cfg_offset; - u16 fw_version = 0; + u16 fw_version; DEBUGFUNC("ixgbe_verify_fw_version_82599"); @@ -2187,22 +2300,37 @@ s32 ixgbe_verify_fw_version_82599(struct ixgbe_hw *hw) } /* get the offset to the Firmware Module block */ - hw->eeprom.ops.read(hw, IXGBE_FW_PTR, &fw_offset); + if (hw->eeprom.ops.read(hw, IXGBE_FW_PTR, &fw_offset)) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", IXGBE_FW_PTR); + return IXGBE_ERR_EEPROM_VERSION; + } if ((fw_offset == 0) || (fw_offset == 0xFFFF)) goto fw_version_out; /* get the offset to the Pass Through Patch Configuration block */ - hw->eeprom.ops.read(hw, (fw_offset + + if (hw->eeprom.ops.read(hw, (fw_offset + IXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR), - &fw_ptp_cfg_offset); + &fw_ptp_cfg_offset)) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", + fw_offset + + IXGBE_FW_PASSTHROUGH_PATCH_CONFIG_PTR); + return IXGBE_ERR_EEPROM_VERSION; + } if ((fw_ptp_cfg_offset == 0) || (fw_ptp_cfg_offset == 0xFFFF)) goto fw_version_out; /* get the firmware version */ - hw->eeprom.ops.read(hw, (fw_ptp_cfg_offset + - IXGBE_FW_PATCH_VERSION_4), &fw_version); + if (hw->eeprom.ops.read(hw, (fw_ptp_cfg_offset + + IXGBE_FW_PATCH_VERSION_4), &fw_version)) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", + fw_ptp_cfg_offset + IXGBE_FW_PATCH_VERSION_4); + return IXGBE_ERR_EEPROM_VERSION; + } if (fw_version > 0x5) status = IXGBE_SUCCESS; @@ -2327,12 +2455,13 @@ STATIC s32 ixgbe_read_eeprom_82599(struct ixgbe_hw *hw, * @hw: pointer to hardware structure * * Reset pipeline by asserting Restart_AN 
together with LMS change to ensure - * full pipeline reset + * full pipeline reset. This function assumes the SW/FW lock is held. **/ s32 ixgbe_reset_pipeline_82599(struct ixgbe_hw *hw) { - s32 i, autoc_reg, autoc2_reg, ret_val; - s32 anlp1_reg = 0; + s32 ret_val; + u32 anlp1_reg = 0; + u32 i, autoc_reg, autoc2_reg; /* Enable link if disabled in NVM */ autoc2_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC2); @@ -2345,7 +2474,8 @@ s32 ixgbe_reset_pipeline_82599(struct ixgbe_hw *hw) autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC); autoc_reg |= IXGBE_AUTOC_AN_RESTART; /* Write AUTOC register with toggled LMS[2] bit and Restart_AN */ - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg ^ IXGBE_AUTOC_LMS_1G_AN); + IXGBE_WRITE_REG(hw, IXGBE_AUTOC, + autoc_reg ^ (0x4 << IXGBE_AUTOC_LMS_SHIFT)); /* Wait for AN to leave state 0 */ for (i = 0; i < 10; i++) { msec_delay(4); diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.h index fc5fe0b..8312419 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_82599.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -33,7 +33,7 @@ POSSIBILITY OF SUCH DAMAGE. #ifndef _IXGBE_82599_H_ #define _IXGBE_82599_H_ -#ident "$Id: ixgbe_82599.h,v 1.7 2012/10/03 07:10:29 jtkirshe Exp $" +#ident "$Id: ixgbe_82599.h,v 1.12 2013/10/30 10:19:10 jtkirshe Exp $" s32 ixgbe_get_link_capabilities_82599(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *autoneg); @@ -42,15 +42,15 @@ void ixgbe_disable_tx_laser_multispeed_fiber(struct ixgbe_hw *hw); void ixgbe_enable_tx_laser_multispeed_fiber(struct ixgbe_hw *hw); void ixgbe_flap_tx_laser_multispeed_fiber(struct ixgbe_hw *hw); s32 ixgbe_setup_mac_link_multispeed_fiber(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete); s32 ixgbe_setup_mac_link_smartspeed(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete); s32 ixgbe_start_mac_link_82599(struct ixgbe_hw *hw, bool autoneg_wait_to_complete); s32 ixgbe_setup_mac_link_82599(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); + bool autoneg_wait_to_complete); s32 ixgbe_setup_sfp_modules_82599(struct ixgbe_hw *hw); void ixgbe_init_mac_link_ops_82599(struct ixgbe_hw *hw); s32 ixgbe_reset_hw_82599(struct ixgbe_hw *hw); @@ -61,4 +61,6 @@ s32 ixgbe_identify_phy_82599(struct ixgbe_hw *hw); s32 ixgbe_init_phy_ops_82599(struct ixgbe_hw *hw); u32 ixgbe_get_supported_physical_layer_82599(struct ixgbe_hw *hw); s32 ixgbe_enable_rx_dma_82599(struct ixgbe_hw *hw, u32 regval); +s32 prot_autoc_read_82599(struct ixgbe_hw *hw, bool *locked, u32 *reg_val); +s32 prot_autoc_write_82599(struct ixgbe_hw *hw, u32 reg_val, bool locked); #endif /* _IXGBE_82599_H_ */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.c index 47eaa19..7e6b092 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. 
Redistribution and use in source and binary forms, with or without @@ -33,7 +33,20 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_api.h" #include "ixgbe_common.h" -#ident "$Id: ixgbe_api.c,v 1.187 2012/11/08 10:11:52 jtkirshe Exp $" +#ident "$Id: ixgbe_api.c,v 1.207 2013/11/22 01:02:01 jtkirshe Exp $" + +/** + * ixgbe_dcb_get_rtrup2tc - read rtrup2tc reg + * @hw: pointer to hardware structure + * @map: pointer to u8 arr for returning map + * + * Read the rtrup2tc HW register and resolve its content into map + **/ +void ixgbe_dcb_get_rtrup2tc(struct ixgbe_hw *hw, u8 *map) +{ + if (hw->mac.ops.get_rtrup2tc) + hw->mac.ops.get_rtrup2tc(hw, map); +} /** * ixgbe_init_shared_code - Initialize the shared code @@ -65,13 +78,13 @@ s32 ixgbe_init_shared_code(struct ixgbe_hw *hw) case ixgbe_mac_82599EB: status = ixgbe_init_ops_82599(hw); break; + case ixgbe_mac_X540: + status = ixgbe_init_ops_X540(hw); + break; case ixgbe_mac_82599_vf: case ixgbe_mac_X540_vf: status = ixgbe_init_ops_vf(hw); break; - case ixgbe_mac_X540: - status = ixgbe_init_ops_X540(hw); - break; default: status = IXGBE_ERR_DEVICE_NOT_SUPPORTED; break; @@ -93,6 +106,12 @@ s32 ixgbe_set_mac_type(struct ixgbe_hw *hw) DEBUGFUNC("ixgbe_set_mac_type\n"); + if (hw->vendor_id != IXGBE_INTEL_VENDOR_ID) { + ERROR_REPORT2(IXGBE_ERROR_UNSUPPORTED, + "Unsupported vendor id: %x", hw->vendor_id); + return IXGBE_ERR_DEVICE_NOT_SUPPORTED; + } + switch (hw->device_id) { case IXGBE_DEV_ID_82598: case IXGBE_DEV_ID_82598_BX: @@ -138,6 +157,9 @@ s32 ixgbe_set_mac_type(struct ixgbe_hw *hw) break; default: ret_val = IXGBE_ERR_DEVICE_NOT_SUPPORTED; + ERROR_REPORT2(IXGBE_ERROR_UNSUPPORTED, + "Unsupported device id: %x", + hw->device_id); break; } @@ -506,16 +528,14 @@ s32 ixgbe_check_phy_link(struct ixgbe_hw *hw, ixgbe_link_speed *speed, * ixgbe_setup_phy_link_speed - Set auto advertise * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * * Sets the auto advertised capabilities **/ s32 ixgbe_setup_phy_link_speed(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete) { return ixgbe_call_func(hw, hw->phy.ops.setup_link_speed, (hw, speed, - autoneg, autoneg_wait_to_complete), + autoneg_wait_to_complete), IXGBE_NOT_IMPLEMENTED); } @@ -575,17 +595,15 @@ void ixgbe_flap_tx_laser(struct ixgbe_hw *hw) * ixgbe_setup_link - Set link speed * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * * Configures link settings. Restarts the link. * Performs autonegotiation if needed. 
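Throughout this patch the explicit autoneg flag is dropped from the link-setup entry points, so ixgbe_setup_link() and ixgbe_setup_phy_link_speed() keep only the requested speed and the wait flag, as the hunks around this point show. A minimal caller-side sketch of that change, assuming an already initialised struct ixgbe_hw *hw (the call site itself is hypothetical, not taken from the diff):

	ixgbe_link_speed speed = IXGBE_LINK_SPEED_10GB_FULL |
				 IXGBE_LINK_SPEED_1GB_FULL;
	s32 err;

	/* old form: err = ixgbe_setup_link(hw, speed, true, false); */
	err = ixgbe_setup_link(hw, speed, false);
	if (err != IXGBE_SUCCESS)
		DEBUGOUT("ixgbe_setup_link failed\n");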
**/ s32 ixgbe_setup_link(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete) { return ixgbe_call_func(hw, hw->mac.ops.setup_link, (hw, speed, - autoneg, autoneg_wait_to_complete), + autoneg_wait_to_complete), IXGBE_NOT_IMPLEMENTED); } @@ -998,6 +1016,8 @@ s32 ixgbe_set_fw_drv_ver(struct ixgbe_hw *hw, u8 maj, u8 min, u8 build, } + + /** * ixgbe_read_analog_reg8 - Reads 8 bit analog register * @hw: pointer to hardware structure @@ -1178,3 +1198,15 @@ void ixgbe_release_swfw_semaphore(struct ixgbe_hw *hw, u16 mask) hw->mac.ops.release_swfw_sync(hw, mask); } + +void ixgbe_disable_rx(struct ixgbe_hw *hw) +{ + if (hw->mac.ops.disable_rx) + hw->mac.ops.disable_rx(hw); +} + +void ixgbe_enable_rx(struct ixgbe_hw *hw) +{ + if (hw->mac.ops.enable_rx) + hw->mac.ops.enable_rx(hw); +} diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.h index 048cde9..da41d95 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_api.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,7 +35,9 @@ POSSIBILITY OF SUCH DAMAGE. #define _IXGBE_API_H_ #include "ixgbe_type.h" -#ident "$Id: ixgbe_api.h,v 1.115 2012/08/23 23:30:15 jtkirshe Exp $" +#ident "$Id: ixgbe_api.h,v 1.123 2013/11/22 01:02:01 jtkirshe Exp $" + +void ixgbe_dcb_get_rtrup2tc(struct ixgbe_hw *hw, u8 *map); s32 ixgbe_init_shared_code(struct ixgbe_hw *hw); @@ -72,13 +74,12 @@ s32 ixgbe_check_phy_link(struct ixgbe_hw *hw, bool *link_up); s32 ixgbe_setup_phy_link_speed(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); void ixgbe_disable_tx_laser(struct ixgbe_hw *hw); void ixgbe_enable_tx_laser(struct ixgbe_hw *hw); void ixgbe_flap_tx_laser(struct ixgbe_hw *hw); s32 ixgbe_setup_link(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); + bool autoneg_wait_to_complete); s32 ixgbe_check_link(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *link_up, bool link_up_wait_to_complete); s32 ixgbe_get_link_capabilities(struct ixgbe_hw *hw, ixgbe_link_speed *speed, @@ -135,18 +136,20 @@ u32 ixgbe_get_supported_physical_layer(struct ixgbe_hw *hw); s32 ixgbe_enable_rx_dma(struct ixgbe_hw *hw, u32 regval); s32 ixgbe_disable_sec_rx_path(struct ixgbe_hw *hw); s32 ixgbe_enable_sec_rx_path(struct ixgbe_hw *hw); +s32 ixgbe_mng_fw_enabled(struct ixgbe_hw *hw); s32 ixgbe_reinit_fdir_tables_82599(struct ixgbe_hw *hw); s32 ixgbe_init_fdir_signature_82599(struct ixgbe_hw *hw, u32 fdirctrl); -s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl); +s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl, + bool cloud_mode); s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw, union ixgbe_atr_hash_dword input, union ixgbe_atr_hash_dword common, u8 queue); s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw, - union ixgbe_atr_input *input_mask); + union ixgbe_atr_input *input_mask, bool cloud_mode); s32 ixgbe_fdir_write_perfect_filter_82599(struct ixgbe_hw *hw, union ixgbe_atr_input *input, - u16 soft_id, u8 queue); + u16 soft_id, u8 queue, bool cloud_mode); s32 ixgbe_fdir_erase_perfect_filter_82599(struct ixgbe_hw *hw, union ixgbe_atr_input *input, u16 soft_id); @@ -154,7 +157,8 @@ s32 ixgbe_fdir_add_perfect_filter_82599(struct 
ixgbe_hw *hw, union ixgbe_atr_input *input, union ixgbe_atr_input *mask, u16 soft_id, - u8 queue); + u8 queue, + bool cloud_mode); void ixgbe_atr_compute_perfect_hash_82599(union ixgbe_atr_input *input, union ixgbe_atr_input *mask); u32 ixgbe_atr_compute_sig_hash_82599(union ixgbe_atr_hash_dword input, @@ -173,5 +177,7 @@ void ixgbe_release_swfw_semaphore(struct ixgbe_hw *hw, u16 mask); s32 ixgbe_get_wwn_prefix(struct ixgbe_hw *hw, u16 *wwnn_prefix, u16 *wwpn_prefix); s32 ixgbe_get_fcoe_boot_status(struct ixgbe_hw *hw, u16 *bs); +void ixgbe_disable_rx(struct ixgbe_hw *hw); +void ixgbe_enable_rx(struct ixgbe_hw *hw); #endif /* _IXGBE_API_H_ */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.c index 17e34f8..4e6f54f 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -33,29 +33,31 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_common.h" #include "ixgbe_phy.h" +#include "ixgbe_dcb.h" +#include "ixgbe_dcb_82599.h" #include "ixgbe_api.h" -#ident "$Id: ixgbe_common.c,v 1.349 2012/11/05 23:08:30 jtkirshe Exp $" - -static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw); -static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw); -static void ixgbe_release_eeprom_semaphore(struct ixgbe_hw *hw); -static s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw); -static void ixgbe_standby_eeprom(struct ixgbe_hw *hw); -static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data, +#ident "$Id: ixgbe_common.c,v 1.382 2013/11/22 01:02:01 jtkirshe Exp $" + +STATIC s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw); +STATIC s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw); +STATIC void ixgbe_release_eeprom_semaphore(struct ixgbe_hw *hw); +STATIC s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw); +STATIC void ixgbe_standby_eeprom(struct ixgbe_hw *hw); +STATIC void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data, u16 count); -static u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count); -static void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec); -static void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec); -static void ixgbe_release_eeprom(struct ixgbe_hw *hw); +STATIC u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count); +STATIC void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec); +STATIC void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec); +STATIC void ixgbe_release_eeprom(struct ixgbe_hw *hw); -static s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr); -static s32 ixgbe_get_san_mac_addr_offset(struct ixgbe_hw *hw, +STATIC s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr); +STATIC s32 ixgbe_get_san_mac_addr_offset(struct ixgbe_hw *hw, u16 *san_mac_offset); -static s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, +STATIC s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, u16 words, u16 *data); -static s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, +STATIC s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, u16 words, u16 *data); -static s32 ixgbe_detect_eeprom_page_size_generic(struct ixgbe_hw *hw, +STATIC s32 ixgbe_detect_eeprom_page_size_generic(struct ixgbe_hw *hw, u16 offset); /** @@ -104,6 +106,8 
@@ s32 ixgbe_init_ops_generic(struct ixgbe_hw *hw) mac->ops.set_lan_id = &ixgbe_set_lan_id_multi_port_pcie; mac->ops.acquire_swfw_sync = &ixgbe_acquire_swfw_sync; mac->ops.release_swfw_sync = &ixgbe_release_swfw_sync; + mac->ops.prot_autoc_read = &prot_autoc_read_generic; + mac->ops.prot_autoc_write = &prot_autoc_write_generic; /* LEDs */ mac->ops.led_on = &ixgbe_led_on_generic; @@ -126,6 +130,8 @@ s32 ixgbe_init_ops_generic(struct ixgbe_hw *hw) mac->ops.set_vfta = NULL; mac->ops.set_vlvf = NULL; mac->ops.init_uta_tables = NULL; + mac->ops.enable_rx = &ixgbe_enable_rx_generic; + mac->ops.disable_rx = &ixgbe_disable_rx_generic; /* Flow Control */ mac->ops.fc_enable = &ixgbe_fc_enable_generic; @@ -134,32 +140,62 @@ s32 ixgbe_init_ops_generic(struct ixgbe_hw *hw) mac->ops.get_link_capabilities = NULL; mac->ops.setup_link = NULL; mac->ops.check_link = NULL; + mac->ops.dmac_config = NULL; + mac->ops.dmac_update_tcs = NULL; + mac->ops.dmac_config_tcs = NULL; return IXGBE_SUCCESS; } /** - * ixgbe_device_supports_autoneg_fc - Check if phy supports autoneg flow - * control - * @hw: pointer to hardware structure + * ixgbe_device_supports_autoneg_fc - Check if device supports autonegotiation + * of flow control + * @hw: pointer to hardware structure + * + * This function returns true if the device supports flow control + * autonegotiation, and false if it does not. * - * There are several phys that do not support autoneg flow control. This - * function check the device id to see if the associated phy supports - * autoneg flow control. **/ -s32 ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw) +bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw) { + bool supported = false; + ixgbe_link_speed speed; + bool link_up; DEBUGFUNC("ixgbe_device_supports_autoneg_fc"); - switch (hw->device_id) { - case IXGBE_DEV_ID_X540T: - case IXGBE_DEV_ID_X540T1: - case IXGBE_DEV_ID_82599_T3_LOM: - return IXGBE_SUCCESS; + switch (hw->phy.media_type) { + case ixgbe_media_type_fiber: + hw->mac.ops.check_link(hw, &speed, &link_up, false); + /* if link is down, assume supported */ + if (link_up) + supported = speed == IXGBE_LINK_SPEED_1GB_FULL ? + true : false; + else + supported = true; + break; + case ixgbe_media_type_backplane: + supported = true; + break; + case ixgbe_media_type_copper: + /* only some copper devices support flow control autoneg */ + switch (hw->device_id) { + case IXGBE_DEV_ID_82599_T3_LOM: + case IXGBE_DEV_ID_X540T: + case IXGBE_DEV_ID_X540T1: + supported = true; + break; + default: + supported = false; + } default: - return IXGBE_ERR_FC_NOT_SUPPORTED; + break; } + + ERROR_REPORT2(IXGBE_ERROR_UNSUPPORTED, + "Device %x does not support flow control autoneg", + hw->device_id); + return supported; } /** @@ -168,12 +204,12 @@ s32 ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw) * * Called at init time to set up flow control. **/ -static s32 ixgbe_setup_fc(struct ixgbe_hw *hw) +STATIC s32 ixgbe_setup_fc(struct ixgbe_hw *hw) { s32 ret_val = IXGBE_SUCCESS; u32 reg = 0, reg_bp = 0; u16 reg_cu = 0; - bool got_lock = false; + bool locked = false; DEBUGFUNC("ixgbe_setup_fc"); @@ -182,7 +218,8 @@ static s32 ixgbe_setup_fc(struct ixgbe_hw *hw) * ixgbe_fc_rx_pause because it will cause us to fail at UNH. 
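Note that ixgbe_device_supports_autoneg_fc() above now returns bool rather than an s32 status, so callers drop the comparison against IXGBE_SUCCESS (the converted call sites appear further down in this file). A hypothetical out-of-tree caller, shown only to illustrate the conversion:

	/* was: if (ixgbe_device_supports_autoneg_fc(hw) == IXGBE_SUCCESS) */
	if (!ixgbe_device_supports_autoneg_fc(hw))
		hw->fc.disable_fc_autoneg = true;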
*/ if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) { - DEBUGOUT("ixgbe_fc_rx_pause not valid in strict IEEE mode\n"); + ERROR_REPORT1(IXGBE_ERROR_UNSUPPORTED, + "ixgbe_fc_rx_pause not valid in strict IEEE mode\n"); ret_val = IXGBE_ERR_INVALID_LINK_SETTINGS; goto out; } @@ -200,10 +237,16 @@ static s32 ixgbe_setup_fc(struct ixgbe_hw *hw) * we link at 10G, the 1G advertisement is harmless and vice versa. */ switch (hw->phy.media_type) { - case ixgbe_media_type_fiber: case ixgbe_media_type_backplane: + /* some MACs need RMW protection on AUTOC */ + ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &reg_bp); + if (ret_val != IXGBE_SUCCESS) + goto out; + + /* only backplane uses autoc so fall through */ + case ixgbe_media_type_fiber: reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANA); - reg_bp = IXGBE_READ_REG(hw, IXGBE_AUTOC); + break; case ixgbe_media_type_copper: hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_ADVT, @@ -268,13 +311,14 @@ static s32 ixgbe_setup_fc(struct ixgbe_hw *hw) reg_cu |= IXGBE_TAF_SYM_PAUSE | IXGBE_TAF_ASM_PAUSE; break; default: - DEBUGOUT("Flow control param set incorrectly\n"); + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, + "Flow control param set incorrectly\n"); ret_val = IXGBE_ERR_CONFIG; goto out; break; } - if (hw->mac.type != ixgbe_mac_X540) { + if (hw->mac.type < ixgbe_mac_X540) { /* * Enable auto-negotiation between the MAC & PHY; * the MAC will advertise clause 37 flow control. @@ -297,35 +341,16 @@ static s32 ixgbe_setup_fc(struct ixgbe_hw *hw) */ if (hw->phy.media_type == ixgbe_media_type_backplane) { reg_bp |= IXGBE_AUTOC_AN_RESTART; - /* Need the SW/FW semaphore around AUTOC writes if 82599 and - * LESM is on, likewise reset_pipeline requries the lock as - * it also writes AUTOC. - */ - if ((hw->mac.type == ixgbe_mac_82599EB) && - ixgbe_verify_lesm_fw_enabled_82599(hw)) { - ret_val = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (ret_val != IXGBE_SUCCESS) { - ret_val = IXGBE_ERR_SWFW_SYNC; - goto out; - } - got_lock = true; - } - - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, reg_bp); - if (hw->mac.type == ixgbe_mac_82599EB) - ixgbe_reset_pipeline_82599(hw); - - if (got_lock) - hw->mac.ops.release_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); + ret_val = hw->mac.ops.prot_autoc_write(hw, reg_bp, locked); + if (ret_val) + goto out; } else if ((hw->phy.media_type == ixgbe_media_type_copper) && - (ixgbe_device_supports_autoneg_fc(hw) == IXGBE_SUCCESS)) { + (ixgbe_device_supports_autoneg_fc(hw))) { hw->phy.ops.write_reg(hw, IXGBE_MDIO_AUTO_NEG_ADVT, IXGBE_MDIO_AUTO_NEG_DEV_TYPE, reg_cu); } - DEBUGOUT1("Set up FC; IXGBE_AUTOC = 0x%08X\n", reg); + DEBUGOUT1("Set up FC; PCS1GLCTL = 0x%08X\n", reg); out: return ret_val; } @@ -757,7 +782,7 @@ s32 ixgbe_read_pba_raw(struct ixgbe_hw *hw, u16 *eeprom_buf, return ret_val; } else { if (eeprom_buf_size > (u32)(pba->word[1] + - pba->pba_block[0])) { + pba_block_size)) { memcpy(pba->pba_block, &eeprom_buf[pba->word[1]], pba_block_size * sizeof(u16)); @@ -919,23 +944,18 @@ s32 ixgbe_get_mac_addr_generic(struct ixgbe_hw *hw, u8 *mac_addr) } /** - * ixgbe_get_bus_info_generic - Generic set PCI bus info + * ixgbe_set_pci_config_data_generic - Generic store PCI bus info * @hw: pointer to hardware structure + * @link_status: the link status returned by the PCI config space * - * Sets the PCI bus info (speed, width, type) within the ixgbe_hw structure + * Stores the PCI bus info (speed, width, type) within the ixgbe_hw structure **/ -s32 ixgbe_get_bus_info_generic(struct ixgbe_hw *hw) +void
ixgbe_set_pci_config_data_generic(struct ixgbe_hw *hw, u16 link_status) { struct ixgbe_mac_info *mac = &hw->mac; - u16 link_status; - - DEBUGFUNC("ixgbe_get_bus_info_generic"); hw->bus.type = ixgbe_bus_type_pci_express; - /* Get the negotiated link width and speed from PCI config space */ - link_status = IXGBE_READ_PCIE_WORD(hw, IXGBE_PCI_LINK_STATUS); - switch (link_status & IXGBE_PCI_LINK_WIDTH) { case IXGBE_PCI_LINK_WIDTH_1: hw->bus.width = ixgbe_bus_width_pcie_x1; @@ -970,6 +990,25 @@ s32 ixgbe_get_bus_info_generic(struct ixgbe_hw *hw) } mac->ops.set_lan_id(hw); +} + +/** + * ixgbe_get_bus_info_generic - Generic set PCI bus info + * @hw: pointer to hardware structure + * + * Gets the PCI bus info (speed, width, type) then calls helper function to + * store this data within the ixgbe_hw structure. + **/ +s32 ixgbe_get_bus_info_generic(struct ixgbe_hw *hw) +{ + u16 link_status; + + DEBUGFUNC("ixgbe_get_bus_info_generic"); + + /* Get the negotiated link width and speed from PCI config space */ + link_status = IXGBE_READ_PCIE_WORD(hw, IXGBE_PCI_LINK_STATUS); + + ixgbe_set_pci_config_data_generic(hw, link_status); return IXGBE_SUCCESS; } @@ -1211,7 +1250,7 @@ out: * If ixgbe_eeprom_update_checksum is not called after this function, the * EEPROM will most likely contain an invalid checksum. **/ -static s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, +STATIC s32 ixgbe_write_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, u16 words, u16 *data) { s32 status; @@ -1370,7 +1409,7 @@ out: * * Reads 16 bit word(s) from EEPROM through bit-bang method **/ -static s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, +STATIC s32 ixgbe_read_eeprom_buffer_bit_bang(struct ixgbe_hw *hw, u16 offset, u16 words, u16 *data) { s32 status; @@ -1469,16 +1508,18 @@ s32 ixgbe_read_eerd_buffer_generic(struct ixgbe_hw *hw, u16 offset, if (words == 0) { status = IXGBE_ERR_INVALID_ARGUMENT; + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, "Invalid EEPROM words"); goto out; } if (offset >= hw->eeprom.word_size) { status = IXGBE_ERR_EEPROM; + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, "Invalid EEPROM offset"); goto out; } for (i = 0; i < words; i++) { - eerd = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) + + eerd = ((offset + i) << IXGBE_EEPROM_RW_ADDR_SHIFT) | IXGBE_EEPROM_RW_REG_START; IXGBE_WRITE_REG(hw, IXGBE_EERD, eerd); @@ -1505,7 +1546,7 @@ out: * This function is called only when we are writing a new large buffer * at given offset so the data would be overwritten anyway. **/ -static s32 ixgbe_detect_eeprom_page_size_generic(struct ixgbe_hw *hw, +STATIC s32 ixgbe_detect_eeprom_page_size_generic(struct ixgbe_hw *hw, u16 offset) { u16 data[IXGBE_EEPROM_PAGE_SIZE_MAX]; @@ -1575,11 +1616,13 @@ s32 ixgbe_write_eewr_buffer_generic(struct ixgbe_hw *hw, u16 offset, if (words == 0) { status = IXGBE_ERR_INVALID_ARGUMENT; + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, "Invalid EEPROM words"); goto out; } if (offset >= hw->eeprom.word_size) { status = IXGBE_ERR_EEPROM; + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, "Invalid EEPROM offset"); goto out; } @@ -1648,6 +1691,11 @@ s32 ixgbe_poll_eerd_eewr_done(struct ixgbe_hw *hw, u32 ee_reg) } usec_delay(5); } + + if (i == IXGBE_EERD_EEWR_ATTEMPTS) + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "EEPROM read/write done polling timed out"); + return status; } @@ -1658,7 +1706,7 @@ s32 ixgbe_poll_eerd_eewr_done(struct ixgbe_hw *hw, u32 ee_reg) * Prepares EEPROM for access using bit-bang method. This function should * be called before issuing a command to the EEPROM. 
**/ -static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw) +STATIC s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw) { s32 status = IXGBE_SUCCESS; u32 eec; @@ -1712,7 +1760,7 @@ static s32 ixgbe_acquire_eeprom(struct ixgbe_hw *hw) * * Sets the hardware semaphores so EEPROM access can occur for bit-bang method **/ -static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw) +STATIC s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw) { s32 status = IXGBE_ERR_EEPROM; u32 timeout = 2000; @@ -1783,14 +1831,15 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw) * was not granted because we don't have access to the EEPROM */ if (i >= timeout) { - DEBUGOUT("SWESMBI Software EEPROM semaphore " - "not granted.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "SWESMBI Software EEPROM semaphore not granted.\n"); ixgbe_release_eeprom_semaphore(hw); status = IXGBE_ERR_EEPROM; } } else { - DEBUGOUT("Software semaphore SMBI between device drivers " - "not granted.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Software semaphore SMBI between device drivers " + "not granted.\n"); } return status; @@ -1802,7 +1851,7 @@ static s32 ixgbe_get_eeprom_semaphore(struct ixgbe_hw *hw) * * This function clears hardware semaphore bits. **/ -static void ixgbe_release_eeprom_semaphore(struct ixgbe_hw *hw) +STATIC void ixgbe_release_eeprom_semaphore(struct ixgbe_hw *hw) { u32 swsm; @@ -1820,7 +1869,7 @@ static void ixgbe_release_eeprom_semaphore(struct ixgbe_hw *hw) * ixgbe_ready_eeprom - Polls for EEPROM ready * @hw: pointer to hardware structure **/ -static s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw) +STATIC s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw) { s32 status = IXGBE_SUCCESS; u16 i; @@ -1861,7 +1910,7 @@ static s32 ixgbe_ready_eeprom(struct ixgbe_hw *hw) * ixgbe_standby_eeprom - Returns EEPROM to a "standby" state * @hw: pointer to hardware structure **/ -static void ixgbe_standby_eeprom(struct ixgbe_hw *hw) +STATIC void ixgbe_standby_eeprom(struct ixgbe_hw *hw) { u32 eec; @@ -1886,7 +1935,7 @@ static void ixgbe_standby_eeprom(struct ixgbe_hw *hw) * @data: data to send to the EEPROM * @count: number of bits to shift out **/ -static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data, +STATIC void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data, u16 count) { u32 eec; @@ -1941,7 +1990,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data, * ixgbe_shift_in_eeprom_bits - Shift data bits in from the EEPROM * @hw: pointer to hardware structure **/ -static u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count) +STATIC u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count) { u32 eec; u32 i; @@ -1981,7 +2030,7 @@ static u16 ixgbe_shift_in_eeprom_bits(struct ixgbe_hw *hw, u16 count) * @hw: pointer to hardware structure * @eec: EEC register's current value **/ -static void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) +STATIC void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) { DEBUGFUNC("ixgbe_raise_eeprom_clk"); @@ -2000,7 +2049,7 @@ static void ixgbe_raise_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) * @hw: pointer to hardware structure * @eecd: EECD's current value **/ -static void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) +STATIC void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) { DEBUGFUNC("ixgbe_lower_eeprom_clk"); @@ -2018,7 +2067,7 @@ static void ixgbe_lower_eeprom_clk(struct ixgbe_hw *hw, u32 *eec) * ixgbe_release_eeprom - Release EEPROM, release semaphores * @hw: pointer to hardware structure **/ -static void 
ixgbe_release_eeprom(struct ixgbe_hw *hw) +STATIC void ixgbe_release_eeprom(struct ixgbe_hw *hw) { u32 eec; @@ -2214,7 +2263,8 @@ s32 ixgbe_set_rar_generic(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { - DEBUGOUT1("RAR index %d is out of range.\n", index); + ERROR_REPORT2(IXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", index); return IXGBE_ERR_INVALID_ARGUMENT; } @@ -2263,7 +2313,8 @@ s32 ixgbe_clear_rar_generic(struct ixgbe_hw *hw, u32 index) /* Make sure we are using a valid rar index range */ if (index >= rar_entries) { - DEBUGOUT1("RAR index %d is out of range.\n", index); + ERROR_REPORT2(IXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", index); return IXGBE_ERR_INVALID_ARGUMENT; } @@ -2467,7 +2518,7 @@ s32 ixgbe_update_uc_addr_list_generic(struct ixgbe_hw *hw, u8 *addr_list, * by the MO field of the MCSTCTRL. The MO field is set during initialization * to mc_filter_type. **/ -static s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr) +STATIC s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr) { u32 vector = 0; @@ -2706,7 +2757,8 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw) fccfg_reg |= IXGBE_FCCFG_TFCE_802_3X; break; default: - DEBUGOUT("Flow control param set incorrectly\n"); + ERROR_REPORT1(IXGBE_ERROR_ARGUMENT, + "Flow control param set incorrectly\n"); ret_val = IXGBE_ERR_CONFIG; goto out; break; @@ -2730,10 +2782,11 @@ s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw) /* * In order to prevent Tx hangs when the internal Tx * switch is enabled we must set the high water mark - * to the maximum FCRTH value. This allows the Tx - * switch to function even under heavy Rx workloads. + * to the Rx packet buffer size - 24KB. This allows + * the Tx switch to function even under heavy Rx + * workloads. */ - fcrth = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i)) - 32; + fcrth = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i)) - 24576; } IXGBE_WRITE_REG(hw, IXGBE_FCRTH_82599(i), fcrth); @@ -2764,11 +2817,16 @@ out: * Find the intersection between advertised settings and link partner's * advertised settings **/ -static s32 ixgbe_negotiate_fc(struct ixgbe_hw *hw, u32 adv_reg, u32 lp_reg, +STATIC s32 ixgbe_negotiate_fc(struct ixgbe_hw *hw, u32 adv_reg, u32 lp_reg, u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm) { - if ((!(adv_reg)) || (!(lp_reg))) + if ((!(adv_reg)) || (!(lp_reg))) { + ERROR_REPORT3(IXGBE_ERROR_UNSUPPORTED, + "Local or link partner's advertised flow control " + "settings are NULL. Local: %x, link partner: %x\n", + adv_reg, lp_reg); return IXGBE_ERR_FC_NOT_NEGOTIATED; + } if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) { /* @@ -2806,7 +2864,7 @@ static s32 ixgbe_negotiate_fc(struct ixgbe_hw *hw, u32 adv_reg, u32 lp_reg, * * Enable flow control according on 1 gig fiber. 
**/ -static s32 ixgbe_fc_autoneg_fiber(struct ixgbe_hw *hw) +STATIC s32 ixgbe_fc_autoneg_fiber(struct ixgbe_hw *hw) { u32 pcs_anadv_reg, pcs_lpab_reg, linkstat; s32 ret_val = IXGBE_ERR_FC_NOT_NEGOTIATED; @@ -2819,8 +2877,11 @@ static s32 ixgbe_fc_autoneg_fiber(struct ixgbe_hw *hw) linkstat = IXGBE_READ_REG(hw, IXGBE_PCS1GLSTA); if ((!!(linkstat & IXGBE_PCS1GLSTA_AN_COMPLETE) == 0) || - (!!(linkstat & IXGBE_PCS1GLSTA_AN_TIMED_OUT) == 1)) + (!!(linkstat & IXGBE_PCS1GLSTA_AN_TIMED_OUT) == 1)) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Auto-Negotiation did not complete or timed out"); goto out; + } pcs_anadv_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANA); pcs_lpab_reg = IXGBE_READ_REG(hw, IXGBE_PCS1GANLP); @@ -2841,7 +2902,7 @@ out: * * Enable flow control according to IEEE clause 37. **/ -static s32 ixgbe_fc_autoneg_backplane(struct ixgbe_hw *hw) +STATIC s32 ixgbe_fc_autoneg_backplane(struct ixgbe_hw *hw) { u32 links2, anlp1_reg, autoc_reg, links; s32 ret_val = IXGBE_ERR_FC_NOT_NEGOTIATED; @@ -2852,13 +2913,19 @@ static s32 ixgbe_fc_autoneg_backplane(struct ixgbe_hw *hw) * - we are 82599 and link partner is not AN enabled */ links = IXGBE_READ_REG(hw, IXGBE_LINKS); - if ((links & IXGBE_LINKS_KX_AN_COMP) == 0) + if ((links & IXGBE_LINKS_KX_AN_COMP) == 0) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Auto-Negotiation did not complete"); goto out; + } if (hw->mac.type == ixgbe_mac_82599EB) { links2 = IXGBE_READ_REG(hw, IXGBE_LINKS2); - if ((links2 & IXGBE_LINKS2_AN_SUPPORTED) == 0) + if ((links2 & IXGBE_LINKS2_AN_SUPPORTED) == 0) { + ERROR_REPORT1(IXGBE_ERROR_UNSUPPORTED, + "Link partner is not AN enabled"); goto out; + } } /* * Read the 10g AN autoc and LP ability registers and resolve @@ -2881,7 +2948,7 @@ out: * * Enable flow control according to IEEE clause 37. **/ -static s32 ixgbe_fc_autoneg_copper(struct ixgbe_hw *hw) +STATIC s32 ixgbe_fc_autoneg_copper(struct ixgbe_hw *hw) { u16 technology_ability_reg = 0; u16 lp_technology_ability_reg = 0; @@ -2920,12 +2987,17 @@ void ixgbe_fc_autoneg(struct ixgbe_hw *hw) * - FC autoneg is disabled, or if * - link is not up. */ - if (hw->fc.disable_fc_autoneg) + if (hw->fc.disable_fc_autoneg) { + ERROR_REPORT1(IXGBE_ERROR_UNSUPPORTED, + "Flow control autoneg is disabled"); goto out; + } hw->mac.ops.check_link(hw, &speed, &link_up, false); - if (!link_up) + if (!link_up) { + ERROR_REPORT1(IXGBE_ERROR_SOFTWARE, "The link is down"); goto out; + } switch (hw->phy.media_type) { /* Autoneg flow control on fiber adapters */ @@ -2941,7 +3013,7 @@ void ixgbe_fc_autoneg(struct ixgbe_hw *hw) /* Autoneg flow control on copper adapters */ case ixgbe_media_type_copper: - if (ixgbe_device_supports_autoneg_fc(hw) == IXGBE_SUCCESS) + if (ixgbe_device_supports_autoneg_fc(hw)) ret_val = ixgbe_fc_autoneg_copper(hw); break; @@ -2958,6 +3030,53 @@ out: } } +/* + * ixgbe_pcie_timeout_poll - Return number of times to poll for completion + * @hw: pointer to hardware structure + * + * System-wide timeout range is encoded in PCIe Device Control2 register. + * + * Add 10% to specified maximum and return the number of times to poll for + * completion timeout, in units of 100 microsec. Never return less than + * 800 = 80 millisec. 
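As a worked example of that conversion, using the encodings handled in the switch that follows: the 65-130 ms setting returns 1300 poll units of 100 microseconds each (130 ms), and the 10% margin turns that into (1300 * 11) / 10 = 1430 iterations, i.e. roughly 143 ms of polling, while the shorter encodings (100 us through 32 ms) and the default case all fall back to the 800-unit / 80 ms floor.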
+ */ +STATIC u32 ixgbe_pcie_timeout_poll(struct ixgbe_hw *hw) +{ + s16 devctl2; + u32 pollcnt; + + devctl2 = IXGBE_READ_PCIE_WORD(hw, IXGBE_PCI_DEVICE_CONTROL2); + devctl2 &= IXGBE_PCIDEVCTRL2_TIMEO_MASK; + + switch (devctl2) { + case IXGBE_PCIDEVCTRL2_65_130ms: + pollcnt = 1300; /* 130 millisec */ + break; + case IXGBE_PCIDEVCTRL2_260_520ms: + pollcnt = 5200; /* 520 millisec */ + break; + case IXGBE_PCIDEVCTRL2_1_2s: + pollcnt = 20000; /* 2 sec */ + break; + case IXGBE_PCIDEVCTRL2_4_8s: + pollcnt = 80000; /* 8 sec */ + break; + case IXGBE_PCIDEVCTRL2_17_34s: + pollcnt = 34000; /* 34 sec */ + break; + case IXGBE_PCIDEVCTRL2_50_100us: /* 100 microsecs */ + case IXGBE_PCIDEVCTRL2_1_2ms: /* 2 millisecs */ + case IXGBE_PCIDEVCTRL2_16_32ms: /* 32 millisec */ + case IXGBE_PCIDEVCTRL2_16_32ms_def: /* 32 millisec default */ + default: + pollcnt = 800; /* 80 millisec minimum */ + break; + } + + /* add 10% to spec maximum */ + return (pollcnt * 11) / 10; +} + /** * ixgbe_disable_pcie_master - Disable PCI-express master access * @hw: pointer to hardware structure @@ -2970,15 +3089,17 @@ out: s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw) { s32 status = IXGBE_SUCCESS; - u32 i; + u32 i, poll; + u16 value; DEBUGFUNC("ixgbe_disable_pcie_master"); /* Always set this bit to ensure any future transactions are blocked */ IXGBE_WRITE_REG(hw, IXGBE_CTRL, IXGBE_CTRL_GIO_DIS); - /* Exit if master requets are blocked */ - if (!(IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_GIO)) + /* Exit if master requests are blocked */ + if (!(IXGBE_READ_REG(hw, IXGBE_STATUS) & IXGBE_STATUS_GIO) || + IXGBE_REMOVED(hw->hw_addr)) goto out; /* Poll for master request bit to clear */ @@ -3003,14 +3124,18 @@ s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw) * Before proceeding, make sure that the PCIe block does not have * transactions pending. 
*/ - for (i = 0; i < IXGBE_PCI_MASTER_DISABLE_TIMEOUT; i++) { usec_delay(100); - if (!(IXGBE_READ_PCIE_WORD(hw, IXGBE_PCI_DEVICE_STATUS) & - IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING)) + poll = ixgbe_pcie_timeout_poll(hw); + for (i = 0; i < poll; i++) { + usec_delay(100); + value = IXGBE_READ_PCIE_WORD(hw, IXGBE_PCI_DEVICE_STATUS); + if (IXGBE_REMOVED(hw->hw_addr)) + goto out; + if (!(value & IXGBE_PCI_DEVICE_STATUS_TRANSACTION_PENDING)) goto out; } - DEBUGOUT("PCIe transaction pending bit also did not clear.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "PCIe transaction pending bit also did not clear.\n"); status = IXGBE_ERR_MASTER_REQUESTS_PENDING; out: @@ -3027,44 +3152,41 @@ out: **/ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u16 mask) { - u32 gssr; + u32 gssr = 0; u32 swmask = mask; u32 fwmask = mask << 5; - s32 timeout = 200; + u32 timeout = 200; + u32 i; DEBUGFUNC("ixgbe_acquire_swfw_sync"); - while (timeout) { + for (i = 0; i < timeout; i++) { /* - * SW EEPROM semaphore bit is used for access to all - * SW_FW_SYNC/GSSR bits (not just EEPROM) + * SW NVM semaphore bit is used for access to all + * SW_FW_SYNC bits (not just NVM) */ if (ixgbe_get_eeprom_semaphore(hw)) return IXGBE_ERR_SWFW_SYNC; gssr = IXGBE_READ_REG(hw, IXGBE_GSSR); - if (!(gssr & (fwmask | swmask))) - break; - - /* - * Firmware currently using resource (fwmask) or other software - * thread currently using resource (swmask) - */ - ixgbe_release_eeprom_semaphore(hw); - msec_delay(5); - timeout--; - } - - if (!timeout) { - DEBUGOUT("Driver can't access resource, SW_FW_SYNC timeout.\n"); - return IXGBE_ERR_SWFW_SYNC; + if (!(gssr & (fwmask | swmask))) { + gssr |= swmask; + IXGBE_WRITE_REG(hw, IXGBE_GSSR, gssr); + ixgbe_release_eeprom_semaphore(hw); + return IXGBE_SUCCESS; + } else { + /* Resource is currently in use by FW or SW */ + ixgbe_release_eeprom_semaphore(hw); + msec_delay(5); + } } - gssr |= swmask; - IXGBE_WRITE_REG(hw, IXGBE_GSSR, gssr); + /* If time expired clear the bits holding the lock and retry */ + if (gssr & (fwmask | swmask)) + ixgbe_release_swfw_sync(hw, gssr & (fwmask | swmask)); - ixgbe_release_eeprom_semaphore(hw); - return IXGBE_SUCCESS; + msec_delay(5); + return IXGBE_ERR_SWFW_SYNC; } /** @@ -3129,6 +3251,37 @@ s32 ixgbe_disable_sec_rx_path_generic(struct ixgbe_hw *hw) } /** + * prot_autoc_read_generic - Hides MAC differences needed for AUTOC read + * @hw: pointer to hardware structure + * @reg_val: Value we read from AUTOC + * + * The default case requires no protection so just do the register read. + */ +s32 prot_autoc_read_generic(struct ixgbe_hw *hw, bool *locked, u32 *reg_val) +{ + *locked = false; + *reg_val = IXGBE_READ_REG(hw, IXGBE_AUTOC); + return IXGBE_SUCCESS; +} + +/** + * prot_autoc_write_generic - Hides MAC differences needed for AUTOC write + * @hw: pointer to hardware structure + * @reg_val: value to write to AUTOC + * @locked: bool to indicate whether the SW/FW lock was already taken by + * a previous read. + * + * The default case requires no protection so just do the register write.
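For reference, the call sites converted in this patch (ixgbe_setup_fc(), ixgbe_blink_led_start_generic()/ixgbe_blink_led_stop_generic() and ixgbe_setup_mac_link_82599()) all follow the same read-modify-write pattern with this pair of ops. A minimal sketch of that pattern, not part of the diff and using a made-up helper name:

	static s32 example_restart_an(struct ixgbe_hw *hw)
	{
		u32 autoc;
		bool locked = false;
		s32 ret_val;

		/* may take the SW/FW semaphore on MACs that need it (82599 with LESM) */
		ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc);
		if (ret_val != IXGBE_SUCCESS)
			return ret_val;

		autoc |= IXGBE_AUTOC_AN_RESTART;

		/* the write side performs the update and, on those MACs, releases
		 * the semaphore taken by the read
		 */
		return hw->mac.ops.prot_autoc_write(hw, autoc, locked);
	}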
+ */ +s32 prot_autoc_write_generic(struct ixgbe_hw *hw, u32 reg_val, bool locked) +{ + UNREFERENCED_1PARAMETER(locked); + + IXGBE_WRITE_REG(hw, IXGBE_AUTOC, reg_val); + return IXGBE_SUCCESS; +} + +/** * ixgbe_enable_sec_rx_path_generic - Enables the receive data path * @hw: pointer to hardware structure * @@ -3159,7 +3312,10 @@ s32 ixgbe_enable_rx_dma_generic(struct ixgbe_hw *hw, u32 regval) { DEBUGFUNC("ixgbe_enable_rx_dma_generic"); - IXGBE_WRITE_REG(hw, IXGBE_RXCTRL, regval); + if (regval & IXGBE_RXCTRL_RXEN) + ixgbe_enable_rx(hw); + else + ixgbe_disable_rx(hw); return IXGBE_SUCCESS; } @@ -3173,9 +3329,10 @@ s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index) { ixgbe_link_speed speed = 0; bool link_up = 0; - u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC); + u32 autoc_reg = 0; u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL); s32 ret_val = IXGBE_SUCCESS; + bool locked = false; DEBUGFUNC("ixgbe_blink_led_start_generic"); @@ -3186,29 +3343,18 @@ s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index) hw->mac.ops.check_link(hw, &speed, &link_up, false); if (!link_up) { - /* Need the SW/FW semaphore around AUTOC writes if 82599 and - * LESM is on. - */ - bool got_lock = false; - if ((hw->mac.type == ixgbe_mac_82599EB) && - ixgbe_verify_lesm_fw_enabled_82599(hw)) { - ret_val = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (ret_val != IXGBE_SUCCESS) { - ret_val = IXGBE_ERR_SWFW_SYNC; - goto out; - } - got_lock = true; - } + ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc_reg); + if (ret_val != IXGBE_SUCCESS) + goto out; autoc_reg |= IXGBE_AUTOC_AN_RESTART; autoc_reg |= IXGBE_AUTOC_FLU; - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg); - IXGBE_WRITE_FLUSH(hw); - if (got_lock) - hw->mac.ops.release_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); + ret_val = hw->mac.ops.prot_autoc_write(hw, autoc_reg, locked); + if (ret_val != IXGBE_SUCCESS) + goto out; + + IXGBE_WRITE_FLUSH(hw); msec_delay(10); } @@ -3228,36 +3374,23 @@ out: **/ s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index) { - u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC); + u32 autoc_reg = 0; u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL); s32 ret_val = IXGBE_SUCCESS; - bool got_lock = false; + bool locked = false; DEBUGFUNC("ixgbe_blink_led_stop_generic"); - /* Need the SW/FW semaphore around AUTOC writes if 82599 and - * LESM is on. - */ - if ((hw->mac.type == ixgbe_mac_82599EB) && - ixgbe_verify_lesm_fw_enabled_82599(hw)) { - ret_val = hw->mac.ops.acquire_swfw_sync(hw, - IXGBE_GSSR_MAC_CSR_SM); - if (ret_val != IXGBE_SUCCESS) { - ret_val = IXGBE_ERR_SWFW_SYNC; - goto out; - } - got_lock = true; - } + ret_val = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc_reg); + if (ret_val != IXGBE_SUCCESS) + goto out; autoc_reg &= ~IXGBE_AUTOC_FLU; autoc_reg |= IXGBE_AUTOC_AN_RESTART; - IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg); - - if (hw->mac.type == ixgbe_mac_82599EB) - ixgbe_reset_pipeline_82599(hw); - if (got_lock) - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_MAC_CSR_SM); + ret_val = hw->mac.ops.prot_autoc_write(hw, autoc_reg, locked); + if (ret_val != IXGBE_SUCCESS) + goto out; led_reg &= ~IXGBE_LED_MODE_MASK(index); led_reg &= ~IXGBE_LED_BLINK(index); @@ -3278,18 +3411,26 @@ out: * pointer, and returns the value at that location. This is used in both * get and set mac_addr routines. 
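[Editor's aside, not part of the patch: the blink-LED paths above now go through the prot_autoc_read/prot_autoc_write MAC ops instead of taking the SW/FW semaphore by hand around AUTOC writes, so the 82599 LESM locking detail stays behind the ops table. A minimal sketch of that read-modify-write pattern, assuming the ixgbe shared-code headers and a populated mac.ops table; the function name is hypothetical:]

/* Sketch only: force link up via the protected AUTOC accessors. */
static s32 example_force_link_up(struct ixgbe_hw *hw)
{
        bool locked = false;
        u32 autoc = 0;
        s32 ret;

        ret = hw->mac.ops.prot_autoc_read(hw, &locked, &autoc);
        if (ret != IXGBE_SUCCESS)
                return ret;

        autoc |= IXGBE_AUTOC_FLU | IXGBE_AUTOC_AN_RESTART;

        /* the locked flag tells the write hook whether the paired read
         * already took the SW/FW lock */
        return hw->mac.ops.prot_autoc_write(hw, autoc, locked);
}
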
**/ -static s32 ixgbe_get_san_mac_addr_offset(struct ixgbe_hw *hw, +STATIC s32 ixgbe_get_san_mac_addr_offset(struct ixgbe_hw *hw, u16 *san_mac_offset) { + s32 ret_val; + DEBUGFUNC("ixgbe_get_san_mac_addr_offset"); /* * First read the EEPROM pointer to see if the MAC addresses are * available. */ - hw->eeprom.ops.read(hw, IXGBE_SAN_MAC_ADDR_PTR, san_mac_offset); + ret_val = hw->eeprom.ops.read(hw, IXGBE_SAN_MAC_ADDR_PTR, + san_mac_offset); + if (ret_val) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom at offset %d failed", + IXGBE_SAN_MAC_ADDR_PTR); + } - return IXGBE_SUCCESS; + return ret_val; } /** @@ -3306,6 +3447,7 @@ s32 ixgbe_get_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr) { u16 san_mac_data, san_mac_offset; u8 i; + s32 ret_val; DEBUGFUNC("ixgbe_get_san_mac_addr_generic"); @@ -3313,18 +3455,9 @@ s32 ixgbe_get_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr) * First read the EEPROM pointer to see if the MAC addresses are * available. If they're not, no point in calling set_lan_id() here. */ - ixgbe_get_san_mac_addr_offset(hw, &san_mac_offset); - - if ((san_mac_offset == 0) || (san_mac_offset == 0xFFFF)) { - /* - * No addresses available in this EEPROM. It's not an - * error though, so just wipe the local address and return. - */ - for (i = 0; i < 6; i++) - san_mac_addr[i] = 0xFF; - + ret_val = ixgbe_get_san_mac_addr_offset(hw, &san_mac_offset); + if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF) goto san_mac_addr_out; - } /* make sure we know which port we need to program */ hw->mac.ops.set_lan_id(hw); @@ -3332,13 +3465,27 @@ s32 ixgbe_get_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr) (hw->bus.func) ? (san_mac_offset += IXGBE_SAN_MAC_ADDR_PORT1_OFFSET) : (san_mac_offset += IXGBE_SAN_MAC_ADDR_PORT0_OFFSET); for (i = 0; i < 3; i++) { - hw->eeprom.ops.read(hw, san_mac_offset, &san_mac_data); + ret_val = hw->eeprom.ops.read(hw, san_mac_offset, + &san_mac_data); + if (ret_val) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", + san_mac_offset); + goto san_mac_addr_out; + } san_mac_addr[i * 2] = (u8)(san_mac_data); san_mac_addr[i * 2 + 1] = (u8)(san_mac_data >> 8); san_mac_offset++; } + return IXGBE_SUCCESS; san_mac_addr_out: + /* + * No addresses available in this EEPROM. It's not an + * error though, so just wipe the local address and return. + */ + for (i = 0; i < 6; i++) + san_mac_addr[i] = 0xFF; return IXGBE_SUCCESS; } @@ -3351,19 +3498,16 @@ san_mac_addr_out: **/ s32 ixgbe_set_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr) { - s32 status = IXGBE_SUCCESS; + s32 ret_val; u16 san_mac_data, san_mac_offset; u8 i; DEBUGFUNC("ixgbe_set_san_mac_addr_generic"); /* Look for SAN mac address pointer. 
If not defined, return */ - ixgbe_get_san_mac_addr_offset(hw, &san_mac_offset); - - if ((san_mac_offset == 0) || (san_mac_offset == 0xFFFF)) { - status = IXGBE_ERR_NO_SAN_ADDR_PTR; - goto san_mac_addr_out; - } + ret_val = ixgbe_get_san_mac_addr_offset(hw, &san_mac_offset); + if (ret_val || san_mac_offset == 0 || san_mac_offset == 0xFFFF) + return IXGBE_ERR_NO_SAN_ADDR_PTR; /* Make sure we know which port we need to write */ hw->mac.ops.set_lan_id(hw); @@ -3378,8 +3522,7 @@ s32 ixgbe_set_san_mac_addr_generic(struct ixgbe_hw *hw, u8 *san_mac_addr) san_mac_offset++; } -san_mac_addr_out: - return status; + return IXGBE_SUCCESS; } /** @@ -3411,6 +3554,8 @@ u16 ixgbe_get_pcie_msix_count_generic(struct ixgbe_hw *hw) DEBUGFUNC("ixgbe_get_pcie_msix_count_generic"); msix_count = IXGBE_READ_PCIE_WORD(hw, pcie_offset); + if (IXGBE_REMOVED(hw->hw_addr)) + msix_count = 0; msix_count &= IXGBE_PCIE_MSIX_TBL_SZ_MASK; /* MSI-X count is zero-based in HW */ @@ -3506,13 +3651,17 @@ s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq) /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { - DEBUGOUT1("RAR index %d is out of range.\n", rar); + ERROR_REPORT2(IXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", rar); return IXGBE_ERR_INVALID_ARGUMENT; } mpsar_lo = IXGBE_READ_REG(hw, IXGBE_MPSAR_LO(rar)); mpsar_hi = IXGBE_READ_REG(hw, IXGBE_MPSAR_HI(rar)); + if (IXGBE_REMOVED(hw->hw_addr)) + goto done; + if (!mpsar_lo && !mpsar_hi) goto done; @@ -3555,7 +3704,8 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq) /* Make sure we are using a valid rar index range */ if (rar >= rar_entries) { - DEBUGOUT1("RAR index %d is out of range.\n", rar); + ERROR_REPORT2(IXGBE_ERROR_ARGUMENT, + "RAR index %d is out of range.\n", rar); return IXGBE_ERR_INVALID_ARGUMENT; } @@ -3654,7 +3804,8 @@ s32 ixgbe_find_vlvf_slot(struct ixgbe_hw *hw, u32 vlan) if (first_empty_slot) regindex = first_empty_slot; else { - DEBUGOUT("No space in VLVF.\n"); + ERROR_REPORT1(IXGBE_ERROR_SOFTWARE, + "No space in VLVF.\n"); regindex = IXGBE_ERR_NO_SPACE; } } @@ -3944,8 +4095,9 @@ s32 ixgbe_get_wwn_prefix_generic(struct ixgbe_hw *hw, u16 *wwnn_prefix, *wwpn_prefix = 0xFFFF; /* check if alternative SAN MAC is supported */ - hw->eeprom.ops.read(hw, IXGBE_ALT_SAN_MAC_ADDR_BLK_PTR, - &alt_san_mac_blk_offset); + offset = IXGBE_ALT_SAN_MAC_ADDR_BLK_PTR; + if (hw->eeprom.ops.read(hw, offset, &alt_san_mac_blk_offset)) + goto wwn_prefix_err; if ((alt_san_mac_blk_offset == 0) || (alt_san_mac_blk_offset == 0xFFFF)) @@ -3953,19 +4105,29 @@ s32 ixgbe_get_wwn_prefix_generic(struct ixgbe_hw *hw, u16 *wwnn_prefix, /* check capability in alternative san mac address block */ offset = alt_san_mac_blk_offset + IXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET; - hw->eeprom.ops.read(hw, offset, &caps); + if (hw->eeprom.ops.read(hw, offset, &caps)) + goto wwn_prefix_err; if (!(caps & IXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN)) goto wwn_prefix_out; /* get the corresponding prefix for WWNN/WWPN */ offset = alt_san_mac_blk_offset + IXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET; - hw->eeprom.ops.read(hw, offset, wwnn_prefix); + if (hw->eeprom.ops.read(hw, offset, wwnn_prefix)) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", offset); + } offset = alt_san_mac_blk_offset + IXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET; - hw->eeprom.ops.read(hw, offset, wwpn_prefix); + if (hw->eeprom.ops.read(hw, offset, wwpn_prefix)) + goto wwn_prefix_err; wwn_prefix_out: return IXGBE_SUCCESS; + +wwn_prefix_err: + 
ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", offset); + return IXGBE_SUCCESS; } /** @@ -4135,7 +4297,7 @@ void ixgbe_enable_relaxed_ordering_gen2(struct ixgbe_hw *hw) * Calculates the checksum for some buffer on a specified length. The * checksum calculated is returned. **/ -static u8 ixgbe_calculate_checksum(u8 *buffer, u32 length) +u8 ixgbe_calculate_checksum(u8 *buffer, u32 length) { u32 i; u8 sum = 0; @@ -4161,8 +4323,8 @@ static u8 ixgbe_calculate_checksum(u8 *buffer, u32 length) * Communicates with the manageability block. On success return IXGBE_SUCCESS * else return IXGBE_ERR_HOST_INTERFACE_COMMAND. **/ -static s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer, - u32 length) +s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer, + u32 length) { u32 hicr, i, bi; u32 hdr_size = sizeof(struct ixgbe_hic_hdr); @@ -4411,3 +4573,61 @@ void ixgbe_clear_tx_pending(struct ixgbe_hw *hw) IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0); } + +/** + * ixgbe_dcb_get_rtrup2tc_generic - read rtrup2tc reg + * @hw: pointer to hardware structure + * @map: pointer to u8 arr for returning map + * + * Read the rtrup2tc HW register and resolve its content into map + **/ +void ixgbe_dcb_get_rtrup2tc_generic(struct ixgbe_hw *hw, u8 *map) +{ + u32 reg, i; + + reg = IXGBE_READ_REG(hw, IXGBE_RTRUP2TC); + for (i = 0; i < IXGBE_DCB_MAX_USER_PRIORITY; i++) + map[i] = IXGBE_RTRUP2TC_UP_MASK & + (reg >> (i * IXGBE_RTRUP2TC_UP_SHIFT)); + return; +} + +void ixgbe_disable_rx_generic(struct ixgbe_hw *hw) +{ + u32 pfdtxgswc; + u32 rxctrl; + + rxctrl = IXGBE_READ_REG(hw, IXGBE_RXCTRL); + if (rxctrl & IXGBE_RXCTRL_RXEN) { + if (hw->mac.type != ixgbe_mac_82598EB) { + pfdtxgswc = IXGBE_READ_REG(hw, IXGBE_PFDTXGSWC); + if (pfdtxgswc & IXGBE_PFDTXGSWC_VT_LBEN) { + pfdtxgswc &= ~IXGBE_PFDTXGSWC_VT_LBEN; + IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, pfdtxgswc); + hw->mac.set_lben = true; + } else { + hw->mac.set_lben = false; + } + } + rxctrl &= ~IXGBE_RXCTRL_RXEN; + IXGBE_WRITE_REG(hw, IXGBE_RXCTRL, rxctrl); + } +} + +void ixgbe_enable_rx_generic(struct ixgbe_hw *hw) +{ + u32 pfdtxgswc; + u32 rxctrl; + + rxctrl = IXGBE_READ_REG(hw, IXGBE_RXCTRL); + IXGBE_WRITE_REG(hw, IXGBE_RXCTRL, (rxctrl | IXGBE_RXCTRL_RXEN)); + + if (hw->mac.type != ixgbe_mac_82598EB) { + if (hw->mac.set_lben) { + pfdtxgswc = IXGBE_READ_REG(hw, IXGBE_PFDTXGSWC); + pfdtxgswc |= IXGBE_PFDTXGSWC_VT_LBEN; + IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, pfdtxgswc); + hw->mac.set_lben = false; + } + } +} diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.h index e94e524..80e47c1 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_common.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,17 +35,22 @@ POSSIBILITY OF SUCH DAMAGE. 
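[Editor's aside, not part of the patch: the new ixgbe_dcb_get_rtrup2tc_generic() helper added above unpacks one 3-bit traffic class per user priority from RTRUP2TC using IXGBE_RTRUP2TC_UP_SHIFT (3) and IXGBE_RTRUP2TC_UP_MASK (7). A standalone sketch of the same decode with an assumed register value:]

#include <stdio.h>
#include <stdint.h>

#define UP_SHIFT 3
#define UP_MASK  7
#define MAX_UP   8

int main(void)
{
        /* Assumed register value: UPs 0-3 map to TC0, UPs 4-7 map to TC1 */
        uint32_t reg = 0x00249000;
        unsigned int i;

        for (i = 0; i < MAX_UP; i++)
                printf("UP%u -> TC%u\n", i,
                       (reg >> (i * UP_SHIFT)) & UP_MASK);
        return 0;
}
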
#define _IXGBE_COMMON_H_ #include "ixgbe_type.h" -#ident "$Id: ixgbe_common.h,v 1.133 2012/11/05 23:08:30 jtkirshe Exp $" +#ident "$Id: ixgbe_common.h,v 1.143 2013/11/22 01:02:01 jtkirshe Exp $" #define IXGBE_WRITE_REG64(hw, reg, value) \ do { \ IXGBE_WRITE_REG(hw, reg, (u32) value); \ IXGBE_WRITE_REG(hw, reg + 4, (u32) (value >> 32)); \ } while (0) +#ifndef IXGBE_REMOVED +#define IXGBE_REMOVED(a) (0) +#endif /* IXGBE_REMOVED */ struct ixgbe_pba { u16 word[2]; u16 *pba_block; }; +void ixgbe_dcb_get_rtrup2tc_generic(struct ixgbe_hw *hw, u8 *map); + u16 ixgbe_get_pcie_msix_count_generic(struct ixgbe_hw *hw); s32 ixgbe_init_ops_generic(struct ixgbe_hw *hw); s32 ixgbe_init_hw_generic(struct ixgbe_hw *hw); @@ -64,6 +69,7 @@ s32 ixgbe_get_pba_block_size(struct ixgbe_hw *hw, u16 *eeprom_buf, u32 eeprom_buf_size, u16 *pba_block_size); s32 ixgbe_get_mac_addr_generic(struct ixgbe_hw *hw, u8 *mac_addr); s32 ixgbe_get_bus_info_generic(struct ixgbe_hw *hw); +void ixgbe_set_pci_config_data_generic(struct ixgbe_hw *hw, u16 link_status); void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw); s32 ixgbe_stop_adapter_generic(struct ixgbe_hw *hw); @@ -106,7 +112,7 @@ s32 ixgbe_disable_sec_rx_path_generic(struct ixgbe_hw *hw); s32 ixgbe_enable_sec_rx_path_generic(struct ixgbe_hw *hw); s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw); -s32 ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw); +bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw); void ixgbe_fc_autoneg(struct ixgbe_hw *hw); s32 ixgbe_validate_mac_addr(u8 *mac_addr); @@ -114,6 +120,9 @@ s32 ixgbe_acquire_swfw_sync(struct ixgbe_hw *hw, u16 mask); void ixgbe_release_swfw_sync(struct ixgbe_hw *hw, u16 mask); s32 ixgbe_disable_pcie_master(struct ixgbe_hw *hw); +s32 prot_autoc_read_generic(struct ixgbe_hw *hw, bool *, u32 *reg_val); +s32 prot_autoc_write_generic(struct ixgbe_hw *hw, u32 reg_val, bool locked); + s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index); s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index); @@ -148,8 +157,15 @@ void ixgbe_set_rxpba_generic(struct ixgbe_hw *hw, int num_pb, u32 headroom, void ixgbe_enable_relaxed_ordering_gen2(struct ixgbe_hw *hw); s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min, u8 build, u8 ver); +u8 ixgbe_calculate_checksum(u8 *buffer, u32 length); +s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer, + u32 length); void ixgbe_clear_tx_pending(struct ixgbe_hw *hw); extern s32 ixgbe_reset_pipeline_82599(struct ixgbe_hw *hw); +extern void ixgbe_stop_mac_link_on_d3_82599(struct ixgbe_hw *hw); +bool ixgbe_mng_enabled(struct ixgbe_hw *hw); +void ixgbe_disable_rx_generic(struct ixgbe_hw *hw); +void ixgbe_enable_rx_generic(struct ixgbe_hw *hw); #endif /* IXGBE_COMMON */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.c index 540801c..1c2459b 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -36,7 +36,7 @@ POSSIBILITY OF SUCH DAMAGE. 
#include "ixgbe_dcb.h" #include "ixgbe_dcb_82598.h" #include "ixgbe_dcb_82599.h" -#ident "$Id: ixgbe_dcb.c,v 1.52 2012/05/24 02:09:56 jtkirshe Exp $" +#ident "$Id: ixgbe_dcb.c,v 1.55 2013/11/22 01:02:01 jtkirshe Exp $" /** * ixgbe_dcb_calculate_tc_credits - This calculates the ieee traffic class diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.h index d4b1039..633abb2 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.c index b0fdc3a..52c7e72 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.h index 1b55db0..9307644 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82598.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.c index cc7ce7a..2bcf1c7 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -329,7 +329,14 @@ s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en, u8 *map) fcrtl = (hw->fc.low_water[i] << 10) | IXGBE_FCRTL_XONE; IXGBE_WRITE_REG(hw, IXGBE_FCRTL_82599(i), fcrtl); } else { - reg = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i)) - 32; + /* + * In order to prevent Tx hangs when the internal Tx + * switch is enabled we must set the high water mark + * to the Rx packet buffer size - 24KB. This allows + * the Tx switch to function even under heavy Rx + * workloads. + */ + reg = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i)) - 24576; IXGBE_WRITE_REG(hw, IXGBE_FCRTL_82599(i), 0); } diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.h index 1d78113..94a5e9e 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_dcb_82599.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -33,7 +33,7 @@ POSSIBILITY OF SUCH DAMAGE. 
#ifndef _IXGBE_DCB_82599_H_ #define _IXGBE_DCB_82599_H_ -#ident "$Id: ixgbe_dcb_82599.h,v 1.33 2012/03/26 22:28:19 jtkirshe Exp $" +#ident "$Id: ixgbe_dcb_82599.h,v 1.34 2013/03/20 21:52:47 jtkirshe Exp $" /* DCB register definitions */ #define IXGBE_RTTDCS_TDPAC 0x00000001 /* 0 Round Robin, @@ -51,6 +51,7 @@ POSSIBILITY OF SUCH DAMAGE. /* Receive UP2TC mapping */ #define IXGBE_RTRUP2TC_UP_SHIFT 3 +#define IXGBE_RTRUP2TC_UP_MASK 7 /* Transmit UP2TC mapping */ #define IXGBE_RTTUP2TC_UP_SHIFT 3 diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.c index 1dc13dc..4c51659 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -76,10 +76,11 @@ s32 ixgbe_write_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id) DEBUGFUNC("ixgbe_write_mbx"); - if (size > mbx->size) + if (size > mbx->size) { ret_val = IXGBE_ERR_MBX; - - else if (mbx->ops.write) + ERROR_REPORT2(IXGBE_ERROR_ARGUMENT, + "Invalid mailbox message size %d", size); + } else if (mbx->ops.write) ret_val = mbx->ops.write(hw, msg, size, mbx_id); return ret_val; @@ -169,6 +170,10 @@ STATIC s32 ixgbe_poll_for_msg(struct ixgbe_hw *hw, u16 mbx_id) usec_delay(mbx->usec_delay); } + if (countdown == 0) + ERROR_REPORT2(IXGBE_ERROR_POLLING, + "Polling for VF%d mailbox message timedout", mbx_id); + out: return countdown ? IXGBE_SUCCESS : IXGBE_ERR_MBX; } @@ -197,6 +202,10 @@ STATIC s32 ixgbe_poll_for_ack(struct ixgbe_hw *hw, u16 mbx_id) usec_delay(mbx->usec_delay); } + if (countdown == 0) + ERROR_REPORT2(IXGBE_ERROR_POLLING, + "Polling for VF%d mailbox ack timedout", mbx_id); + out: return countdown ? 
IXGBE_SUCCESS : IXGBE_ERR_MBX; } @@ -325,6 +334,7 @@ STATIC s32 ixgbe_check_for_msg_vf(struct ixgbe_hw *hw, u16 mbx_id) { s32 ret_val = IXGBE_ERR_MBX; + UNREFERENCED_1PARAMETER(mbx_id); DEBUGFUNC("ixgbe_check_for_msg_vf"); if (!ixgbe_check_for_bit_vf(hw, IXGBE_VFMAILBOX_PFSTS)) { @@ -346,6 +356,7 @@ STATIC s32 ixgbe_check_for_ack_vf(struct ixgbe_hw *hw, u16 mbx_id) { s32 ret_val = IXGBE_ERR_MBX; + UNREFERENCED_1PARAMETER(mbx_id); DEBUGFUNC("ixgbe_check_for_ack_vf"); if (!ixgbe_check_for_bit_vf(hw, IXGBE_VFMAILBOX_PFACK)) { @@ -367,6 +378,7 @@ STATIC s32 ixgbe_check_for_rst_vf(struct ixgbe_hw *hw, u16 mbx_id) { s32 ret_val = IXGBE_ERR_MBX; + UNREFERENCED_1PARAMETER(mbx_id); DEBUGFUNC("ixgbe_check_for_rst_vf"); if (!ixgbe_check_for_bit_vf(hw, (IXGBE_VFMAILBOX_RSTD | @@ -415,6 +427,7 @@ STATIC s32 ixgbe_write_mbx_vf(struct ixgbe_hw *hw, u32 *msg, u16 size, s32 ret_val; u16 i; + UNREFERENCED_1PARAMETER(mbx_id); DEBUGFUNC("ixgbe_write_mbx_vf"); @@ -457,6 +470,7 @@ STATIC s32 ixgbe_read_mbx_vf(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 i; DEBUGFUNC("ixgbe_read_mbx_vf"); + UNREFERENCED_1PARAMETER(mbx_id); /* lock the mailbox to prevent pf/vf race condition */ ret_val = ixgbe_obtain_mbx_lock_vf(hw); @@ -627,6 +641,10 @@ STATIC s32 ixgbe_obtain_mbx_lock_pf(struct ixgbe_hw *hw, u16 vf_number) p2v_mailbox = IXGBE_READ_REG(hw, IXGBE_PFMAILBOX(vf_number)); if (p2v_mailbox & IXGBE_PFMAILBOX_PFU) ret_val = IXGBE_SUCCESS; + else + ERROR_REPORT2(IXGBE_ERROR_POLLING, + "Failed to obtain mailbox lock for VF%d", vf_number); + return ret_val; } diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.h index 44d0bf6..4594572 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_mbx.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_osdep.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_osdep.h index 312e8d6..1636ee1 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_osdep.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_osdep.h @@ -1,6 +1,6 @@ /****************************************************************************** - Copyright (c) 2001-2012, Intel Corporation + Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -62,6 +62,10 @@ #define DEBUGOUT6(S, args...) DEBUGOUT(S, ##args) #define DEBUGOUT7(S, args...) DEBUGOUT(S, ##args) +#define ERROR_REPORT1(e, S, args...) DEBUGOUT(S, ##args) +#define ERROR_REPORT2 DEBUGOUT2 +#define ERROR_REPORT3 DEBUGOUT3 + #define FALSE 0 #define TRUE 1 @@ -87,6 +91,7 @@ typedef uint8_t u8; typedef int8_t s8; typedef uint16_t u16; +typedef int16_t s16; typedef uint32_t u32; typedef int32_t s32; typedef uint64_t u64; diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.c index 1a47ab6..7d2ed2a 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -34,19 +34,21 @@ POSSIBILITY OF SUCH DAMAGE. 
#include "ixgbe_api.h" #include "ixgbe_common.h" #include "ixgbe_phy.h" -#ident "$Id: ixgbe_phy.c,v 1.139 2012/05/24 23:36:12 jtkirshe Exp $" - -static void ixgbe_i2c_start(struct ixgbe_hw *hw); -static void ixgbe_i2c_stop(struct ixgbe_hw *hw); -static s32 ixgbe_clock_in_i2c_byte(struct ixgbe_hw *hw, u8 *data); -static s32 ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data); -static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw); -static s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data); -static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data); -static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl); -static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl); -static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data); -static bool ixgbe_get_i2c_data(u32 *i2cctl); +#ident "$Id: ixgbe_phy.c,v 1.155 2013/08/14 22:34:03 jtkirshe Exp $" + +STATIC void ixgbe_i2c_start(struct ixgbe_hw *hw); +STATIC void ixgbe_i2c_stop(struct ixgbe_hw *hw); +STATIC s32 ixgbe_clock_in_i2c_byte(struct ixgbe_hw *hw, u8 *data); +STATIC s32 ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data); +STATIC s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw); +STATIC s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data); +STATIC s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data); +STATIC void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl); +STATIC void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl); +STATIC s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data); +STATIC bool ixgbe_get_i2c_data(u32 *i2cctl); +STATIC s32 ixgbe_read_i2c_sff8472_generic(struct ixgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data); /** * ixgbe_init_phy_ops_generic - Inits PHY function ptrs @@ -65,12 +67,15 @@ s32 ixgbe_init_phy_ops_generic(struct ixgbe_hw *hw) phy->ops.reset = &ixgbe_reset_phy_generic; phy->ops.read_reg = &ixgbe_read_phy_reg_generic; phy->ops.write_reg = &ixgbe_write_phy_reg_generic; + phy->ops.read_reg_mdi = &ixgbe_read_phy_reg_mdi; + phy->ops.write_reg_mdi = &ixgbe_write_phy_reg_mdi; phy->ops.setup_link = &ixgbe_setup_phy_link_generic; phy->ops.setup_link_speed = &ixgbe_setup_phy_link_speed_generic; phy->ops.check_link = NULL; phy->ops.get_firmware_version = ixgbe_get_phy_firmware_version_generic; phy->ops.read_i2c_byte = &ixgbe_read_i2c_byte_generic; phy->ops.write_i2c_byte = &ixgbe_write_i2c_byte_generic; + phy->ops.read_i2c_sff8472 = &ixgbe_read_i2c_sff8472_generic; phy->ops.read_i2c_eeprom = &ixgbe_read_i2c_eeprom_generic; phy->ops.write_i2c_eeprom = &ixgbe_write_i2c_eeprom_generic; phy->ops.i2c_bus_clear = &ixgbe_i2c_bus_clear; @@ -121,9 +126,14 @@ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw) break; } } - /* clear value if nothing found */ - if (status != IXGBE_SUCCESS) + + /* Certain media types do not have a phy so an address will not + * be found and the code will take this path. Caller has to + * decide if it is an error or not. + */ + if (status != IXGBE_SUCCESS) { hw->phy.addr = 0; + } } else { status = IXGBE_SUCCESS; } @@ -132,6 +142,35 @@ s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw) } /** + * ixgbe_check_reset_blocked - check status of MNG FW veto bit + * @hw: pointer to the hardware structure + * + * This function checks the MMNGC.MNG_VETO bit to see if there are + * any constraints on link from manageability. For MAC's that don't + * have this bit just return faluse since the link can not be blocked + * via this method. 
+ **/ +s32 ixgbe_check_reset_blocked(struct ixgbe_hw *hw) +{ + u32 mmngc; + + DEBUGFUNC("ixgbe_check_reset_blocked"); + + /* If we don't have this bit, it can't be blocking */ + if (hw->mac.type == ixgbe_mac_82598EB) + return false; + + mmngc = IXGBE_READ_REG(hw, IXGBE_MMNGC); + if (mmngc & IXGBE_MMNGC_MNG_VETO) { + ERROR_REPORT1(IXGBE_ERROR_SOFTWARE, + "MNG_VETO bit detected.\n"); + return true; + } + + return false; +} + +/** * ixgbe_validate_phy_addr - Determines phy address is valid * @hw: pointer to hardware structure * @@ -237,6 +276,10 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw) (IXGBE_ERR_OVERTEMP == hw->phy.ops.check_overtemp(hw))) goto out; + /* Blocked by MNG FW so bail */ + if (ixgbe_check_reset_blocked(hw)) + goto out; + /* * Perform soft PHY reset to the PHY_XS. * This will cause a soft reset to the PHY @@ -262,7 +305,8 @@ s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw) if (ctrl & IXGBE_MDIO_PHY_XS_RESET) { status = IXGBE_ERR_RESET_FAILED; - DEBUGOUT("PHY reset polling failed to complete.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "PHY reset polling failed to complete.\n"); } out: @@ -270,7 +314,87 @@ out: } /** + * ixgbe_read_phy_mdi - Reads a value from a specified PHY register without + * the SWFW lock + * @hw: pointer to hardware structure + * @reg_addr: 32 bit address of PHY register to read + * @phy_data: Pointer to read data from PHY register + **/ +s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 *phy_data) +{ + u32 i, data, command; + + /* Setup and write the address cycle command */ + command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | + (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | + (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | + (IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND)); + + IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); + + /* + * Check every 10 usec to see if the address cycle completed. + * The MDI Command bit will clear when the operation is + * complete + */ + for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { + usec_delay(10); + + command = IXGBE_READ_REG(hw, IXGBE_MSCA); + if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) + break; + } + + + if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, "PHY address command did not complete.\n"); + return IXGBE_ERR_PHY; + } + + /* + * Address cycle complete, setup and write the read + * command + */ + command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | + (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | + (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | + (IXGBE_MSCA_READ | IXGBE_MSCA_MDI_COMMAND)); + + IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); + + /* + * Check every 10 usec to see if the address cycle + * completed. The MDI Command bit will clear when the + * operation is complete + */ + for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { + usec_delay(10); + + command = IXGBE_READ_REG(hw, IXGBE_MSCA); + if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) + break; + } + + if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, "PHY read command didn't complete\n"); + return IXGBE_ERR_PHY; + } + + /* + * Read operation is complete. 
Get the data + * from MSRWD + */ + data = IXGBE_READ_REG(hw, IXGBE_MSRWD); + data >>= IXGBE_MSRWD_READ_DATA_SHIFT; + *phy_data = (u16)(data); + + return IXGBE_SUCCESS; +} + +/** * ixgbe_read_phy_reg_generic - Reads a value from a specified PHY register + * using the SWFW lock - this function is needed in most cases * @hw: pointer to hardware structure * @reg_addr: 32 bit address of PHY register to read * @phy_data: Pointer to read data from PHY register @@ -278,10 +402,7 @@ out: s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, u16 *phy_data) { - u32 command; - u32 i; - u32 data; - s32 status = IXGBE_SUCCESS; + s32 status; u16 gssr; DEBUGFUNC("ixgbe_read_phy_reg_generic"); @@ -291,85 +412,94 @@ s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, else gssr = IXGBE_GSSR_PHY0_SM; - if (hw->mac.ops.acquire_swfw_sync(hw, gssr) != IXGBE_SUCCESS) + if (hw->mac.ops.acquire_swfw_sync(hw, gssr) == IXGBE_SUCCESS) { + status = ixgbe_read_phy_reg_mdi(hw, reg_addr, device_type, + phy_data); + hw->mac.ops.release_swfw_sync(hw, gssr); + } else { status = IXGBE_ERR_SWFW_SYNC; + } - if (status == IXGBE_SUCCESS) { - /* Setup and write the address cycle command */ - command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | - (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | - (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | - (IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND)); + return status; +} - IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); +/** + * ixgbe_write_phy_reg_mdi - Writes a value to specified PHY register + * without SWFW lock + * @hw: pointer to hardware structure + * @reg_addr: 32 bit PHY register to write + * @device_type: 5 bit device type + * @phy_data: Data to write to the PHY register + **/ +s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, + u32 device_type, u16 phy_data) +{ + u32 i, command; - /* - * Check every 10 usec to see if the address cycle completed. - * The MDI Command bit will clear when the operation is - * complete - */ - for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { - usec_delay(10); + /* Put the data in the MDI single read and write data register*/ + IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)phy_data); - command = IXGBE_READ_REG(hw, IXGBE_MSCA); + /* Setup and write the address cycle command */ + command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | + (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | + (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | + (IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND)); - if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) - break; - } + IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); - if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { - DEBUGOUT("PHY address command did not complete.\n"); - status = IXGBE_ERR_PHY; - } + /* + * Check every 10 usec to see if the address cycle completed. + * The MDI Command bit will clear when the operation is + * complete + */ + for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { + usec_delay(10); - if (status == IXGBE_SUCCESS) { - /* - * Address cycle complete, setup and write the read - * command - */ - command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | - (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | - (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | - (IXGBE_MSCA_READ | IXGBE_MSCA_MDI_COMMAND)); - - IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); - - /* - * Check every 10 usec to see if the address cycle - * completed. 
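[Editor's aside, not part of the patch: the refactor above splits PHY MDIO access into raw ixgbe_read_phy_reg_mdi()/ixgbe_write_phy_reg_mdi() helpers plus thin wrappers that take the SW/FW PHY semaphore around them. A minimal caller sketch, assuming the ixgbe shared-code headers; the helper name is hypothetical:]

/* Sketch only: phy.ops.read_reg (the SWFW-locked wrapper) is the normal
 * entry point; code that already holds the PHY semaphore would call
 * ixgbe_read_phy_reg_mdi() directly instead. */
static s32 example_read_phy_id_high(struct ixgbe_hw *hw, u16 *id)
{
        return hw->phy.ops.read_reg(hw, IXGBE_MDIO_PHY_ID_HIGH,
                                    IXGBE_MDIO_PMA_PMD_DEV_TYPE, id);
}
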
The MDI Command bit will clear when the - * operation is complete - */ - for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { - usec_delay(10); - - command = IXGBE_READ_REG(hw, IXGBE_MSCA); - - if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) - break; - } + command = IXGBE_READ_REG(hw, IXGBE_MSCA); + if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) + break; + } - if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { - DEBUGOUT("PHY read command didn't complete\n"); - status = IXGBE_ERR_PHY; - } else { - /* - * Read operation is complete. Get the data - * from MSRWD - */ - data = IXGBE_READ_REG(hw, IXGBE_MSRWD); - data >>= IXGBE_MSRWD_READ_DATA_SHIFT; - *phy_data = (u16)(data); - } - } + if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, "PHY address cmd didn't complete\n"); + return IXGBE_ERR_PHY; + } - hw->mac.ops.release_swfw_sync(hw, gssr); + /* + * Address cycle complete, setup and write the write + * command + */ + command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | + (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | + (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | + (IXGBE_MSCA_WRITE | IXGBE_MSCA_MDI_COMMAND)); + + IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); + + /* + * Check every 10 usec to see if the address cycle + * completed. The MDI Command bit will clear when the + * operation is complete + */ + for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { + usec_delay(10); + + command = IXGBE_READ_REG(hw, IXGBE_MSCA); + if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) + break; } - return status; + if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { + ERROR_REPORT1(IXGBE_ERROR_POLLING, "PHY write cmd didn't complete\n"); + return IXGBE_ERR_PHY; + } + + return IXGBE_SUCCESS; } /** * ixgbe_write_phy_reg_generic - Writes a value to specified PHY register + * using SWFW lock- this function is needed in most cases * @hw: pointer to hardware structure * @reg_addr: 32 bit PHY register to write * @device_type: 5 bit device type @@ -378,9 +508,7 @@ s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, u16 phy_data) { - u32 command; - u32 i; - s32 status = IXGBE_SUCCESS; + s32 status; u16 gssr; DEBUGFUNC("ixgbe_write_phy_reg_generic"); @@ -390,73 +518,12 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, else gssr = IXGBE_GSSR_PHY0_SM; - if (hw->mac.ops.acquire_swfw_sync(hw, gssr) != IXGBE_SUCCESS) - status = IXGBE_ERR_SWFW_SYNC; - - if (status == IXGBE_SUCCESS) { - /* Put the data in the MDI single read and write data register*/ - IXGBE_WRITE_REG(hw, IXGBE_MSRWD, (u32)phy_data); - - /* Setup and write the address cycle command */ - command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | - (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | - (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | - (IXGBE_MSCA_ADDR_CYCLE | IXGBE_MSCA_MDI_COMMAND)); - - IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); - - /* - * Check every 10 usec to see if the address cycle completed. 
- * The MDI Command bit will clear when the operation is - * complete - */ - for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { - usec_delay(10); - - command = IXGBE_READ_REG(hw, IXGBE_MSCA); - - if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) - break; - } - - if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { - DEBUGOUT("PHY address cmd didn't complete\n"); - status = IXGBE_ERR_PHY; - } - - if (status == IXGBE_SUCCESS) { - /* - * Address cycle complete, setup and write the write - * command - */ - command = ((reg_addr << IXGBE_MSCA_NP_ADDR_SHIFT) | - (device_type << IXGBE_MSCA_DEV_TYPE_SHIFT) | - (hw->phy.addr << IXGBE_MSCA_PHY_ADDR_SHIFT) | - (IXGBE_MSCA_WRITE | IXGBE_MSCA_MDI_COMMAND)); - - IXGBE_WRITE_REG(hw, IXGBE_MSCA, command); - - /* - * Check every 10 usec to see if the address cycle - * completed. The MDI Command bit will clear when the - * operation is complete - */ - for (i = 0; i < IXGBE_MDIO_COMMAND_TIMEOUT; i++) { - usec_delay(10); - - command = IXGBE_READ_REG(hw, IXGBE_MSCA); - - if ((command & IXGBE_MSCA_MDI_COMMAND) == 0) - break; - } - - if ((command & IXGBE_MSCA_MDI_COMMAND) != 0) { - DEBUGOUT("PHY address cmd didn't complete\n"); - status = IXGBE_ERR_PHY; - } - } - + if (hw->mac.ops.acquire_swfw_sync(hw, gssr) == IXGBE_SUCCESS) { + status = ixgbe_write_phy_reg_mdi(hw, reg_addr, device_type, + phy_data); hw->mac.ops.release_swfw_sync(hw, gssr); + } else { + status = IXGBE_ERR_SWFW_SYNC; } return status; @@ -529,6 +596,10 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw) autoneg_reg); } + /* Blocked by MNG FW so don't reset PHY */ + if (ixgbe_check_reset_blocked(hw)) + return status; + /* Restart PHY autonegotiation and wait for completion */ hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL, IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_reg); @@ -553,7 +624,8 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw) if (time_out == max_time_out) { status = IXGBE_ERR_LINK_SETUP; - DEBUGOUT("ixgbe_setup_phy_link_generic: time out"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "PHY autonegotiation time out"); } return status; @@ -563,13 +635,12 @@ s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw) * ixgbe_setup_phy_link_speed_generic - Sets the auto advertised capabilities * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled **/ s32 ixgbe_setup_phy_link_speed_generic(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete) { + UNREFERENCED_1PARAMETER(autoneg_wait_to_complete); DEBUGFUNC("ixgbe_setup_phy_link_speed_generic"); @@ -606,7 +677,7 @@ s32 ixgbe_get_copper_link_capabilities_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *autoneg) { - s32 status = IXGBE_ERR_LINK_SETUP; + s32 status; u16 speed_ability; DEBUGFUNC("ixgbe_get_copper_link_capabilities_generic"); @@ -743,6 +814,10 @@ s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw) autoneg_reg); } + /* Blocked by MNG FW so don't reset PHY */ + if (ixgbe_check_reset_blocked(hw)) + return status; + /* Restart PHY autonegotiation and wait for completion */ hw->phy.ops.read_reg(hw, IXGBE_MDIO_AUTO_NEG_CONTROL, IXGBE_MDIO_AUTO_NEG_DEV_TYPE, &autoneg_reg); @@ -781,7 +856,7 @@ s32 ixgbe_setup_phy_link_tnx(struct ixgbe_hw *hw) s32 ixgbe_get_phy_firmware_version_tnx(struct ixgbe_hw *hw, u16 *firmware_version) { - s32 status = IXGBE_SUCCESS; + s32 status; DEBUGFUNC("ixgbe_get_phy_firmware_version_tnx"); @@ -800,7 +875,7 @@ s32 ixgbe_get_phy_firmware_version_tnx(struct ixgbe_hw *hw, s32 ixgbe_get_phy_firmware_version_generic(struct 
ixgbe_hw *hw, u16 *firmware_version) { - s32 status = IXGBE_SUCCESS; + s32 status; DEBUGFUNC("ixgbe_get_phy_firmware_version_generic"); @@ -826,6 +901,10 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw) DEBUGFUNC("ixgbe_reset_phy_nl"); + /* Blocked by MNG FW so bail */ + if (ixgbe_check_reset_blocked(hw)) + goto out; + hw->phy.ops.read_reg(hw, IXGBE_MDIO_PHY_XS_CONTROL, IXGBE_MDIO_PHY_XS_DEV_TYPE, &phy_data); @@ -861,6 +940,8 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw) * Read control word from PHY init contents offset */ ret_val = hw->eeprom.ops.read(hw, data_offset, &eword); + if (ret_val) + goto err_eeprom; control = (eword & IXGBE_CONTROL_MASK_NL) >> IXGBE_CONTROL_SHIFT_NL; edata = eword & IXGBE_DATA_MASK_NL; @@ -873,10 +954,16 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw) case IXGBE_DATA_NL: DEBUGOUT("DATA:\n"); data_offset++; - hw->eeprom.ops.read(hw, data_offset++, - &phy_offset); + ret_val = hw->eeprom.ops.read(hw, data_offset, + &phy_offset); + if (ret_val) + goto err_eeprom; + data_offset++; for (i = 0; i < edata; i++) { - hw->eeprom.ops.read(hw, data_offset, &eword); + ret_val = hw->eeprom.ops.read(hw, data_offset, + &eword); + if (ret_val) + goto err_eeprom; hw->phy.ops.write_reg(hw, phy_offset, IXGBE_TWINAX_DEV, eword); DEBUGOUT2("Wrote %4.4x to %4.4x\n", eword, @@ -908,6 +995,11 @@ s32 ixgbe_reset_phy_nl(struct ixgbe_hw *hw) out: return ret_val; + +err_eeprom: + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", data_offset); + return IXGBE_ERR_PHY; } /** @@ -968,9 +1060,7 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) IXGBE_SFF_IDENTIFIER, &identifier); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; /* LAN ID is needed for sfp_type determination */ @@ -984,26 +1074,20 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) IXGBE_SFF_1GBE_COMP_CODES, &comp_codes_1g); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; status = hw->phy.ops.read_i2c_eeprom(hw, IXGBE_SFF_10GBE_COMP_CODES, &comp_codes_10g); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; status = hw->phy.ops.read_i2c_eeprom(hw, IXGBE_SFF_CABLE_TECHNOLOGY, &cable_tech); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; /* ID Module @@ -1064,6 +1148,16 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) else hw->phy.sfp_type = ixgbe_sfp_type_srlr_core1; +#ifdef SUPPORT_10GBASE_ER + } else if (comp_codes_10g & + IXGBE_SFF_10GBASEER_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + ixgbe_sfp_type_er_core0; + else + hw->phy.sfp_type = + ixgbe_sfp_type_er_core1; +#endif /* SUPPORT_10GBASE_ER */ } else if (comp_codes_1g & IXGBE_SFF_1GBASET_CAPABLE) { if (hw->bus.lan_id == 0) hw->phy.sfp_type = @@ -1078,6 +1172,15 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) else hw->phy.sfp_type = ixgbe_sfp_type_1g_sx_core1; +#ifdef SUPPORT_1000BASE_LX + } else if (comp_codes_1g & IXGBE_SFF_1GBASELX_CAPABLE) { + if (hw->bus.lan_id == 0) + hw->phy.sfp_type = + ixgbe_sfp_type_1g_lx_core0; + else + hw->phy.sfp_type = + ixgbe_sfp_type_1g_lx_core1; +#endif /* SUPPORT_1000BASE_LX */ } else { 
hw->phy.sfp_type = ixgbe_sfp_type_unknown; } @@ -1101,27 +1204,21 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) IXGBE_SFF_VENDOR_OUI_BYTE0, &oui_bytes[0]); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; status = hw->phy.ops.read_i2c_eeprom(hw, IXGBE_SFF_VENDOR_OUI_BYTE1, &oui_bytes[1]); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; status = hw->phy.ops.read_i2c_eeprom(hw, IXGBE_SFF_VENDOR_OUI_BYTE2, &oui_bytes[2]); - if (status == IXGBE_ERR_SWFW_SYNC || - status == IXGBE_ERR_I2C || - status == IXGBE_ERR_SFP_NOT_PRESENT) + if (status != IXGBE_SUCCESS) goto err_read_i2c_eeprom; vendor_oui = @@ -1171,7 +1268,11 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) if (comp_codes_10g == 0 && !(hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1 || hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0 || - hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || +#ifdef SUPPORT_1000BASE_LX + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1 || +#endif + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1)) { hw->phy.type = ixgbe_phy_sfp_unsupported; status = IXGBE_ERR_SFP_NOT_SUPPORTED; @@ -1186,10 +1287,18 @@ s32 ixgbe_identify_sfp_module_generic(struct ixgbe_hw *hw) ixgbe_get_device_caps(hw, &enforce_sfp); if (!(enforce_sfp & IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP) && - !((hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0) || - (hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1) || - (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0) || - (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1))) { + !(hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_cu_core1 || +#ifdef SUPPORT_1000BASE_LX + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1 || +#endif +#ifdef SUPPORT_10GBASE_ER + hw->phy.sfp_type == ixgbe_sfp_type_er_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_er_core1 || +#endif + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1)) { /* Make sure we're a supported PHY type */ if (hw->phy.type == ixgbe_phy_sfp_intel) { status = IXGBE_SUCCESS; @@ -1265,16 +1374,33 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw, * SR modules */ if (sfp_type == ixgbe_sfp_type_da_act_lmt_core0 || +#ifdef SUPPORT_10GBASE_ER + sfp_type == ixgbe_sfp_type_er_core0 || +#endif /* SUPPORT_10GBASE_ER */ +#ifdef SUPPORT_1000BASE_LX + sfp_type == ixgbe_sfp_type_1g_lx_core0 || +#endif /* SUPPORT_1000BASE_LX */ sfp_type == ixgbe_sfp_type_1g_cu_core0 || sfp_type == ixgbe_sfp_type_1g_sx_core0) sfp_type = ixgbe_sfp_type_srlr_core0; else if (sfp_type == ixgbe_sfp_type_da_act_lmt_core1 || +#ifdef SUPPORT_10GBASE_ER + sfp_type == ixgbe_sfp_type_er_core1 || +#endif /* SUPPORT_10GBASE_ER */ +#ifdef SUPPORT_1000BASE_LX + sfp_type == ixgbe_sfp_type_1g_lx_core1 || +#endif /* SUPPORT_1000BASE_LX */ sfp_type == ixgbe_sfp_type_1g_cu_core1 || sfp_type == ixgbe_sfp_type_1g_sx_core1) sfp_type = ixgbe_sfp_type_srlr_core1; /* Read offset to PHY init contents */ - hw->eeprom.ops.read(hw, IXGBE_PHY_INIT_OFFSET_NL, list_offset); + if (hw->eeprom.ops.read(hw, IXGBE_PHY_INIT_OFFSET_NL, list_offset)) { + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", + 
IXGBE_PHY_INIT_OFFSET_NL); + return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT; + } if ((!*list_offset) || (*list_offset == 0xFFFF)) return IXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT; @@ -1286,12 +1412,14 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw, * Find the matching SFP ID in the EEPROM * and program the init sequence */ - hw->eeprom.ops.read(hw, *list_offset, &sfp_id); + if (hw->eeprom.ops.read(hw, *list_offset, &sfp_id)) + goto err_phy; while (sfp_id != IXGBE_PHY_INIT_END_NL) { if (sfp_id == sfp_type) { (*list_offset)++; - hw->eeprom.ops.read(hw, *list_offset, data_offset); + if (hw->eeprom.ops.read(hw, *list_offset, data_offset)) + goto err_phy; if ((!*data_offset) || (*data_offset == 0xFFFF)) { DEBUGOUT("SFP+ module not supported\n"); return IXGBE_ERR_SFP_NOT_SUPPORTED; @@ -1301,7 +1429,7 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw, } else { (*list_offset) += 2; if (hw->eeprom.ops.read(hw, *list_offset, &sfp_id)) - return IXGBE_ERR_PHY; + goto err_phy; } } @@ -1311,6 +1439,11 @@ s32 ixgbe_get_sfp_init_sequence_offsets(struct ixgbe_hw *hw, } return IXGBE_SUCCESS; + +err_phy: + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "eeprom read at offset %d failed", *list_offset); + return IXGBE_ERR_PHY; } /** @@ -1332,6 +1465,22 @@ s32 ixgbe_read_i2c_eeprom_generic(struct ixgbe_hw *hw, u8 byte_offset, } /** + * ixgbe_read_i2c_sff8472_generic - Reads 8 bit word over I2C interface + * @hw: pointer to hardware structure + * @byte_offset: byte offset at address 0xA2 + * @eeprom_data: value read + * + * Performs byte read operation to SFP module's SFF-8472 data over I2C + **/ +STATIC s32 ixgbe_read_i2c_sff8472_generic(struct ixgbe_hw *hw, u8 byte_offset, + u8 *sff8472_data) +{ + return hw->phy.ops.read_i2c_byte(hw, byte_offset, + IXGBE_I2C_EEPROM_DEV_ADDR2, + sff8472_data); +} + +/** * ixgbe_write_i2c_eeprom_generic - Writes 8 bit EEPROM word over I2C interface * @hw: pointer to hardware structure * @byte_offset: EEPROM byte offset to write @@ -1424,9 +1573,9 @@ s32 ixgbe_read_i2c_byte_generic(struct ixgbe_hw *hw, u8 byte_offset, break; fail: + ixgbe_i2c_bus_clear(hw); hw->mac.ops.release_swfw_sync(hw, swfw_mask); msec_delay(100); - ixgbe_i2c_bus_clear(hw); retry++; if (retry < max_retry) DEBUGOUT("I2C byte read error - Retrying.\n"); @@ -1521,7 +1670,7 @@ write_byte_out: * * Sets I2C start condition (High -> Low on SDA while SCL is High) **/ -static void ixgbe_i2c_start(struct ixgbe_hw *hw) +STATIC void ixgbe_i2c_start(struct ixgbe_hw *hw) { u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL); @@ -1552,7 +1701,7 @@ static void ixgbe_i2c_start(struct ixgbe_hw *hw) * * Sets I2C stop condition (Low -> High on SDA while SCL is High) **/ -static void ixgbe_i2c_stop(struct ixgbe_hw *hw) +STATIC void ixgbe_i2c_stop(struct ixgbe_hw *hw) { u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL); @@ -1578,7 +1727,7 @@ static void ixgbe_i2c_stop(struct ixgbe_hw *hw) * * Clocks in one byte data via I2C data/clock **/ -static s32 ixgbe_clock_in_i2c_byte(struct ixgbe_hw *hw, u8 *data) +STATIC s32 ixgbe_clock_in_i2c_byte(struct ixgbe_hw *hw, u8 *data) { s32 i; bool bit = 0; @@ -1600,12 +1749,12 @@ static s32 ixgbe_clock_in_i2c_byte(struct ixgbe_hw *hw, u8 *data) * * Clocks out one byte data via I2C data/clock **/ -static s32 ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data) +STATIC s32 ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data) { s32 status = IXGBE_SUCCESS; s32 i; u32 i2cctl; - bool bit = 0; + bool bit; DEBUGFUNC("ixgbe_clock_out_i2c_byte"); @@ -1632,7 +1781,7 @@ static s32 
ixgbe_clock_out_i2c_byte(struct ixgbe_hw *hw, u8 data) * * Clocks in/out one bit via I2C data/clock **/ -static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw) +STATIC s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw) { s32 status = IXGBE_SUCCESS; u32 i = 0; @@ -1660,7 +1809,8 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw) } if (ack == 1) { - DEBUGOUT("I2C ack was not received.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "I2C ack was not received.\n"); status = IXGBE_ERR_I2C; } @@ -1679,7 +1829,7 @@ static s32 ixgbe_get_i2c_ack(struct ixgbe_hw *hw) * * Clocks in one bit via I2C data/clock **/ -static s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data) +STATIC s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data) { u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL); @@ -1708,7 +1858,7 @@ static s32 ixgbe_clock_in_i2c_bit(struct ixgbe_hw *hw, bool *data) * * Clocks out one bit via I2C data/clock **/ -static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data) +STATIC s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data) { s32 status; u32 i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL); @@ -1730,7 +1880,8 @@ static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data) usec_delay(IXGBE_I2C_T_LOW); } else { status = IXGBE_ERR_I2C; - DEBUGOUT1("I2C data was not set to %X\n", data); + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "I2C data was not set to %X\n", data); } return status; @@ -1742,7 +1893,7 @@ static s32 ixgbe_clock_out_i2c_bit(struct ixgbe_hw *hw, bool data) * * Raises the I2C clock line '0'->'1' **/ -static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) +STATIC void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) { u32 i = 0; u32 timeout = IXGBE_I2C_CLOCK_STRETCHING_TIMEOUT; @@ -1771,7 +1922,7 @@ static void ixgbe_raise_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) * * Lowers the I2C clock line '1'->'0' **/ -static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) +STATIC void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) { DEBUGFUNC("ixgbe_lower_i2c_clk"); @@ -1793,7 +1944,7 @@ static void ixgbe_lower_i2c_clk(struct ixgbe_hw *hw, u32 *i2cctl) * * Sets the I2C data bit **/ -static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data) +STATIC s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data) { s32 status = IXGBE_SUCCESS; @@ -1814,7 +1965,9 @@ static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data) *i2cctl = IXGBE_READ_REG(hw, IXGBE_I2CCTL); if (data != ixgbe_get_i2c_data(i2cctl)) { status = IXGBE_ERR_I2C; - DEBUGOUT1("Error - I2C data was not set to %X.\n", data); + ERROR_REPORT2(IXGBE_ERROR_INVALID_STATE, + "Error - I2C data was not set to %X.\n", + data); } return status; @@ -1827,7 +1980,7 @@ static s32 ixgbe_set_i2c_data(struct ixgbe_hw *hw, u32 *i2cctl, bool data) * * Returns the I2C data bit value **/ -static bool ixgbe_get_i2c_data(u32 *i2cctl) +STATIC bool ixgbe_get_i2c_data(u32 *i2cctl) { bool data; @@ -1901,6 +2054,7 @@ s32 ixgbe_tn_check_overtemp(struct ixgbe_hw *hw) goto out; status = IXGBE_ERR_OVERTEMP; + ERROR_REPORT1(IXGBE_ERROR_CAUTION, "Device over temperature"); out: return status; } diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.h index a07c66e..696a0d5 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_phy.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel 
Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,7 +35,9 @@ POSSIBILITY OF SUCH DAMAGE. #define _IXGBE_PHY_H_ #include "ixgbe_type.h" -#define IXGBE_I2C_EEPROM_DEV_ADDR 0xA0 +#define IXGBE_I2C_EEPROM_DEV_ADDR 0xA0 +#define IXGBE_I2C_EEPROM_DEV_ADDR2 0xA2 +#define IXGBE_I2C_EEPROM_BANK_LEN 0xFF /* EEPROM byte offsets */ #define IXGBE_SFF_IDENTIFIER 0x0 @@ -47,6 +49,8 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_SFF_10GBE_COMP_CODES 0x3 #define IXGBE_SFF_CABLE_TECHNOLOGY 0x8 #define IXGBE_SFF_CABLE_SPEC_COMP 0x3C +#define IXGBE_SFF_SFF_8472_SWAP 0x5C +#define IXGBE_SFF_SFF_8472_COMP 0x5E /* Bitmasks */ #define IXGBE_SFF_DA_PASSIVE_CABLE 0x4 @@ -57,6 +61,10 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_SFF_1GBASET_CAPABLE 0x8 #define IXGBE_SFF_10GBASESR_CAPABLE 0x10 #define IXGBE_SFF_10GBASELR_CAPABLE 0x20 +#ifdef SUPPORT_10GBASE_ER +#define IXGBE_SFF_10GBASEER_CAPABLE 0x80 +#endif /* SUPPORT_10GBASE_ER */ +#define IXGBE_SFF_ADDRESSING_MODE 0x4 #define IXGBE_I2C_EEPROM_READ_MASK 0x100 #define IXGBE_I2C_EEPROM_STATUS_MASK 0x3 #define IXGBE_I2C_EEPROM_STATUS_NO_OPERATION 0x0 @@ -94,7 +102,10 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_TN_LASI_STATUS_REG 0x9005 #define IXGBE_TN_LASI_STATUS_TEMP_ALARM 0x0008 -#ident "$Id: ixgbe_phy.h,v 1.48 2012/01/04 01:49:02 jtkirshe Exp $" +/* SFP+ SFF-8472 Compliance */ +#define IXGBE_SFF_SFF_8472_UNSUP 0x00 + +#ident "$Id: ixgbe_phy.h,v 1.56 2013/09/05 23:59:49 jtkirshe Exp $" s32 ixgbe_init_phy_ops_generic(struct ixgbe_hw *hw); bool ixgbe_validate_phy_addr(struct ixgbe_hw *hw, u32 phy_addr); @@ -102,6 +113,10 @@ enum ixgbe_phy_type ixgbe_get_phy_type_from_id(u32 phy_id); s32 ixgbe_get_phy_id(struct ixgbe_hw *hw); s32 ixgbe_identify_phy_generic(struct ixgbe_hw *hw); s32 ixgbe_reset_phy_generic(struct ixgbe_hw *hw); +s32 ixgbe_read_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 *phy_data); +s32 ixgbe_write_phy_reg_mdi(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, + u16 phy_data); s32 ixgbe_read_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, u32 device_type, u16 *phy_data); s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, @@ -109,11 +124,11 @@ s32 ixgbe_write_phy_reg_generic(struct ixgbe_hw *hw, u32 reg_addr, s32 ixgbe_setup_phy_link_generic(struct ixgbe_hw *hw); s32 ixgbe_setup_phy_link_speed_generic(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); s32 ixgbe_get_copper_link_capabilities_generic(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *autoneg); +s32 ixgbe_check_reset_blocked(struct ixgbe_hw *hw); /* PHY specific */ s32 ixgbe_check_phy_link_tnx(struct ixgbe_hw *hw, diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h index f03046f..18ac227 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -34,9 +34,50 @@ POSSIBILITY OF SUCH DAMAGE. #ifndef _IXGBE_TYPE_H_ #define _IXGBE_TYPE_H_ +/* + * The following is a brief description of the error categories used by the + * ERROR_REPORT* macros. 
+ * + * - IXGBE_ERROR_INVALID_STATE + * This category is for errors which represent a serious failure state that is + * unexpected, and could be potentially harmful to device operation. It should + * not be used for errors relating to issues that can be worked around or + * ignored. + * + * - IXGBE_ERROR_POLLING + * This category is for errors related to polling/timeout issues and should be + * used in any case where the timeout occured, or a failure to obtain a lock, or + * failure to receive data within the time limit. + * + * - IXGBE_ERROR_CAUTION + * This category should be used for reporting issues that may be the cause of + * other errors, such as temperature warnings. It should indicate an event which + * could be serious, but hasn't necessarily caused problems yet. + * + * - IXGBE_ERROR_SOFTWARE + * This category is intended for errors due to software state preventing + * something. The category is not intended for errors due to bad arguments, or + * due to unsupported features. It should be used when a state occurs which + * prevents action but is not a serious issue. + * + * - IXGBE_ERROR_ARGUMENT + * This category is for when a bad or invalid argument is passed. It should be + * used whenever a function is called and error checking has detected the + * argument is wrong or incorrect. + * + * - IXGBE_ERROR_UNSUPPORTED + * This category is for errors which are due to unsupported circumstances or + * configuration issues. It should not be used when the issue is due to an + * invalid argument, but for when something has occurred that is unsupported + * (Ex: Flow control autonegotiation or an unsupported SFP+ module.) + */ + #include "ixgbe_osdep.h" -#ident "$Id: ixgbe_type.h,v 1.552 2012/11/08 11:33:27 jtkirshe Exp $" +#ident "$Id: ixgbe_type.h,v 1.630 2013/11/22 22:48:40 jtkirshe Exp $" + +/* Vendor ID */ +#define IXGBE_INTEL_VENDOR_ID 0x8086 /* Device IDs */ #define IXGBE_DEV_ID_82598 0x10B6 @@ -59,15 +100,22 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_DEV_ID_82599_CX4 0x10F9 #define IXGBE_DEV_ID_82599_SFP 0x10FB #define IXGBE_SUBDEV_ID_82599_SFP 0x11A9 +#define IXGBE_SUBDEV_ID_82599_SFP_WOL0 0x1071 #define IXGBE_SUBDEV_ID_82599_RNDC 0x1F72 #define IXGBE_SUBDEV_ID_82599_560FLR 0x17D0 #define IXGBE_SUBDEV_ID_82599_ECNA_DP 0x0470 +#define IXGBE_SUBDEV_ID_82599_SP_560FLR 0x211B +#define IXGBE_SUBDEV_ID_82599_LOM_SFP 0x8976 +#define IXGBE_SUBDEV_ID_82599_LOM_SNAP6 0x2159 +#define IXGBE_SUBDEV_ID_82599_SFP_1OCP 0x000D +#define IXGBE_SUBDEV_ID_82599_SFP_2OCP 0x0008 #define IXGBE_DEV_ID_82599_BACKPLANE_FCOE 0x152A #define IXGBE_DEV_ID_82599_SFP_FCOE 0x1529 #define IXGBE_DEV_ID_82599_SFP_EM 0x1507 #define IXGBE_DEV_ID_82599_SFP_SF2 0x154D #define IXGBE_DEV_ID_82599_SFP_SF_QP 0x154A #define IXGBE_DEV_ID_82599EN_SFP 0x1557 +#define IXGBE_SUBDEV_ID_82599EN_SFP_OCP1 0x0001 #define IXGBE_DEV_ID_82599_XAUI_LOM 0x10FC #define IXGBE_DEV_ID_82599_T3_LOM 0x151C #define IXGBE_DEV_ID_82599_VF 0x10ED @@ -211,12 +259,12 @@ POSSIBILITY OF SUCH DAMAGE. (((_i) < 64) ? (0x0100C + ((_i) * 0x40)) : \ (0x0D00C + (((_i) - 64) * 0x40)))) #define IXGBE_RDRXCTL 0x02F00 -#define IXGBE_RDRXCTL_RSC_PUSH 0x80 /* 8 of these 0x03C00 - 0x03C1C */ #define IXGBE_RXPBSIZE(_i) (0x03C00 + ((_i) * 4)) #define IXGBE_RXCTRL 0x03000 #define IXGBE_DROPEN 0x03D04 #define IXGBE_RXPBSIZE_SHIFT 10 +#define IXGBE_RXPBSIZE_MASK 0x000FFC00 /* Receive Registers */ #define IXGBE_RXCSUM 0x05000 @@ -282,6 +330,7 @@ POSSIBILITY OF SUCH DAMAGE. 
#define IXGBE_RETA(_i) (0x05C00 + ((_i) * 4)) /* 32 of these (0-31) */ #define IXGBE_RSSRK(_i) (0x05C80 + ((_i) * 4)) /* 10 of these (0-9) */ + /* Flow Director registers */ #define IXGBE_FDIRCTRL 0x0EE00 #define IXGBE_FDIRHKEY 0x0EE68 @@ -290,6 +339,7 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_FDIRSIP4M 0x0EE40 #define IXGBE_FDIRTCPM 0x0EE44 #define IXGBE_FDIRUDPM 0x0EE48 +#define IXGBE_FDIRSCTPM 0x0EE78 #define IXGBE_FDIRIP6M 0x0EE74 #define IXGBE_FDIRM 0x0EE70 @@ -362,11 +412,16 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_WUPL 0x05900 #define IXGBE_WUPM 0x05A00 /* wake up pkt memory 0x5A00-0x5A7C */ + #define IXGBE_FHFT(_n) (0x09000 + (_n * 0x100)) /* Flex host filter table */ /* Ext Flexible Host Filter Table */ #define IXGBE_FHFT_EXT(_n) (0x09800 + (_n * 0x100)) +/* Four Flexible Filters are supported */ #define IXGBE_FLEXIBLE_FILTER_COUNT_MAX 4 + +/* Six Flexible Filters are supported */ +#define IXGBE_FLEXIBLE_FILTER_COUNT_MAX_6 6 #define IXGBE_EXT_FLEXIBLE_FILTER_COUNT_MAX 2 /* Each Flexible Filter is at most 128 (0x80) bytes in length */ @@ -398,10 +453,11 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_WUFC_FLX3 0x00080000 /* Flexible Filter 3 Enable */ #define IXGBE_WUFC_FLX4 0x00100000 /* Flexible Filter 4 Enable */ #define IXGBE_WUFC_FLX5 0x00200000 /* Flexible Filter 5 Enable */ -#define IXGBE_WUFC_FLX_FILTERS 0x000F0000 /* Mask for 4 flex filters */ +#define IXGBE_WUFC_FLX_FILTERS 0x000F0000 /* Mask for 4 flex filters */ /* Mask for Ext. flex filters */ #define IXGBE_WUFC_EXT_FLX_FILTERS 0x00300000 -#define IXGBE_WUFC_ALL_FILTERS 0x003F00FF /* Mask for all wakeup filters */ +#define IXGBE_WUFC_ALL_FILTERS 0x000F00FF /* Mask all 4 flex filters */ +#define IXGBE_WUFC_ALL_FILTERS_6 0x003F00FF /* Mask all 6 flex filters */ #define IXGBE_WUFC_FLX_OFFSET 16 /* Offset to the Flexible Filters bits */ /* Wake Up Status */ @@ -422,7 +478,6 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_WUS_FLX5 IXGBE_WUFC_FLX5 #define IXGBE_WUS_FLX_FILTERS IXGBE_WUFC_FLX_FILTERS -/* Wake Up Packet Length */ #define IXGBE_WUPL_LENGTH_MASK 0xFFFF /* DCB registers */ @@ -439,6 +494,7 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_TDPT2TCSR(_i) (0x0CD40 + ((_i) * 4)) /* 8 of these (0-7) */ + /* Security Control Registers */ #define IXGBE_SECTXCTRL 0x08800 #define IXGBE_SECTXSTAT 0x08804 @@ -582,8 +638,6 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_FCPTRH 0x02414 /* FC USer Desc. PTR High */ #define IXGBE_FCBUFF 0x02418 /* FC Buffer Control */ #define IXGBE_FCDMARW 0x02420 /* FC Receive DMA RW */ -#define IXGBE_FCINVST0 0x03FC0 /* FC Invalid DMA Context Status Reg 0*/ -#define IXGBE_FCINVST(_i) (IXGBE_FCINVST0 + ((_i) * 4)) #define IXGBE_FCBUFF_VALID (1 << 0) /* DMA Context Valid */ #define IXGBE_FCBUFF_BUFFSIZE (3 << 3) /* User Buffer Size */ #define IXGBE_FCBUFF_WRCONTX (1 << 7) /* 0: Initiator, 1: Target */ @@ -731,11 +785,6 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_BXOFFRXC 0x041E0 #define IXGBE_BXONTXC 0x041E4 #define IXGBE_BXOFFTXC 0x041E8 -#define IXGBE_PCRC8ECL 0x0E810 -#define IXGBE_PCRC8ECH 0x0E811 -#define IXGBE_PCRC8ECH_MASK 0x1F -#define IXGBE_LDPCECL 0x0E820 -#define IXGBE_LDPCECH 0x0E821 /* Management */ #define IXGBE_MAVTV(_i) (0x05010 + ((_i) * 4)) /* 8 of these (0-7) */ @@ -757,11 +806,14 @@ POSSIBILITY OF SUCH DAMAGE. 
#define IXGBE_BMCIP_IPADDR_VALID 0x00000002 /* Management Bit Fields and Masks */ +#define IXGBE_MANC_RCV_TCO_EN 0x00020000 /* Rcv TCO packet enable */ #define IXGBE_MANC_EN_BMC2OS 0x10000000 /* Ena BMC2OS and OS2BMC traffic */ #define IXGBE_MANC_EN_BMC2OS_SHIFT 28 /* Firmware Semaphore Register */ #define IXGBE_FWSM_MODE_MASK 0xE +#define IXGBE_FWSM_TS_ENABLED 0x1 +#define IXGBE_FWSM_FW_MODE_PT 0x4 /* ARC Subsystem registers */ #define IXGBE_HICR 0x15F00 @@ -877,8 +929,6 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_RDPROBE 0x02F20 #define IXGBE_RDMAM 0x02F30 #define IXGBE_RDMAD 0x02F34 -#define IXGBE_TDSTATCTL 0x07C20 -#define IXGBE_TDSTAT(_i) (0x07C00 + ((_i) * 4)) /* 0x07C00 - 0x07C1C */ #define IXGBE_TDHMPN 0x07F08 #define IXGBE_TDHMPN2 0x082FC #define IXGBE_TXDESCIC 0x082CC @@ -1027,7 +1077,9 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_RDRXCTL_RDMTS_1_2 0x00000000 /* Rx Desc Min THLD Size */ #define IXGBE_RDRXCTL_CRCSTRIP 0x00000002 /* CRC Strip */ #define IXGBE_RDRXCTL_MVMEN 0x00000020 +#define IXGBE_RDRXCTL_RSC_PUSH_DIS 0x00000020 #define IXGBE_RDRXCTL_DMAIDONE 0x00000008 /* DMA init cycle done */ +#define IXGBE_RDRXCTL_RSC_PUSH 0x00000080 #define IXGBE_RDRXCTL_AGGDIS 0x00010000 /* Aggregation disable */ #define IXGBE_RDRXCTL_RSCFRSTSIZE 0x003E0000 /* RSC First packet size */ #define IXGBE_RDRXCTL_RSCLLIDIS 0x00800000 /* Disable RSC compl on LLI*/ @@ -1056,6 +1108,7 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_CTRL_RST_MASK (IXGBE_CTRL_LNK_RST | IXGBE_CTRL_RST) /* FACTPS */ +#define IXGBE_FACTPS_MNGCG 0x20000000 /* Manageblility Clock Gated */ #define IXGBE_FACTPS_LFS 0x40000000 /* LAN Function Select */ /* MHADD Bit Masks */ @@ -1157,6 +1210,10 @@ POSSIBILITY OF SUCH DAMAGE. #define IXGBE_MDIO_AUTO_NEG_STATUS 0x1 /* AUTO_NEG Status Reg */ #define IXGBE_MDIO_AUTO_NEG_ADVT 0x10 /* AUTO_NEG Advt Reg */ #define IXGBE_MDIO_AUTO_NEG_LP 0x13 /* AUTO_NEG LP Status Reg */ +#define IXGBE_MDIO_AUTO_NEG_EEE_ADVT 0x3C /* AUTO_NEG EEE Advt Reg */ +#define IXGBE_AUTO_NEG_10GBASE_EEE_ADVT 0x8 /* AUTO NEG EEE 10GBaseT Advt */ +#define IXGBE_AUTO_NEG_1000BASE_EEE_ADVT 0x4 /* AUTO NEG EEE 1000BaseT Advt */ +#define IXGBE_AUTO_NEG_100BASE_EEE_ADVT 0x2 /* AUTO NEG EEE 100BaseT Advt */ #define IXGBE_MDIO_PHY_XS_CONTROL 0x0 /* PHY_XS Control Reg */ #define IXGBE_MDIO_PHY_XS_RESET 0x8000 /* PHY_XS Reset */ #define IXGBE_MDIO_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/ @@ -1176,6 +1233,12 @@ POSSIBILITY OF SUCH DAMAGE. 
#define IXGBE_MDIO_PMA_PMD_SDA_SCL_DATA 0xC30B /* PHY_XS SDA/SCL Data Reg */ #define IXGBE_MDIO_PMA_PMD_SDA_SCL_STAT 0xC30C /* PHY_XS SDA/SCL Status Reg */ +#define IXGBE_PCRC8ECL 0x0E810 /* PCR CRC-8 Error Count Lo */ +#define IXGBE_PCRC8ECH 0x0E811 /* PCR CRC-8 Error Count Hi */ +#define IXGBE_PCRC8ECH_MASK 0x1F +#define IXGBE_LDPCECL 0x0E820 /* PCR Uncorrected Error Count Lo */ +#define IXGBE_LDPCECH 0x0E821 /* PCR Uncorrected Error Count Hi */ + /* MII clause 22/28 definitions */ #define IXGBE_MDIO_PHY_LOW_POWER_MODE 0x0800 @@ -1553,11 +1616,13 @@ enum { * FCoE (0x8906): Filter 2 * 1588 (0x88f7): Filter 3 * FIP (0x8914): Filter 4 + * LLDP (0x88CC): Filter 5 */ #define IXGBE_ETQF_FILTER_EAPOL 0 #define IXGBE_ETQF_FILTER_FCOE 2 #define IXGBE_ETQF_FILTER_1588 3 #define IXGBE_ETQF_FILTER_FIP 4 +#define IXGBE_ETQF_FILTER_LLDP 5 /* VLAN Control Bit Masks */ #define IXGBE_VLNCTRL_VET 0x0000FFFF /* bits 0-15 */ #define IXGBE_VLNCTRL_CFI 0x10000000 /* bit 28 */ @@ -1673,13 +1738,17 @@ enum { #define IXGBE_AUTOC2_10G_KR (0x0 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT) #define IXGBE_AUTOC2_10G_XFI (0x1 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT) #define IXGBE_AUTOC2_10G_SFI (0x2 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT) -#define IXGBE_AUTOC2_LINK_DISABLE_MASK 0x70000000 +#define IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK 0x50000000 +#define IXGBE_AUTOC2_LINK_DISABLE_MASK 0x70000000 #define IXGBE_MACC_FLU 0x00000001 #define IXGBE_MACC_FSV_10G 0x00030000 #define IXGBE_MACC_FS 0x00040000 #define IXGBE_MAC_RX2TX_LPBK 0x00000002 +/* Veto Bit definiton */ +#define IXGBE_MMNGC_MNG_VETO 0x00000001 + /* LINKS Bit Masks */ #define IXGBE_LINKS_KX_AN_COMP 0x80000000 #define IXGBE_LINKS_UP 0x40000000 @@ -1779,28 +1848,28 @@ enum { #define IXGBE_PBANUM_LENGTH 11 /* Checksum and EEPROM pointers */ -#define IXGBE_PBANUM_PTR_GUARD 0xFAFA -#define IXGBE_EEPROM_CHECKSUM 0x3F -#define IXGBE_EEPROM_SUM 0xBABA -#define IXGBE_PCIE_ANALOG_PTR 0x03 -#define IXGBE_ATLAS0_CONFIG_PTR 0x04 -#define IXGBE_PHY_PTR 0x04 -#define IXGBE_ATLAS1_CONFIG_PTR 0x05 -#define IXGBE_OPTION_ROM_PTR 0x05 -#define IXGBE_PCIE_GENERAL_PTR 0x06 -#define IXGBE_PCIE_CONFIG0_PTR 0x07 -#define IXGBE_PCIE_CONFIG1_PTR 0x08 -#define IXGBE_CORE0_PTR 0x09 -#define IXGBE_CORE1_PTR 0x0A -#define IXGBE_MAC0_PTR 0x0B -#define IXGBE_MAC1_PTR 0x0C -#define IXGBE_CSR0_CONFIG_PTR 0x0D -#define IXGBE_CSR1_CONFIG_PTR 0x0E -#define IXGBE_FW_PTR 0x0F -#define IXGBE_PBANUM0_PTR 0x15 -#define IXGBE_PBANUM1_PTR 0x16 -#define IXGBE_ALT_MAC_ADDR_PTR 0x37 -#define IXGBE_FREE_SPACE_PTR 0X3E +#define IXGBE_PBANUM_PTR_GUARD 0xFAFA +#define IXGBE_EEPROM_CHECKSUM 0x3F +#define IXGBE_EEPROM_SUM 0xBABA +#define IXGBE_PCIE_ANALOG_PTR 0x03 +#define IXGBE_ATLAS0_CONFIG_PTR 0x04 +#define IXGBE_PHY_PTR 0x04 +#define IXGBE_ATLAS1_CONFIG_PTR 0x05 +#define IXGBE_OPTION_ROM_PTR 0x05 +#define IXGBE_PCIE_GENERAL_PTR 0x06 +#define IXGBE_PCIE_CONFIG0_PTR 0x07 +#define IXGBE_PCIE_CONFIG1_PTR 0x08 +#define IXGBE_CORE0_PTR 0x09 +#define IXGBE_CORE1_PTR 0x0A +#define IXGBE_MAC0_PTR 0x0B +#define IXGBE_MAC1_PTR 0x0C +#define IXGBE_CSR0_CONFIG_PTR 0x0D +#define IXGBE_CSR1_CONFIG_PTR 0x0E +#define IXGBE_FW_PTR 0x0F +#define IXGBE_PBANUM0_PTR 0x15 +#define IXGBE_PBANUM1_PTR 0x16 +#define IXGBE_ALT_MAC_ADDR_PTR 0x37 +#define IXGBE_FREE_SPACE_PTR 0X3E #define IXGBE_SAN_MAC_ADDR_PTR 0x28 #define IXGBE_DEVICE_CAPS 0x2C @@ -1844,8 +1913,10 @@ enum { #define IXGBE_ETH_LENGTH_OF_ADDRESS 6 #define IXGBE_EEPROM_PAGE_SIZE_MAX 128 -#define IXGBE_EEPROM_RD_BUFFER_MAX_COUNT 512 /* words rd in burst */ +#define 
IXGBE_EEPROM_RD_BUFFER_MAX_COUNT 256 /* words rd in burst */ #define IXGBE_EEPROM_WR_BUFFER_MAX_COUNT 256 /* words wr in burst */ +#define IXGBE_EEPROM_CTRL_2 1 /* EEPROM CTRL word 2 */ +#define IXGBE_EEPROM_CCD_BIT 2 #ifndef IXGBE_EEPROM_GRANT_ATTEMPTS #define IXGBE_EEPROM_GRANT_ATTEMPTS 1000 /* EEPROM attempts to gain grant */ @@ -1886,6 +1957,18 @@ enum { #define IXGBE_ALT_SAN_MAC_ADDR_CAPS_SANMAC 0x0 /* Alt SAN MAC exists */ #define IXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN 0x1 /* Alt WWN base exists */ +/* FW header offset */ +#define IXGBE_X540_FW_PASSTHROUGH_PATCH_CONFIG_PTR 0x4 +#define IXGBE_X540_FW_MODULE_MASK 0x7FFF +/* 4KB multiplier */ +#define IXGBE_X540_FW_MODULE_LENGTH 0x1000 +/* version word 2 (month & day) */ +#define IXGBE_X540_FW_PATCH_VERSION_2 0x5 +/* version word 3 (silicon compatibility & year) */ +#define IXGBE_X540_FW_PATCH_VERSION_3 0x6 +/* version word 4 (major & minor numbers) */ +#define IXGBE_X540_FW_PATCH_VERSION_4 0x7 + #define IXGBE_DEVICE_CAPS_WOL_PORT0_1 0x4 /* WoL supported on ports 0 & 1 */ #define IXGBE_DEVICE_CAPS_WOL_PORT0 0x8 /* WoL supported on port 0 */ #define IXGBE_DEVICE_CAPS_WOL_MASK 0xC /* Mask for WoL capabilities */ @@ -1908,6 +1991,17 @@ enum { #define IXGBE_PCI_HEADER_TYPE_MULTIFUNC 0x80 #define IXGBE_PCI_DEVICE_CONTROL2_16ms 0x0005 +#define IXGBE_PCIDEVCTRL2_TIMEO_MASK 0xf +#define IXGBE_PCIDEVCTRL2_16_32ms_def 0x0 +#define IXGBE_PCIDEVCTRL2_50_100us 0x1 +#define IXGBE_PCIDEVCTRL2_1_2ms 0x2 +#define IXGBE_PCIDEVCTRL2_16_32ms 0x5 +#define IXGBE_PCIDEVCTRL2_65_130ms 0x6 +#define IXGBE_PCIDEVCTRL2_260_520ms 0x9 +#define IXGBE_PCIDEVCTRL2_1_2s 0xa +#define IXGBE_PCIDEVCTRL2_4_8s 0xd +#define IXGBE_PCIDEVCTRL2_17_34s 0xe + /* Number of 100 microseconds we wait for PCI Express master disable */ #define IXGBE_PCI_MASTER_DISABLE_TIMEOUT 800 @@ -2276,6 +2370,7 @@ enum ixgbe_fdir_pballoc_type { #define IXGBE_FDIRCTRL_DROP_Q_SHIFT 8 #define IXGBE_FDIRCTRL_FLEX_SHIFT 16 #define IXGBE_FDIRCTRL_SEARCHLIM 0x00800000 +#define IXGBE_FDIRCTRL_FILTERMODE_MASK 0x00E00000 #define IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT 24 #define IXGBE_FDIRCTRL_FULL_THRESH_MASK 0xF0000000 #define IXGBE_FDIRCTRL_FULL_THRESH_SHIFT 28 @@ -2330,13 +2425,13 @@ enum ixgbe_fdir_pballoc_type { #define IXGBE_FDIRCMD_QUEUE_EN 0x00008000 #define IXGBE_FDIRCMD_FLOW_TYPE_SHIFT 5 #define IXGBE_FDIRCMD_RX_QUEUE_SHIFT 16 +#define IXGBE_FDIRCMD_TUNNEL_FILTER_SHIFT 23 #define IXGBE_FDIRCMD_VT_POOL_SHIFT 24 #define IXGBE_FDIR_INIT_DONE_POLL 10 #define IXGBE_FDIRCMD_CMD_POLL 10 - +#define IXGBE_FDIRCMD_TUNNEL_FILTER 0x00800000 #define IXGBE_FDIR_DROP_QUEUE 127 -#define IXGBE_STATUS_OVERHEATING_BIT 20 /* STATUS overtemp bit num */ /* Manageablility Host Interface defines */ #define IXGBE_HI_MAX_BLOCK_BYTE_LENGTH 1792 /* Num of bytes in range */ @@ -2622,6 +2717,7 @@ typedef u32 ixgbe_physical_layer; #define IXGBE_ATR_L4TYPE_TCP 0x2 #define IXGBE_ATR_L4TYPE_SCTP 0x3 #define IXGBE_ATR_L4TYPE_IPV6_MASK 0x4 +#define IXGBE_ATR_L4TYPE_TUNNEL_MASK 0x10 enum ixgbe_atr_flow_type { IXGBE_ATR_FLOW_TYPE_IPV4 = 0x0, IXGBE_ATR_FLOW_TYPE_UDPV4 = 0x1, @@ -2631,6 +2727,14 @@ enum ixgbe_atr_flow_type { IXGBE_ATR_FLOW_TYPE_UDPV6 = 0x5, IXGBE_ATR_FLOW_TYPE_TCPV6 = 0x6, IXGBE_ATR_FLOW_TYPE_SCTPV6 = 0x7, + IXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4 = 0x10, + IXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4 = 0x11, + IXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4 = 0x12, + IXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4 = 0x13, + IXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6 = 0x14, + IXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6 = 0x15, + IXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6 = 0x16, + 
IXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6 = 0x17, }; /* Flow Director ATR input struct. */ @@ -2642,6 +2746,9 @@ union ixgbe_atr_input { * flow_type - 1 byte * vlan_id - 2 bytes * src_ip - 16 bytes + * inner_mac - 6 bytes + * cloud_mode - 2 bytes + * tni_vni - 4 bytes * dst_ip - 16 bytes * src_port - 2 bytes * dst_port - 2 bytes @@ -2654,12 +2761,15 @@ union ixgbe_atr_input { __be16 vlan_id; __be32 dst_ip[4]; __be32 src_ip[4]; + u8 inner_mac[6]; + __be16 tunnel_type; + __be32 tni_vni; __be16 src_port; __be16 dst_port; __be16 flex_bytes; __be16 bkt_hash; } formatted; - __be32 dword_stream[11]; + __be32 dword_stream[14]; }; /* Flow Director compressed ATR hash input struct */ @@ -2755,6 +2865,14 @@ enum ixgbe_sfp_type { ixgbe_sfp_type_1g_cu_core1 = 10, ixgbe_sfp_type_1g_sx_core0 = 11, ixgbe_sfp_type_1g_sx_core1 = 12, +#ifdef SUPPORT_1000BASE_LX + ixgbe_sfp_type_1g_lx_core0 = 13, + ixgbe_sfp_type_1g_lx_core1 = 14, +#endif /* SUPPORT_1000BASE_LX */ +#ifdef SUPPORT_10GBASE_ER + ixgbe_sfp_type_er_core0 = 15, + ixgbe_sfp_type_er_core1 = 16, +#endif /* SUPPORT_10GBASE_ER */ ixgbe_sfp_type_not_present = 0xFFFE, ixgbe_sfp_type_unknown = 0xFFFF }; @@ -2975,12 +3093,14 @@ struct ixgbe_mac_operations { s32 (*enable_sec_rx_path)(struct ixgbe_hw *); s32 (*acquire_swfw_sync)(struct ixgbe_hw *, u16); void (*release_swfw_sync)(struct ixgbe_hw *, u16); + s32 (*prot_autoc_read)(struct ixgbe_hw *, bool *, u32 *); + s32 (*prot_autoc_write)(struct ixgbe_hw *, u32, bool); /* Link */ void (*disable_tx_laser)(struct ixgbe_hw *); void (*enable_tx_laser)(struct ixgbe_hw *); void (*flap_tx_laser)(struct ixgbe_hw *); - s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool, bool); + s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool); s32 (*check_link)(struct ixgbe_hw *, ixgbe_link_speed *, bool *, bool); s32 (*get_link_capabilities)(struct ixgbe_hw *, ixgbe_link_speed *, bool *); @@ -3021,6 +3141,15 @@ struct ixgbe_mac_operations { /* Manageability interface */ s32 (*set_fw_drv_ver)(struct ixgbe_hw *, u8, u8, u8, u8); + s32 (*dmac_config)(struct ixgbe_hw *hw); + s32 (*dmac_update_tcs)(struct ixgbe_hw *hw); + s32 (*dmac_config_tcs)(struct ixgbe_hw *hw); + void (*get_rtrup2tc)(struct ixgbe_hw *hw, u8 *map); + s32 (*set_eee)(struct ixgbe_hw *hw, bool enable_eee); + s32 (*eee_linkup)(struct ixgbe_hw *hw, bool eee_enabled); + void (*set_ethertype_anti_spoofing)(struct ixgbe_hw *, bool, int); + void (*disable_rx)(struct ixgbe_hw *hw); + void (*enable_rx)(struct ixgbe_hw *hw); }; struct ixgbe_phy_operations { @@ -3030,13 +3159,15 @@ struct ixgbe_phy_operations { s32 (*reset)(struct ixgbe_hw *); s32 (*read_reg)(struct ixgbe_hw *, u32, u32, u16 *); s32 (*write_reg)(struct ixgbe_hw *, u32, u32, u16); + s32 (*read_reg_mdi)(struct ixgbe_hw *, u32, u32, u16 *); + s32 (*write_reg_mdi)(struct ixgbe_hw *, u32, u32, u16); s32 (*setup_link)(struct ixgbe_hw *); - s32 (*setup_link_speed)(struct ixgbe_hw *, ixgbe_link_speed, bool, - bool); + s32 (*setup_link_speed)(struct ixgbe_hw *, ixgbe_link_speed, bool); s32 (*check_link)(struct ixgbe_hw *, ixgbe_link_speed *, bool *); s32 (*get_firmware_version)(struct ixgbe_hw *, u16 *); s32 (*read_i2c_byte)(struct ixgbe_hw *, u8, u8, u8 *); s32 (*write_i2c_byte)(struct ixgbe_hw *, u8, u8, u8); + s32 (*read_i2c_sff8472)(struct ixgbe_hw *, u8 , u8 *); s32 (*read_i2c_eeprom)(struct ixgbe_hw *, u8 , u8 *); s32 (*write_i2c_eeprom)(struct ixgbe_hw *, u8, u8); void (*i2c_bus_clear)(struct ixgbe_hw *); @@ -3075,12 +3206,14 @@ struct ixgbe_mac_info { u32 max_rx_queues; u32 orig_autoc; u8 
san_mac_rar_index; + bool get_link_status; u32 orig_autoc2; u16 max_msix_vectors; bool arc_subsystem_valid; bool orig_link_settings_stored; bool autotry_restart; u8 flags; + bool set_lben; }; struct ixgbe_phy_info { @@ -3150,6 +3283,7 @@ struct ixgbe_hw { int api_version; bool force_full_reset; bool allow_unsupported_sfp; + bool wol_enabled; }; #define ixgbe_call_func(hw, func, params, error) \ @@ -3191,8 +3325,14 @@ struct ixgbe_hw { #define IXGBE_ERR_INVALID_ARGUMENT -32 #define IXGBE_ERR_HOST_INTERFACE_COMMAND -33 #define IXGBE_ERR_OUT_OF_MEM -34 +#define IXGBE_ERR_FEATURE_NOT_SUPPORTED -36 +#define IXGBE_ERR_EEPROM_PROTECTED_REGION -37 #define IXGBE_NOT_IMPLEMENTED 0x7FFFFFFF +#ifdef IXGBE_OSDEP2 +#include "ixgbe_osdep2.h" + +#endif /* IXGBE_OSDEP2 */ #endif /* _IXGBE_TYPE_H_ */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.c index ca6e311..af0e0fd 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,7 +35,7 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_api.h" #include "ixgbe_type.h" #include "ixgbe_vf.h" -#ident "$Id: ixgbe_vf.c,v 1.58 2012/08/09 20:24:53 cmwyborn Exp $" +#ident "$Id: ixgbe_vf.c,v 1.62 2013/06/27 21:30:59 jtkirshe Exp $" #ifndef IXGBE_VFWRITE_REG #define IXGBE_VFWRITE_REG IXGBE_WRITE_REG @@ -134,7 +134,7 @@ s32 ixgbe_reset_hw_vf(struct ixgbe_hw *hw) struct ixgbe_mbx_info *mbx = &hw->mbx; u32 timeout = IXGBE_VF_INIT_TIMEOUT; s32 ret_val = IXGBE_ERR_INVALID_MAC_ADDR; - u32 ctrl, msgbuf[IXGBE_VF_PERMADDR_MSG_LEN]; + u32 msgbuf[IXGBE_VF_PERMADDR_MSG_LEN]; u8 *addr = (u8 *)(&msgbuf[1]); DEBUGFUNC("ixgbevf_reset_hw_vf"); @@ -147,8 +147,7 @@ s32 ixgbe_reset_hw_vf(struct ixgbe_hw *hw) DEBUGOUT("Issuing a function level reset to MAC\n"); - ctrl = IXGBE_VFREAD_REG(hw, IXGBE_VFCTRL) | IXGBE_CTRL_RST; - IXGBE_VFWRITE_REG(hw, IXGBE_VFCTRL, ctrl); + IXGBE_VFWRITE_REG(hw, IXGBE_VFCTRL, IXGBE_CTRL_RST); IXGBE_WRITE_FLUSH(hw); msec_delay(50); @@ -159,34 +158,33 @@ s32 ixgbe_reset_hw_vf(struct ixgbe_hw *hw) usec_delay(5); } - if (timeout) { - /* mailbox timeout can now become active */ - mbx->timeout = IXGBE_VF_MBX_INIT_TIMEOUT; + if (!timeout) + return IXGBE_ERR_RESET_FAILED; - msgbuf[0] = IXGBE_VF_RESET; - mbx->ops.write_posted(hw, msgbuf, 1, 0); + /* mailbox timeout can now become active */ + mbx->timeout = IXGBE_VF_MBX_INIT_TIMEOUT; - msec_delay(10); + msgbuf[0] = IXGBE_VF_RESET; + mbx->ops.write_posted(hw, msgbuf, 1, 0); - /* - * set our "perm_addr" based on info provided by PF - * also set up the mc_filter_type which is piggy backed - * on the mac address in word 3 - */ - ret_val = mbx->ops.read_posted(hw, msgbuf, - IXGBE_VF_PERMADDR_MSG_LEN, 0); - if (!ret_val) { - if (msgbuf[0] == (IXGBE_VF_RESET | - IXGBE_VT_MSGTYPE_ACK)) { - memcpy(hw->mac.perm_addr, addr, - IXGBE_ETH_LENGTH_OF_ADDRESS); - hw->mac.mc_filter_type = - msgbuf[IXGBE_VF_MC_TYPE_WORD]; - } else { - ret_val = IXGBE_ERR_INVALID_MAC_ADDR; - } - } - } + msec_delay(10); + + /* + * set our "perm_addr" based on info provided by PF + * also set up the mc_filter_type which is piggy backed + * on the mac address in word 3 + */ + ret_val = mbx->ops.read_posted(hw, msgbuf, + IXGBE_VF_PERMADDR_MSG_LEN, 0); + if (ret_val) + return ret_val; + + if (msgbuf[0] != 
(IXGBE_VF_RESET | IXGBE_VT_MSGTYPE_ACK) && + msgbuf[0] != (IXGBE_VF_RESET | IXGBE_VT_MSGTYPE_NACK)) + return IXGBE_ERR_INVALID_MAC_ADDR; + + memcpy(hw->mac.perm_addr, addr, IXGBE_ETH_LENGTH_OF_ADDRESS); + hw->mac.mc_filter_type = msgbuf[IXGBE_VF_MC_TYPE_WORD]; return ret_val; } @@ -227,6 +225,8 @@ s32 ixgbe_stop_adapter_vf(struct ixgbe_hw *hw) reg_val &= ~IXGBE_RXDCTL_ENABLE; IXGBE_VFWRITE_REG(hw, IXGBE_VFRXDCTL(i), reg_val); } + /* Clear packet split and pool config */ + IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, 0); /* flush all queues disables */ IXGBE_WRITE_FLUSH(hw); @@ -247,7 +247,7 @@ s32 ixgbe_stop_adapter_vf(struct ixgbe_hw *hw) * by the MO field of the MCSTCTRL. The MO field is set during initialization * to mc_filter_type. **/ -static s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr) +STATIC s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr) { u32 vector = 0; @@ -275,7 +275,7 @@ static s32 ixgbe_mta_vector(struct ixgbe_hw *hw, u8 *mc_addr) return vector; } -static void ixgbevf_write_msg_read_ack(struct ixgbe_hw *hw, +STATIC void ixgbevf_write_msg_read_ack(struct ixgbe_hw *hw, u32 *msg, u16 size) { struct ixgbe_mbx_info *mbx = &hw->mbx; @@ -301,6 +301,7 @@ s32 ixgbe_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, u32 msgbuf[3]; u8 *msg_addr = (u8 *)(&msgbuf[1]); s32 ret_val; + UNREFERENCED_3PARAMETER(vmdq, enable_addr, index); memset(msgbuf, 0, 12); msgbuf[0] = IXGBE_VF_SET_MAC_ADDR; @@ -340,6 +341,7 @@ s32 ixgbe_update_mc_addr_list_vf(struct ixgbe_hw *hw, u8 *mc_addr_list, u32 cnt, i; u32 vmdq; + UNREFERENCED_1PARAMETER(clear); DEBUGFUNC("ixgbe_update_mc_addr_list_vf"); @@ -379,6 +381,7 @@ s32 ixgbe_set_vfta_vf(struct ixgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on) struct ixgbe_mbx_info *mbx = &hw->mbx; u32 msgbuf[2]; s32 ret_val; + UNREFERENCED_1PARAMETER(vind); msgbuf[0] = IXGBE_VF_SET_VLAN; msgbuf[1] = vlan; @@ -403,6 +406,7 @@ s32 ixgbe_set_vfta_vf(struct ixgbe_hw *hw, u32 vlan, u32 vind, bool vlan_on) **/ u32 ixgbe_get_num_of_tx_queues_vf(struct ixgbe_hw *hw) { + UNREFERENCED_1PARAMETER(hw); return IXGBE_VF_MAX_TX_QUEUES; } @@ -414,6 +418,7 @@ u32 ixgbe_get_num_of_tx_queues_vf(struct ixgbe_hw *hw) **/ u32 ixgbe_get_num_of_rx_queues_vf(struct ixgbe_hw *hw) { + UNREFERENCED_1PARAMETER(hw); return IXGBE_VF_MAX_RX_QUEUES; } @@ -472,10 +477,10 @@ s32 ixgbevf_set_uc_addr_vf(struct ixgbe_hw *hw, u32 index, u8 *addr) * * Set the link speed in the AUTOC register and restarts link. 
**/ -s32 ixgbe_setup_mac_link_vf(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, +s32 ixgbe_setup_mac_link_vf(struct ixgbe_hw *hw, ixgbe_link_speed speed, bool autoneg_wait_to_complete) { + UNREFERENCED_3PARAMETER(hw, speed, autoneg_wait_to_complete); return IXGBE_SUCCESS; } @@ -491,22 +496,26 @@ s32 ixgbe_setup_mac_link_vf(struct ixgbe_hw *hw, s32 ixgbe_check_mac_link_vf(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *link_up, bool autoneg_wait_to_complete) { + struct ixgbe_mbx_info *mbx = &hw->mbx; + struct ixgbe_mac_info *mac = &hw->mac; + s32 ret_val = IXGBE_SUCCESS; u32 links_reg; + u32 in_msg = 0; + UNREFERENCED_1PARAMETER(autoneg_wait_to_complete); - if (!(hw->mbx.ops.check_for_rst(hw, 0))) { - *link_up = false; - *speed = 0; - return -1; - } + /* If we were hit with a reset drop the link */ + if (!mbx->ops.check_for_rst(hw, 0) || !mbx->timeout) + mac->get_link_status = true; - links_reg = IXGBE_VFREAD_REG(hw, IXGBE_VFLINKS); + if (!mac->get_link_status) + goto out; - if (links_reg & IXGBE_LINKS_UP) - *link_up = true; - else - *link_up = false; + /* if link status is down no point in checking to see if pf is up */ + links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS); + if (!(links_reg & IXGBE_LINKS_UP)) + goto out; - switch (links_reg & IXGBE_LINKS_SPEED_10G_82599) { + switch (links_reg & IXGBE_LINKS_SPEED_82599) { case IXGBE_LINKS_SPEED_10G_82599: *speed = IXGBE_LINK_SPEED_10GB_FULL; break; @@ -518,7 +527,33 @@ s32 ixgbe_check_mac_link_vf(struct ixgbe_hw *hw, ixgbe_link_speed *speed, break; } - return IXGBE_SUCCESS; + /* if the read failed it could just be a mailbox collision, best wait + * until we are called again and don't report an error + */ + if (mbx->ops.read(hw, &in_msg, 1, 0)) + goto out; + + if (!(in_msg & IXGBE_VT_MSGTYPE_CTS)) { + /* msg is not CTS and is NACK we must have lost CTS status */ + if (in_msg & IXGBE_VT_MSGTYPE_NACK) + ret_val = -1; + goto out; + } + + /* the pf is talking, if we timed out in the past we reinit */ + if (!mbx->timeout) { + ret_val = -1; + goto out; + } + + /* if we passed all the tests above then the link is up and we no + * longer need to check for link + */ + mac->get_link_status = false; + +out: + *link_up = !mac->get_link_status; + return ret_val; } /** diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.h index 2261218..b84b4ba 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_vf.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -33,7 +33,7 @@ POSSIBILITY OF SUCH DAMAGE. 
#ifndef __IXGBE_VF_H__ #define __IXGBE_VF_H__ -#ident "$Id: ixgbe_vf.h,v 1.34 2012/08/09 17:04:18 cmwyborn Exp $" +#ident "$Id: ixgbe_vf.h,v 1.37 2013/11/07 08:18:53 jtkirshe Exp $" #define IXGBE_VF_IRQ_CLEAR_MASK 7 #define IXGBE_VF_MAX_TX_QUEUES 8 @@ -120,7 +120,7 @@ u32 ixgbe_get_num_of_tx_queues_vf(struct ixgbe_hw *hw); u32 ixgbe_get_num_of_rx_queues_vf(struct ixgbe_hw *hw); s32 ixgbe_get_mac_addr_vf(struct ixgbe_hw *hw, u8 *mac_addr); s32 ixgbe_setup_mac_link_vf(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool autoneg_wait_to_complete); + bool autoneg_wait_to_complete); s32 ixgbe_check_mac_link_vf(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *link_up, bool autoneg_wait_to_complete); s32 ixgbe_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr, u32 vmdq, @@ -135,4 +135,8 @@ int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api); int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs, unsigned int *default_tc); +#ifdef IXGBEVF_OSDEP2 +#include "ixgbevf_osdep2.h" + +#endif /* IXGBEVF_OSDEP2 */ #endif /* __IXGBE_VF_H__ */ diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.c b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.c index d3e1730..163ee25 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.c +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.c @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -37,7 +37,13 @@ POSSIBILITY OF SUCH DAMAGE. #include "ixgbe_common.h" #include "ixgbe_phy.h" -STATIC s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw); +#define IXGBE_X540_MAX_TX_QUEUES 128 +#define IXGBE_X540_MAX_RX_QUEUES 128 +#define IXGBE_X540_RAR_ENTRIES 128 +#define IXGBE_X540_MC_TBL_SIZE 128 +#define IXGBE_X540_VFT_TBL_SIZE 128 +#define IXGBE_X540_RX_PB_SIZE 384 + STATIC s32 ixgbe_poll_flash_update_done_X540(struct ixgbe_hw *hw); STATIC s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw); STATIC void ixgbe_release_swfw_sync_semaphore(struct ixgbe_hw *hw); @@ -116,12 +122,12 @@ s32 ixgbe_init_ops_X540(struct ixgbe_hw *hw) mac->ops.check_link = &ixgbe_check_mac_link_generic; - mac->mcft_size = 128; - mac->vft_size = 128; - mac->num_rar_entries = 128; - mac->rx_pb_size = 384; - mac->max_tx_queues = 128; - mac->max_rx_queues = 128; + mac->mcft_size = IXGBE_X540_MC_TBL_SIZE; + mac->vft_size = IXGBE_X540_VFT_TBL_SIZE; + mac->num_rar_entries = IXGBE_X540_RAR_ENTRIES; + mac->rx_pb_size = IXGBE_X540_RX_PB_SIZE; + mac->max_rx_queues = IXGBE_X540_MAX_RX_QUEUES; + mac->max_tx_queues = IXGBE_X540_MAX_TX_QUEUES; mac->max_msix_vectors = ixgbe_get_pcie_msix_count_generic(hw); /* @@ -141,6 +147,8 @@ s32 ixgbe_init_ops_X540(struct ixgbe_hw *hw) /* Manageability interface */ mac->ops.set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic; + mac->ops.get_rtrup2tc = &ixgbe_dcb_get_rtrup2tc_generic; + return ret_val; } @@ -169,6 +177,7 @@ s32 ixgbe_get_link_capabilities_X540(struct ixgbe_hw *hw, **/ enum ixgbe_media_type ixgbe_get_media_type_X540(struct ixgbe_hw *hw) { + UNREFERENCED_1PARAMETER(hw); return ixgbe_media_type_copper; } @@ -176,16 +185,14 @@ enum ixgbe_media_type ixgbe_get_media_type_X540(struct ixgbe_hw *hw) * ixgbe_setup_mac_link_X540 - Sets the auto advertised capabilities * @hw: pointer to hardware structure * @speed: new link speed - * @autoneg: true if autonegotiation enabled * @autoneg_wait_to_complete: true when waiting for completion is needed 
**/ s32 ixgbe_setup_mac_link_X540(struct ixgbe_hw *hw, - ixgbe_link_speed speed, bool autoneg, + ixgbe_link_speed speed, bool autoneg_wait_to_complete) { DEBUGFUNC("ixgbe_setup_mac_link_X540"); - return hw->phy.ops.setup_link_speed(hw, speed, autoneg, - autoneg_wait_to_complete); + return hw->phy.ops.setup_link_speed(hw, speed, autoneg_wait_to_complete); } /** @@ -226,7 +233,8 @@ mac_reset_top: if (ctrl & IXGBE_CTRL_RST_MASK) { status = IXGBE_ERR_RESET_FAILED; - DEBUGOUT("Reset polling failed to complete.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Reset polling failed to complete.\n"); } msec_delay(100); @@ -372,12 +380,13 @@ s32 ixgbe_read_eerd_X540(struct ixgbe_hw *hw, u16 offset, u16 *data) DEBUGFUNC("ixgbe_read_eerd_X540"); if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) == - IXGBE_SUCCESS) + IXGBE_SUCCESS) { status = ixgbe_read_eerd_generic(hw, offset, data); - else + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); + } else { status = IXGBE_ERR_SWFW_SYNC; + } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); return status; } @@ -397,13 +406,14 @@ s32 ixgbe_read_eerd_buffer_X540(struct ixgbe_hw *hw, DEBUGFUNC("ixgbe_read_eerd_buffer_X540"); if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) == - IXGBE_SUCCESS) + IXGBE_SUCCESS) { status = ixgbe_read_eerd_buffer_generic(hw, offset, words, data); - else + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); + } else { status = IXGBE_ERR_SWFW_SYNC; + } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); return status; } @@ -421,12 +431,13 @@ s32 ixgbe_write_eewr_X540(struct ixgbe_hw *hw, u16 offset, u16 data) DEBUGFUNC("ixgbe_write_eewr_X540"); if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) == - IXGBE_SUCCESS) + IXGBE_SUCCESS) { status = ixgbe_write_eewr_generic(hw, offset, data); - else + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); + } else { status = IXGBE_ERR_SWFW_SYNC; + } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); return status; } @@ -446,13 +457,14 @@ s32 ixgbe_write_eewr_buffer_X540(struct ixgbe_hw *hw, DEBUGFUNC("ixgbe_write_eewr_buffer_X540"); if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) == - IXGBE_SUCCESS) + IXGBE_SUCCESS) { status = ixgbe_write_eewr_buffer_generic(hw, offset, words, data); - else + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); + } else { status = IXGBE_ERR_SWFW_SYNC; + } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); return status; } @@ -466,12 +478,13 @@ s32 ixgbe_write_eewr_buffer_X540(struct ixgbe_hw *hw, **/ u16 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw) { - u16 i; - u16 j; + u16 i, j; u16 checksum = 0; u16 length = 0; u16 pointer = 0; u16 word = 0; + u16 checksum_last_word = IXGBE_EEPROM_CHECKSUM; + u16 ptr_start = IXGBE_PCIE_ANALOG_PTR; /* * Do not use hw->eeprom.ops.read because we do not want to take @@ -482,19 +495,20 @@ u16 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw) DEBUGFUNC("ixgbe_calc_eeprom_checksum_X540"); /* Include 0x0-0x3F in the checksum */ - for (i = 0; i < IXGBE_EEPROM_CHECKSUM; i++) { + for (i = 0; i <= checksum_last_word; i++) { if (ixgbe_read_eerd_generic(hw, i, &word) != IXGBE_SUCCESS) { DEBUGOUT("EEPROM read failed\n"); break; } - checksum += word; + if (i != IXGBE_EEPROM_CHECKSUM) + checksum += word; } /* * Include all data from pointers 0x3, 0x6-0xE. This excludes the * FW, PHY module, and PCIe Expansion/Option ROM pointers. 
*/ - for (i = IXGBE_PCIE_ANALOG_PTR; i < IXGBE_FW_PTR; i++) { + for (i = ptr_start; i < IXGBE_FW_PTR; i++) { if (i == IXGBE_PHY_PTR || i == IXGBE_OPTION_ROM_PTR) continue; @@ -578,17 +592,20 @@ s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw, * Verify read checksum from EEPROM is the same as * calculated checksum */ - if (read_checksum != checksum) + if (read_checksum != checksum) { status = IXGBE_ERR_EEPROM_CHECKSUM; + ERROR_REPORT1(IXGBE_ERROR_INVALID_STATE, + "Invalid EEPROM checksum"); + } /* If the user cares, return the calculated checksum */ if (checksum_val) *checksum_val = checksum; + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); } else { status = IXGBE_ERR_SWFW_SYNC; } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); out: return status; } @@ -615,8 +632,10 @@ s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw) */ status = hw->eeprom.ops.read(hw, 0, &checksum); - if (status != IXGBE_SUCCESS) + if (status != IXGBE_SUCCESS) { DEBUGOUT("EEPROM read failed\n"); + return status; + } if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) == IXGBE_SUCCESS) { @@ -625,18 +644,17 @@ s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw) /* * Do not use hw->eeprom.ops.write because we do not want to * take the synchronization semaphores twice here. - */ + */ status = ixgbe_write_eewr_generic(hw, IXGBE_EEPROM_CHECKSUM, checksum); - if (status == IXGBE_SUCCESS) - status = ixgbe_update_flash_X540(hw); - else + if (status == IXGBE_SUCCESS) + status = ixgbe_update_flash_X540(hw); + hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); + } else { status = IXGBE_ERR_SWFW_SYNC; } - hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM); - return status; } @@ -647,10 +665,10 @@ s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw) * Set FLUP (bit 23) of the EEC register to instruct Hardware to copy * EEPROM from shadow RAM to the flash device. **/ -STATIC s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw) +s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw) { u32 flup; - s32 status = IXGBE_ERR_EEPROM; + s32 status; DEBUGFUNC("ixgbe_update_flash_X540"); @@ -669,7 +687,7 @@ STATIC s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw) else DEBUGOUT("Flash update time out\n"); - if (hw->revision_id == 0) { + if (hw->mac.type == ixgbe_mac_X540 && hw->revision_id == 0) { flup = IXGBE_READ_REG(hw, IXGBE_EEC); if (flup & IXGBE_EEC_SEC1VAL) { @@ -710,6 +728,11 @@ STATIC s32 ixgbe_poll_flash_update_done_X540(struct ixgbe_hw *hw) } usec_delay(5); } + + if (i == IXGBE_FLUDONE_ATTEMPTS) + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Flash update status polling timed out"); + return status; } @@ -771,11 +794,13 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u16 mask) /* Failed to get SW only semaphore */ if (swmask == IXGBE_GSSR_SW_MNG_SM) { ret_val = IXGBE_ERR_SWFW_SYNC; + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Failed to get SW only semaphore"); goto out; } /* If the resource is not released by the FW/HW the SW can assume that - * the FW/HW malfunctions. In that case the SW should sets the SW bit(s) + * the FW/HW malfunctions. In that case the SW should set the SW bit(s) * of the requested resource(s) while ignoring the corresponding FW/HW * bits in the SW_FW_SYNC register. */ @@ -791,6 +816,17 @@ s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u16 mask) ixgbe_release_swfw_sync_semaphore(hw); msec_delay(5); } + /* If the resource is not released by other SW the SW can assume that + * the other SW malfunctions. 
In that case the SW should clear all SW + * flags that it does not own and then repeat the whole process once + * again. + */ + else if (swfw_sync & swmask) { + ixgbe_release_swfw_sync_X540(hw, IXGBE_GSSR_EEP_SM | + IXGBE_GSSR_PHY0_SM | IXGBE_GSSR_PHY1_SM | + IXGBE_GSSR_MAC_CSR_SM); + ret_val = IXGBE_ERR_SWFW_SYNC; + } out: return ret_val; @@ -865,14 +901,15 @@ STATIC s32 ixgbe_get_swfw_sync_semaphore(struct ixgbe_hw *hw) * was not granted because we don't have access to the EEPROM */ if (i >= timeout) { - DEBUGOUT("REGSMP Software NVM semaphore not " - "granted.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "REGSMP Software NVM semaphore not granted.\n"); ixgbe_release_swfw_sync_semaphore(hw); status = IXGBE_ERR_EEPROM; } } else { - DEBUGOUT("Software semaphore SMBI between device drivers " - "not granted.\n"); + ERROR_REPORT1(IXGBE_ERROR_POLLING, + "Software semaphore SMBI between device drivers " + "not granted.\n"); } return status; diff --git a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.h b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.h index 37d2b82..4742f41 100644 --- a/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.h +++ b/lib/librte_pmd_ixgbe/ixgbe/ixgbe_x540.h @@ -1,6 +1,6 @@ /******************************************************************************* -Copyright (c) 2001-2012, Intel Corporation +Copyright (c) 2001-2014, Intel Corporation All rights reserved. Redistribution and use in source and binary forms, with or without @@ -35,13 +35,13 @@ POSSIBILITY OF SUCH DAMAGE. #define _IXGBE_X540_H_ #include "ixgbe_type.h" -#ident "$Id: ixgbe_x540.h,v 1.6 2012/08/09 20:43:58 cmwyborn Exp $" +#ident "$Id: ixgbe_x540.h,v 1.11 2013/10/11 08:36:03 jtkirshe Exp $" s32 ixgbe_get_link_capabilities_X540(struct ixgbe_hw *hw, ixgbe_link_speed *speed, bool *autoneg); enum ixgbe_media_type ixgbe_get_media_type_X540(struct ixgbe_hw *hw); s32 ixgbe_setup_mac_link_X540(struct ixgbe_hw *hw, ixgbe_link_speed speed, - bool autoneg, bool link_up_wait_to_complete); + bool link_up_wait_to_complete); s32 ixgbe_reset_hw_X540(struct ixgbe_hw *hw); s32 ixgbe_start_hw_X540(struct ixgbe_hw *hw); u32 ixgbe_get_supported_physical_layer_X540(struct ixgbe_hw *hw); @@ -56,6 +56,7 @@ s32 ixgbe_write_eewr_buffer_X540(struct ixgbe_hw *hw, u16 offset, u16 words, s32 ixgbe_update_eeprom_checksum_X540(struct ixgbe_hw *hw); s32 ixgbe_validate_eeprom_checksum_X540(struct ixgbe_hw *hw, u16 *checksum_val); u16 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw); +s32 ixgbe_update_flash_X540(struct ixgbe_hw *hw); s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u16 mask); void ixgbe_release_swfw_sync_X540(struct ixgbe_hw *hw, u16 mask); @@ -63,3 +64,4 @@ void ixgbe_release_swfw_sync_X540(struct ixgbe_hw *hw, u16 mask); s32 ixgbe_blink_led_start_X540(struct ixgbe_hw *hw, u32 index); s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index); #endif /* _IXGBE_X540_H_ */ + diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c index a5a7f9a..89ab4aa 100644 --- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c +++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c @@ -1376,7 +1376,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev) goto error; } - err = ixgbe_setup_link(hw, speed, negotiate, link_up); + err = ixgbe_setup_link(hw, speed, link_up); if (err) goto error; -- 1.7.0.7
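A note on the new error reporting used throughout this shared-code drop: many DEBUGOUT*() calls are replaced with ERROR_REPORT1()/ERROR_REPORT2(), which take one of the error categories documented in the comment block added to ixgbe_type.h. The macros themselves come from the OS-dependent layer rather than from this patch. The following is a minimal, hypothetical sketch of such a mapping, written only to illustrate the call shape; the enum values are placeholders and do not reflect the actual ixgbe_osdep.h definitions.

/*
 * Hypothetical ERROR_REPORT* mapping, for illustration only.  The real
 * definitions belong to the OS-dependent layer; the category values below
 * are placeholders chosen so this sketch compiles on its own.
 */
#include <stdio.h>

enum {
	IXGBE_ERROR_INVALID_STATE,   /* serious, unexpected failure state */
	IXGBE_ERROR_POLLING,         /* timeout or failure to obtain a lock */
	IXGBE_ERROR_CAUTION,         /* e.g. over-temperature warnings */
	IXGBE_ERROR_SOFTWARE,        /* software state prevented an action */
	IXGBE_ERROR_ARGUMENT,        /* bad or invalid argument */
	IXGBE_ERROR_UNSUPPORTED      /* unsupported configuration or module */
};

#define ERROR_REPORT1(cat, fmt) \
	fprintf(stderr, "ixgbe error (%d): " fmt, (cat))
#define ERROR_REPORT2(cat, fmt, arg) \
	fprintf(stderr, "ixgbe error (%d): " fmt, (cat), (arg))

/* Call shape as used in the patch, e.g. in ixgbe_get_i2c_ack():
 *	ERROR_REPORT1(IXGBE_ERROR_POLLING, "I2C ack was not received.\n");
 */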
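A recurring fix in the ixgbe_x540.c hunks above is lock discipline: ixgbe_read_eerd_X540(), ixgbe_read_eerd_buffer_X540(), ixgbe_write_eewr_X540() and the checksum helpers now release the EEPROM software/firmware semaphore only on the path that actually acquired it, instead of unconditionally after the if/else. A compressed sketch of the resulting pattern follows; the wrapper name is made up for illustration, while the ixgbe identifiers are the ones used in the diff.

#include "ixgbe_type.h"
#include "ixgbe_common.h"	/* assumed to declare ixgbe_read_eerd_generic() */

/* read_eeprom_word_locked() is a hypothetical helper, not part of the patch;
 * it only shows the acquire/act/release shape the X540 EEPROM helpers adopt. */
static s32 read_eeprom_word_locked(struct ixgbe_hw *hw, u16 offset, u16 *data)
{
	s32 status;

	if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_EEP_SM) != IXGBE_SUCCESS)
		return IXGBE_ERR_SWFW_SYNC;	/* never acquired, nothing to release */

	status = ixgbe_read_eerd_generic(hw, offset, data);

	/* released only because the acquire above succeeded */
	hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_EEP_SM);

	return status;
}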
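The API change that reaches outside the shared code is the removal of the separate 'autoneg' argument from the link-setup paths (mac.ops.setup_link, phy.ops.setup_link_speed, ixgbe_setup_mac_link_X540(), ixgbe_setup_mac_link_vf()), which is why ixgbe_ethdev.c now calls ixgbe_setup_link(hw, speed, link_up) with three arguments. Below is a minimal sketch of a caller written against the new prototype; the helper name is hypothetical and ixgbe_api.h is assumed to declare the updated ixgbe_setup_link().

#include "ixgbe_api.h"

/* bring_up_link() is a hypothetical wrapper showing the new call shape after
 * the 'autoneg' bool was dropped from the shared-code link setup. */
static s32 bring_up_link(struct ixgbe_hw *hw, ixgbe_link_speed speed,
			 bool wait_to_complete)
{
	/* before this drop: ixgbe_setup_link(hw, speed, autoneg, wait) */
	return ixgbe_setup_link(hw, speed, wait_to_complete);
}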